I found that AWS Glue sets up executor instances with a memory limit of 5 GB (--conf spark.executor.memory=5g), and sometimes, on big datasets, the job fails with java.lang.OutOfMemoryError. The same goes for the driver instance (--conf spark.driver.memory=5g).
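
For reference, here is a minimal sketch (not from the original job script) of how these effective values can be read from inside a running Glue job; both property keys are standard Spark settings:

```python
# Minimal sketch: read the effective memory settings from the Glue job's
# own Spark context. Both keys are standard Spark properties.
from pyspark.context import SparkContext

sc = SparkContext.getOrCreate()
print(sc.getConf().get("spark.executor.memory", "not set"))  # comes back as "5g" on my jobs
print(sc.getConf().get("spark.driver.memory", "not set"))    # also "5g"
```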
Is there any option to increase this value?
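
The only workaround I can think of is passing the setting through the job's DefaultArguments when creating it, along these lines (a hypothetical sketch: the job name, role, and script path are placeholders, and whether Glue actually honors a user-supplied --conf is exactly what I'm asking):

```python
# Hypothetical sketch: create a Glue job that tries to override the executor
# memory via DefaultArguments. All names and paths below are placeholders.
import boto3

glue = boto3.client("glue")
glue.create_job(
    Name="my-etl-job",                         # placeholder job name
    Role="MyGlueServiceRole",                  # placeholder IAM role
    Command={"Name": "glueetl", "ScriptLocation": "s3://my-bucket/script.py"},
    DefaultArguments={"--conf": "spark.executor.memory=8g"},  # attempted override
)
```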