Spark Performance Tuning Error in the Hadoop Cluster: Troubleshooting OutOfMemoryError





While running a Spark program, we get the below error in the Hadoop cluster.

Spark Performance Tuning Errors:

java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
spark.sql.autoBroadcastJoinThreshold should be long, but was ...
    at org.apache.spark.internal.config.ConfigHelpers

This threshold controls the automatic broadcast behaviour for joins performed with the Dataset join operator.
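The error usually means a non-numeric value was supplied for the threshold. A minimal sketch of passing it as a plain byte count on spark-submit (10485760 here is just an example value; -1 disables automatic broadcast joins):

--conf "spark.sql.autoBroadcastJoinThreshold=10485760"
or, to disable automatic broadcast joins:
--conf "spark.sql.autoBroadcastJoinThreshold=-1"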

After that, we provided the Spark configuration values as below:

Spark values:

'spark_application_master_core', '4'
'spark_core_per_executor', '5'
'spark_executor_instances', '80'
'spark_executor_memory', '4g'
'spark_executor_memory_overhead', '2048'
'spark_application_master_memory', '2g'
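For reference, assuming these tool-specific names map onto the standard Spark properties, the equivalent spark-submit options would look roughly like the sketch below (the script name is only a placeholder):

spark-submit --master yarn \
  --conf "spark.yarn.am.cores=4" \
  --conf "spark.yarn.am.memory=2g" \
  --executor-cores 5 \
  --num-executors 80 \
  --executor-memory 4g \
  --conf "spark.yarn.executor.memoryOverhead=2048" \
  your_spark_job.py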

Even with these values, Spark fails with a memory issue like the below error:

java.lang.OutOfMemoryError: GC overhead limit exceeded





If we get this type of error in a Spark program, simply increase the Spark executor memory. If the job is running on YARN, first kill it using the YARN commands:

yarn application -list
yarn application -kill application_id
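For example, to list only the running applications and then kill one of them (the application id below is just a placeholder):

yarn application -list -appStates RUNNING
yarn application -kill application_1234567890123_0001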

If the job is launched from an Extract, Transform, Load (ETL) tool such as Informatica or Talend, simply kill it there.
Then set the JVM-related parameters in the ETL tool, and after that increase the Spark executor memory using the below parameters:

--conf "spark.executor.memory = 12g"
--conf "spark.yarn.executor.memoryoverhead = 4096"
or
-- "executor - memory = 12g"

After the executor memory is increased as above, the Spark job executes properly; otherwise it will keep failing. In the Cloudera/MapR distributions we sometimes face these types of errors in the cluster. If it is still facing the same issue, simply restart the Spark service on the Hadoop cluster.




Summary: Spark is an in-memory, cache-based engine widely used for Big Data processing nowadays. Sometimes we get memory issues because unexpectedly large volumes of data (millions of records) arrive for processing. Here we provided a basic solution to resolve memory overhead issues in the cluster: as the first step, simply increase the executor memory. If the issue persists after setting the memory, restart the Spark engine on the edge node server in the Cloudera/MapR distribution. At the same time, check the jobs in ETL tools like Informatica/Talend, because sometimes those failing jobs are caused by environment issues.