What is Spark Executor Cores | Executor Memory | Number of Executors | Performance Tuning in Spark




What is Spark Executor Cores?

The Spark executor cores property controls the number of simultaneous tasks an executor can run. When submitting a Spark application you can pass the flag "--executor-cores 5", which means that each executor can run a maximum of five tasks at the same time.
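As a minimal sketch (the application name and value are only illustrative assumptions), the same setting can also be applied programmatically through the spark.executor.cores property, which is equivalent to passing --executor-cores 5 to spark-submit:

import org.apache.spark.sql.SparkSession

// Each executor will run at most 5 tasks concurrently.
val spark = SparkSession.builder()
  .appName("ExecutorCoresDemo")
  .config("spark.executor.cores", "5")
  .getOrCreate()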

What is Executor Memory?

In a Spark application, executor memory is the heap size of each executor. It can be set with the --executor-memory flag or the spark.executor.memory property in the Spark defaults configuration file (spark-defaults.conf). This property affects the amount of data Spark can cache, as well as the maximum sizes of the shuffle data structures used for grouping, aggregations, and joins.
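A minimal sketch, assuming an illustrative 4 GB heap per executor (the value is an assumption, not a recommendation); this is equivalent to spark-submit --executor-memory 4g or to setting spark.executor.memory=4g in spark-defaults.conf:

import org.apache.spark.sql.SparkSession

// Heap size allocated to each executor JVM.
val spark = SparkSession.builder()
  .appName("ExecutorMemoryDemo")
  .config("spark.executor.memory", "4g")
  .getOrCreate()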



What is the Number of Executors?

The number of executors is the number of executor processes requested for a Spark application. It is controlled by the --num-executors command-line flag or the spark.executor.instances configuration property.
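A minimal sketch, assuming a YARN deployment and an illustrative count of four executors (both assumptions); this is equivalent to spark-submit --num-executors 4:

import org.apache.spark.sql.SparkSession

// Request four executor processes for this application.
val spark = SparkSession.builder()
  .appName("NumExecutorsDemo")
  .config("spark.executor.instances", "4")
  .getOrCreate()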

What is Performance Tuning in Spark?

Running executors with too much memory often results in excessive garbage collection delays; 64 GB is a good upper limit for a single executor.

The HDFS client has difficulty with many simultaneous threads. A rough guess is that at most five tasks per executor can achieve full write throughput, so it is good to keep the number of cores per executor below that number.

For example: --num-executors 6 --executor-cores 15 --executor-memory 63G is not a good configuration, because 15 cores per executor can lead to poor HDFS I/O throughput.
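As an illustrative sketch only (the numbers are assumptions, not a recommendation for any specific cluster), a configuration that follows the guidelines above keeps cores at five per executor and memory well under the 64 GB limit:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("TunedExecutorsDemo")
  .config("spark.executor.instances", "6")
  .config("spark.executor.cores", "5")     // at most five concurrent tasks per executor
  .config("spark.executor.memory", "20g")  // well below the 64 GB garbage-collection limit
  .getOrCreate()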




Summary: Here we explained Spark executor cores, executor memory, and the number of executors, along with examples of Spark performance tuning.