[Resolved] Dynamic partitioning is not working in Spark and Hive

In this article, we explain how to resolve the "dynamic partitioning is not working" error for Spark with Hive queries in a production cluster.




Error:

When users run a Spark query that writes to a partitioned Hive table, the job fails with:

Caused by: org.apache.spark.SparkException: Dynamic partition strict mode requires at least one static partition column


Solution:




Here is a simple solution for the Spark dynamic partitioning error when using the Hive engine. Set the following two properties:

spark.hadoop.hive.exec.dynamic.partition=true

spark.hadoop.hive.exec.dynamic.partition.mode=nonstrict
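Both properties can also be passed when the job is launched. A minimal sketch with spark-submit (the application file name here is only a placeholder):

spark-submit \
  --conf spark.hadoop.hive.exec.dynamic.partition=true \
  --conf spark.hadoop.hive.exec.dynamic.partition.mode=nonstrict \
  my_job.py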

The above solution worked for me. I was running a query in the Spark SQL (Structured Query Language) shell when the dynamic partitioning error appeared at the prompt. The same type of issue occurs in the Hive service as well.

Summary: In a Big Data cluster, we receive large volumes of data from source systems, so performance tuning such as dynamic partitioning is needed in Spark or Hive. While loading that data, I hit the dynamic partitioning error at the Spark SQL prompt. After executing the above two properties at the prompt, the query worked. The same properties apply at the Hive CLI: earlier, when we used Hive instead of Spark for the same work, we hit the same error there, and setting these properties let us run queries over the large datasets.
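For the session-level fix mentioned above, the same two settings can be entered at the spark-sql or Hive prompt. The spark.hadoop prefix is dropped in that case, since these are native Hive properties:

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;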




The first property enables dynamic partitioning. The second changes the dynamic partition mode to nonstrict, so an insert no longer requires at least one static partition column; only then does the query work.
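As an illustration, an insert like the following supplies no static partition value, so it fails in strict mode and succeeds once the two properties are set (the table and column names here are hypothetical; the partition column must come last in the SELECT):

INSERT OVERWRITE TABLE sales PARTITION (country)
SELECT order_id, amount, country FROM staging_sales;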