[Resolved] Unable to create directory /tmp/resources in Hadoop and Azure | Big Data | Azure | Error

This article explains how to resolve the "Unable to create directory /tmp/resources" error on a Hadoop or Azure cluster in a big data environment.



Error:

I was unable to bring up the Hadoop cluster and its services due to the error below. The affected node was completely down, and all services on it had stopped.

ErrorCode=UserErrorFailedToConnectOdbcSource,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'Failed to open new session: java.lang.RuntimeException: java.io.IOException: Unable to create directory /tmp/resources'. ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'Failed to open new session: java.lang.RuntimeException: java.io.IOException: Unable to create directory /tmp/resources'.,Source=Microsoft.DataTransfer.Runtime.GenericOdbcConnectors,''Type=System.Data.Odbc.OdbcException,Message=ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code:

After that, the following services kept going down:
1. Data node
2. Hive
3. Oozie
4. Ambari Metrics
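Before restarting anything, it helps to confirm what Ambari itself reports for each service. A minimal sketch using the Ambari REST API follows; the host name, cluster name, and credentials are placeholders you must replace with your own:

```shell
# Hypothetical helper: query a service's state from the Ambari REST API.
# Ambari listens on port 8080 by default; the X-Requested-By header is
# required by Ambari for write calls and harmless on reads.
ambari_service_state() {
  host="$1"; cluster="$2"; service="$3"; auth="$4"   # auth as user:password
  curl -s -u "$auth" -H 'X-Requested-By: ambari' \
    "http://$host:8080/api/v1/clusters/$cluster/services/$service?fields=ServiceInfo/state"
}
```

For example, `ambari_service_state ambari-host.example.com mycluster HIVE admin:admin` returns JSON whose `ServiceInfo/state` field is `STARTED` for a running service or `INSTALLED` for a stopped one.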




Resolution:

I resolved the issue with the following steps:

  1. First, logged in to the Ambari cluster with admin privileges.
  2. Checked all services on the web UI; all of them were shown as down.
  3. Put the services into maintenance mode and restarted them, but that did not work; they went down again.
  4. Opened a CLI session to the node through PuTTY using its IP address.
  5. Checked disk space on the node with: df -hT
  6. Found the /tmp directory full of unnecessary files.
  7. Deleted those files, then re-ran all services in the Ambari/Hadoop cluster.
  8. Once that was done, checked disk space again with df -hT.
  9. Restarted all Hive and Spark jobs.
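The disk-space check and /tmp cleanup described above can be sketched as a small shell helper. The helper name and the seven-day age threshold are assumptions; always review what you are deleting from /tmp first:

```shell
# Hypothetical helper mirroring the cleanup steps above: check disk usage,
# list the biggest entries, delete only old regular files, then re-check.
cleanup_tmp() {
  dir="$1"
  age="${2:-7}"            # only remove files untouched for this many days (assumption)
  df -hT "$dir"            # disk usage before cleanup, human-readable with FS type
  du -sh "$dir"/* 2>/dev/null | sort -rh | head -n 10   # largest offenders
  find "$dir" -xdev -type f -mtime +"$age" -delete      # drop only old regular files
  df -hT "$dir"            # confirm space was reclaimed
}
```

Run it as `cleanup_tmp /tmp` on the affected node. Avoid `rm -rf /tmp` itself: running services keep sockets, PID files, and scratch data there, and deleting the directory can make things worse.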

The above steps resolve the "Failed to open new session: java.lang.RuntimeException: java.io.IOException: Unable to create directory /tmp/resources" error in a Hadoop or Azure cluster working with large data sets.

Here we are using an Azure HDInsight cluster for data storage and processing, with HDFS as the storage layer. Hive and Spark are used for large-scale data processing of these huge data sets on the cluster.


