[Solved] DiskErrorException: Directory is not writable: /data/hadoop/hdfs/data | Big Data | Hadoop | Error

In this article, we will explain how to resolve the DiskErrorException: Directory is not writable: /data/hadoop/hdfs/data.



Error: DiskErrorException: Directory is not writable: /data/hadoop/hdfs/data in a Cloudera or Hortonworks Hadoop cluster

[DISK]file:/data/hadoop/hdfs/data/
org.apache.hadoop.util.DiskChecker$DiskErrorException: Directory is not writable: /data/hadoop/hdfs/data
at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:125)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:99)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:128)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:44)
at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2020-12-16 12:41:34,186 ERROR datanode.DataNode (DataNode.java:secureMain(2692)) - Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 19, volumes configured: 20, volumes failed: 1, volume failures tolerated: 0
at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:220)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2493)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2540)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2685)
at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
2020-12-16 12:41:34,187 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2020-12-16 12:41:34,239 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
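
Before applying the solutions below, it can help to confirm on the affected DataNode that the directory really is not writable for the HDFS service user (assumed here to be hdfs; adjust to your cluster). For example:

ls -ld /data/hadoop/hdfs/data
df -h /data/hadoop/hdfs/data
sudo -u hdfs touch /data/hadoop/hdfs/data/.write_test && sudo -u hdfs rm /data/hadoop/hdfs/data/.write_test

If the touch command fails, this is the volume reported in the log above.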

Solution 1:

If you are getting this type of error in a Cloudera or Hortonworks cluster, follow the steps below.

Step 1. Log in to the Ambari Web UI, then choose the HDFS (Hadoop Distributed File System) service.

Step 2. After that, click on “Configs”, then use the filter box to search for the property.

Step 3. Set “dfs.datanode.failed.volumes.tolerated” to 1, as shown in the config sketch after these steps.

Step 4. Once the configuration change is saved, restart the HDFS services.
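
For reference, this setting maps to the dfs.datanode.failed.volumes.tolerated property in hdfs-site.xml; a minimal sketch of the entry with the value from Step 3 (on Ambari-managed clusters, make the change through the UI rather than editing the file by hand):

<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>

Keep in mind that this only allows the DataNode to start with one failed volume; the underlying disk problem still needs to be fixed (see Solution 2).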

If you still get the same error, proceed to the solution below.

Solution 2:

If the above resolution does not work, it is likely that the disk has crashed or been unmounted at the infrastructure level.
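
A quick way to confirm this on the DataNode host (the mount point may be /data or higher up; adjust the path accordingly):

mount | grep /data
df -h /data/hadoop/hdfs/data
dmesg | tail -50

A missing mount entry, a read-only file system, or I/O errors in the kernel log all point to a failed or unmounted disk.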

Step 1: First, take a backup of that drive, then run the command below.

fsck -y

This command runs the file system checker on the disk and automatically repairs any file system inconsistencies it finds (the -y option answers “yes” to all repair prompts).
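
Note that fsck should only be run against an unmounted file system. A sketch, assuming the failed volume is the hypothetical device /dev/sdb1 (replace it with the actual device shown by mount or df):

umount /data/hadoop/hdfs/data    # unmount the failed volume first (use its real mount point)
fsck -y /dev/sdb1                # check the device and answer "yes" to all repair prompts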

Step 2: After that, reboot the machine.
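
After the reboot, you can verify that the DataNode has rejoined the cluster, for example with (run as the HDFS superuser, assumed here to be hdfs):

sudo -u hdfs hdfs dfsadmin -report

The report should show the node as live again.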

Summary: This type of error can occur in any Big Data distribution; work through the above resolutions in order. First, go with Solution 1. If the issue is still not resolved, go to Solution 2 for a permanent fix. If you are still getting the same error, leave a comment and we will provide another resolution in the comment section for Cloudera, Hortonworks, or MapR distributions in the Big Data environment, for all Hadoop admins.