Disk usage Linux commands for Hadoop in a Big Data environment

In this article, we will explain the disk usage commands a Hadoop admin runs for daily health checks at the cluster level.




Why are disk usage commands so important for this role?

Because admins need to check the free disk space on each machine daily, and the cluster raises alerts once usage crosses the configured threshold. These disk space alerts arrive in our e-mail inbox.
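A threshold alert of this kind can be sketched with a small script like the one below. The 80% threshold, the excluded filesystem types, and the idea of piping the warning to mail(1) are all assumptions for illustration, not details from any specific monitoring setup:

```shell
#!/bin/sh
# Sketch: warn when any local filesystem exceeds a usage threshold.
# THRESHOLD is an assumed value; adjust to your cluster's alerting policy.
THRESHOLD=80

# -P gives stable POSIX output; -x skips in-memory pseudo-filesystems.
df -P -x tmpfs -x devtmpfs | tail -n +2 | while read -r fs blocks used avail pct mount; do
  usage=${pct%\%}                      # strip the trailing % sign
  if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: $mount is at ${pct} (filesystem $fs)"
    # e.g. pipe this message into mail(1) to produce the e-mail alert
  fi
done
```

Run from cron, this is roughly how disk space alerts end up in an admin's inbox.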

Disk usage commands:

Every company or corporate branch runs machines with attached disks. If a machine fills up with unnecessary data, we need to clear it. Here are some useful disk-related commands.

df

It stands for "disk free" and reports the disk space usage of all mounted file systems.
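For example (a minimal sketch; the mount points are arbitrary):

```shell
# Report disk space for all mounted filesystems (1K blocks by default on Linux).
df

# Restrict the report to the filesystem that holds a given path:
df /

# -P gives stable POSIX columns, convenient for scripting;
# awk picks the "Available" column (in 1K blocks) from the data row.
df -P / | awk 'NR==2 {print $4}'
```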

df -hT

Shows disk space statistics in "human-readable" format (-h), so sizes appear in kilobytes, megabytes, or gigabytes, and adds the file system type column (-T). For example, it can show the /home file system in human-readable units.
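A quick sketch of the combined flags:

```shell
# -h: human-readable sizes (K, M, G); -T: add the filesystem type column.
# Output columns: Filesystem, Type, Size, Used, Avail, Use%, Mounted on.
df -hT /home
```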




df -k

It reports all file system information in kilobyte (1K) blocks.
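For example (the exclusion of in-memory pseudo-filesystems is an assumption to keep the total meaningful):

```shell
# Force 1K-block units regardless of environment settings.
df -k

# Sum the "Used" column across local filesystems, in kilobytes:
df -kP -x tmpfs -x devtmpfs | awk 'NR>1 {sum += $3} END {print sum " KB used"}'
```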

df -i

It shows the number of used inodes and the inode usage percentage for each file system. A file system can run out of inodes (too many small files) while block space still looks free, so this check matters on its own.
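A short sketch, including a scripted read of the inode usage percentage:

```shell
# -i: report inode usage instead of block usage.
# Output columns: Filesystem, Inodes, IUsed, IFree, IUse%, Mounted on.
df -i /

# With -P for stable columns, awk picks the IUse% field from the data row:
df -iP / | awk 'NR==2 {print $5}'
```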






The commands above are very useful for Linux and cloud admins. When a disk-related alert comes in, first check the disk space, free whatever is unnecessary, and then verify that the alert has cleared.
