In this article, we explain how to nullify (empty) large data files or Solr spool log files without deleting the files themselves in an HDFS (Hadoop Distributed File System) environment.
In a Big Data environment we handle large data sets along with a huge volume of log files, and these large log files can impact the entire cluster. For example, if a Solr spool log file consumes most of the disk space on a particular node, the Hadoop services on that host are affected, and eventually the cluster can go down.
How to nullify the Solr log files:
Step 1 : Log in to the Solr server host as the root user.
Step 2 : Find the location of the file, change to that directory, and list the files as in the snapshot below:
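Step 2 can be sketched with the commands below. The real Solr spool log location varies by installation, so this self-contained demo creates a temporary directory with a dummy 100 KB spool file as a stand-in:

```shell
# Stand-in for the real Solr log directory (the actual path varies by install).
LOG_DIR=$(mktemp -d)
dd if=/dev/zero of="$LOG_DIR/solr_spool.log" bs=1024 count=100 2>/dev/null
# Show the largest files under the directory, biggest first.
du -ah "$LOG_DIR" | sort -rh | head -n 10
# List the files with human-readable sizes.
ls -lh "$LOG_DIR"
```

On a real host, replace `$LOG_DIR` with the actual Solr log directory to spot which file is consuming the space.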
Step 3 : Use the “>” redirection operator to empty the file; the file itself remains in its present location.
Step 4 : Check again: the data has been deleted and the space reclaimed, but the file is still present.
Step 5 : This simple resolution fixes disk-space issues caused by Solr log files in a Big Data environment without deleting any files.
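A minimal sketch of the truncation step, using a temporary file as a stand-in for the real spool log so it can be run safely:

```shell
# Create a stand-in spool log with some content.
LOG_FILE=$(mktemp)
echo "old spool entries" > "$LOG_FILE"
# Empty the file in place. ":" is a no-op command whose (empty) output is
# redirected, so the file is truncated to zero bytes while the file itself,
# and any file descriptors a running service holds on it, stay intact.
: > "$LOG_FILE"
# The file still exists, now with size 0.
ls -lh "$LOG_FILE"
```

`truncate -s 0 file` (GNU coreutils) or `cat /dev/null > file` achieve the same result.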
Summary: In a Big Data environment the Hadoop services play a major role in handling large data files, and we use HDFS for storage. Here we are using the Cloudera distribution for all Hadoop services on the Azure cloud platform. We receive huge amounts of data daily, so the log files automatically consume disk space and impact services at the host and cluster level. With the steps above we simply deleted the data, not the file. Deleting the file itself would impact Solr or other services, so we just empty it instead. Do not delete the file using remove commands; that can lead to big escalations. At Commandstech we have provided these simple steps along with snapshots. This approach simply clears the space and reduces disk utilization. The above steps apply not only to Solr but to any service.
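The warning about remove commands can be illustrated: while a service still holds the log file open, `rm` deletes only the file name, not the space on disk. A small self-contained sketch of that behavior:

```shell
# Sketch: removing an open log file does not free its space.
TMP=$(mktemp)
exec 3>"$TMP"               # simulate a service holding the log file open
rm "$TMP"                   # the name is gone, but the inode lives on
echo "still logging" >&3    # the "service" keeps writing to the deleted file
# The space is released only when the descriptor is closed; truncating
# with ">" would have reclaimed the space immediately and kept the file.
exec 3>&-
ls "$TMP" 2>/dev/null || echo "name removed, yet the write above succeeded"
```

This is why truncation with `>` is the safer fix: the service never loses its log file, and the space is freed at once.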