In this article, we will explain how to check the Apache Kafka version on a Kafka cluster using CLI commands.
Basically, in a Hadoop environment we have a lot of services for processing and transferring data from source to destination, for example Spark, Kafka, Sqoop, and Hive.
How to check the Hadoop version in the Linux CLI:

hadoop version
How to check the Apache Spark version using the below command:

spark-submit --version
How to check the Hive version using the below command:

hive --version
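The three checks above can also be combined into one small script. This is a minimal sketch (the `check_versions` helper name is ours) that simply skips any tool that is not installed on the node:

```shell
# Print the version of each Hadoop-ecosystem tool that is on the PATH.
# Tools that are not installed are silently skipped.
check_versions() {
  command -v hadoop >/dev/null 2>&1 && hadoop version | head -n 1
  command -v hive >/dev/null 2>&1 && hive --version 2>/dev/null | head -n 1
  # spark-submit prints its banner to stderr, so redirect it to stdout
  command -v spark-submit >/dev/null 2>&1 && spark-submit --version 2>&1 | grep -m 1 -i version
  return 0  # do not fail when a tool is missing
}

check_versions
```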
Checking the Kafka version is different from the other services in the Big Data environment.
First, go to the Confluent Kafka bin path and execute the version command from there.
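For example, something like the sketch below. The /opt/confluent/bin path is an assumption — adjust it to your installation — and note that the kafka-topics --version flag only exists from Kafka 2.0 onwards, so older releases such as 0.11.x need the grep method instead:

```shell
# Assumed Confluent install location -- adjust to your environment.
KAFKA_BIN=/opt/confluent/bin

# kafka-topics gained a --version flag in Kafka 2.0; older releases
# (for example 0.11.x) do not support it.
kafka_cli_version() {
  if [ -x "$KAFKA_BIN/kafka-topics" ]; then
    "$KAFKA_BIN/kafka-topics" --version
  elif command -v kafka-topics >/dev/null 2>&1; then
    kafka-topics --version
  else
    echo "kafka-topics not found in $KAFKA_BIN or on the PATH"
  fi
}

kafka_cli_version
```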
In this method we use the grep command to find the Kafka version:
ps -ef | grep kafka
It will display all running Kafka processes in the CLI along with the Kafka lib path. In that path you will see a jar file such as:

kafka-clients-0.11.0.3.jar

Here the Kafka client version is the Kafka version: 0.11.0.3

Sometimes it shows up like kafka_2.11-0.11.0.3.jar; here the Kafka version is still 0.11.0.3 (the 2.11 prefix is the Scala version the broker was built with).
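Parsing the version out of the jar name can also be scripted. This is a small sketch (the `jar_kafka_version` helper name and the example path are ours) that handles both the kafka-clients-&lt;version&gt;.jar and kafka_&lt;scala&gt;-&lt;version&gt;.jar naming schemes:

```shell
# Strip the jar name prefix and .jar suffix, leaving only the Kafka version:
#   kafka-clients-0.11.0.3.jar -> 0.11.0.3
#   kafka_2.11-0.11.0.3.jar    -> 0.11.0.3 (2.11 is the Scala version)
jar_kafka_version() {
  basename "$1" .jar | sed -E 's/^kafka(-clients-|_[0-9.]+-)//'
}

# Example path -- substitute the lib path shown by ps -ef | grep kafka.
jar_kafka_version /opt/kafka/libs/kafka-clients-0.11.0.3.jar   # prints 0.11.0.3
```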
The above methods are very simple for all Big Data / Hadoop users. Finding the Hadoop, Hive, and Spark service versions is straightforward. Compared to those services, finding the Apache / Confluent Kafka version is a little more difficult, but here we have provided simple steps using different methods.