How to set up Elastic Metricbeat with Kafka in Big Data | Kafka | Metricbeat | Install

In this article, we will explain how to set up Elastic Metricbeat with a Kafka configuration on the RHEL (Red Hat Enterprise Linux) operating system.

How to install Elastic Metricbeat with Kafka on RHEL

Here we provide simple steps for configuring Elastic Metricbeat with Kafka on Red Hat OS. Metricbeat is a lightweight shipper that collects metrics from the operating system and from services running on a server; its CLI also handles common tasks such as testing configurations and loading dashboards.

Step 1: Go to the yum repos directory.

cd /etc/yum.repos.d

Step 2: List the files to check whether a Metricbeat repo file already exists in the directory.

ls -ltr

Step 3: Create a file with a .repo extension, such as metricbeat.repo, open it with vi, and add the repository definition below (the standard Elastic 7.x yum repository):

vi metricbeat.repo

[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Paste the above content into the metricbeat.repo file, then save and exit vi with :wq


Then install Metricbeat using the below command:

yum install metricbeat

Check the metricbeat.yml file to verify that the configuration is correct:

vi /etc/metricbeat/metricbeat.yml
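The most important part to verify is the output section. Below is a minimal sketch of metricbeat.yml, assuming Elasticsearch is reachable on localhost at its default port 9200 (adjust the hosts entry to match your environment):

```yaml
# metricbeat.yml (sketch) -- assumes a local Elasticsearch on port 9200
metricbeat.config.modules:
  # Load module configurations from the modules.d directory
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.elasticsearch:
  # Replace with your Elasticsearch host(s) if not running locally
  hosts: ["localhost:9200"]
```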

Then enable the Metricbeat Kafka module using the below command:

metricbeat modules enable kafka

After that, go to the Metricbeat modules.d directory and edit the relevant YAML configuration files, such as kafka.yml.

cd /etc/metricbeat/modules.d/
vi kafka.yml

After opening the kafka.yml file, change the default hosts entry (localhost:9092) to your Kafka broker's hostname/IP address.
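After the edit, the kafka.yml module file might look like the sketch below, where kafka-broker1:9092 is a placeholder broker address (substitute your own hostname/IP):

```yaml
# modules.d/kafka.yml (sketch) -- replace the host with your broker address
- module: kafka
  metricsets:
    - partition
    - consumergroup
  period: 10s
  # Default is ["localhost:9092"]; point this at your Kafka broker(s)
  hosts: ["kafka-broker1:9092"]
```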

Then enable the Metricbeat service so it starts on boot:
systemctl enable metricbeat

Once the Metricbeat service is enabled, systemd prints a message like the one below:

Created symlink from /etc/systemd/system/multi-user.target.wants/metricbeat.service to /usr/lib/systemd/system/metricbeat.service.

Once the above steps have completed successfully, check the Metricbeat version:

metricbeat version

How to check metricbeat status in RHEL?

Start the service if it is not already running, then check its status:

systemctl start metricbeat
systemctl status metricbeat

Summary: The above steps make it simple to set up Elastic Metricbeat against a Kafka cluster in a Big Data environment. Metricbeat is commonly used to collect service metrics and load dashboards for users. Here we configured it with Kafka to monitor real-time/streaming data end to end. In this article we have covered the commands from start to finish for a Big Data cluster.
