Hadoop Architecture vs MapR Architecture

In a Big Data environment, Hadoop plays a major role in storage and processing, while MapR is a distribution that provides services for the Hadoop ecosystem. The Hadoop and MapR architectures differ in their storage layers and naming conventions.

For example, in Hadoop the single storage unit is called a Block, but in MapR it is called a Container.

Hadoop vs MapR

Architecturally, the main differences between the two are:
The Hadoop architecture is based on the master node (NameNode) and slave node (DataNode) concept. It uses HDFS for storage and MapReduce for processing.

The MapR architecture takes a native approach: it can use SAN, NAS, or local-disk storage for the metadata and accesses the SAN directly, with no JVM layer in between. If a hypervisor or virtual machine crashes, data is written directly to the hard disk, which means that if a server goes down, the cluster re-syncs that data node's data. MapR has its own filesystem, the MapR File System, for storage, and uses MapReduce for processing in the background.

There is no NameNode concept in the MapR architecture. It relies entirely on the CLDB (Container Location Database), which holds a lot of information about the cluster. The CLDB is installed on one or more nodes for high availability.

This makes the failover mechanism very effective, with recovery times of just a few seconds.

In the Hadoop architecture, cluster size is specified in terms of master and slave machine nodes, but in a MapR cluster the default container size is 32 GB.


In Hadoop Architecture: (diagram)

In MapR Architecture: (diagram of the Container Location Database)

Summary: The MapR architecture builds on the same core architecture as Apache Hadoop and distributes all the core components. The Big Data ecosystem has several distributions, such as Cloudera and Hortonworks, but MapR is an enterprise edition. MapR is a stable distribution compared to the others, and it provides security for all services by default.

Hadoop Admin Vs Hadoop Developer

In the Hadoop environment, Hadoop Administrator and Hadoop Developer are the major roles. According to current IT market surveys, admins have more responsibilities and higher salaries than Hadoop developers. The roles can be differentiated by the points below:

Hadoop Developer:

1. In the Big Data environment, Hadoop developers play a major role. A developer is primarily responsible for coding; for a Hadoop developer, that means developing with tools such as:

A) Apache Spark – Scala, Python, Java, etc.

B) MapReduce – Java

C) Apache Hive – HiveQL (a SQL-like query language)

D) Apache Pig – Pig scripting language, etc.

2. Familiarity with ETL backgrounds and data loading/ingestion tools such as:



3. Some knowledge of the Hadoop admin side as well, such as the Linux environment and the basic commands used while developing and executing jobs.

4. Nowadays, Spark and Hive developers with strong experience are the most sought after and command large salary packages.

Hadoop Administrator:

1. Hadoop administration is a good and respected job in the IT industry. The admin is responsible for performing the operational tasks that keep the infrastructure and jobs running.

2. Strong knowledge of the Linux environment; setting up clusters, configuring security authentication such as Kerberos, and testing the HDFS environment.

3. Providing new users access to Hive, Spark, etc.; cluster maintenance such as adding (commissioning) and removing (decommissioning) nodes; and resolving errors such as memory issues and user access issues.

4. Must have knowledge of Big Data platforms such as:

A) Cloudera Manager

B) Hortonworks Data Platform

C) MapR

D) Pseudo-distributed and Single node cluster setup etc.

5. Reviewing and managing log files and setting up XML configuration files.

6. Currently a trending job with good career growth.

7. Compared to Hadoop developers, Hadoop admins currently command higher salary packages in the market.

Summary: In the Big Data environment, Hadoop offers valuable and trending jobs, with large packages for both Hadoop developers and Hadoop administrators. Which one to prefer depends on your skill set and what you need for future growth.

BigData and Spark Multiple Choice Questions – I

1. In Spark, a —————– is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost.

A) Resilient Distributed Dataset (RDD)

B) Spark Streaming

C) Driver

D) Flat Map

Ans: Resilient Distributed Dataset (RDD)

2. Consider the following statements in the context of Apache Spark:

Statement 1: Spark allows you to choose whether you want to persist Resilient Distributed Dataset (RDD) onto the disk or not.

Statement 2: Spark also gives you control over how you can partition your Resilient Distributed Datasets (RDDs).

A) Only statement 1 is true

B) Only statement 2 is true

C) Both statements are true

D) Both statements are false

Ans: Both statements are true

3) Given the following definition of the join transformation in Apache Spark:

def join[W](other: RDD[(K, W)]): RDD[(K, (V, W))]

The join operation joins two datasets. When called on datasets of type (K, V) and (K, W), it returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key present in both datasets.

Output the result of joinrdd, when the following code is run.

val rdd1 = sc.parallelize(Seq(("m",55), ("m",56), ("e",57), ("e",58), ("s",59), ("s",54)))
val rdd2 = sc.parallelize(Seq(("m",60), ("m",65), ("s",61), ("s",62), ("h",63), ("h",64)))
val joinrdd = rdd1.join(rdd2)
A) Array[(String, (Int, Int))] = Array((m,(55,60)), (m,(55,65)), (m,(56,60)), (m,(56,65)), (s,(59,61)), (s,(59,62)), (h,(63,64)), (s,(54,61)), (s,(54,62)))
B) Array[(String, (Int, Int))] = Array((m,(55,60)), (m,(55,65)), (m,(56,60)), (m,(56,65)), (s,(59,61)), (s,(59,62)), (e,(57,58)), (s,(54,61)), (s,(54,62)))
C) Array[(String, (Int, Int))] = Array((m,(55,60)), (m,(55,65)), (m,(56,60)), (m,(56,65)), (s,(59,61)), (s,(59,62)), (s,(54,61)), (s,(54,62)))
D)None of the mentioned.

Ans: Array[(String, (Int, Int))] = Array((m,(55,60)), (m,(55,65)), (m,(56,60)), (m,(56,65)), (s,(59,61)), (s,(59,62)), (s,(54,61)), (s,(54,62)))
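The answer can be verified without a Spark cluster: an RDD join is an inner join by key, so keys that appear in only one dataset ("e" only in rdd1, "h" only in rdd2) are dropped. Below is a minimal plain-Python sketch of the same semantics (not actual Spark code; join_by_key is a hypothetical helper for illustration):

```python
from collections import defaultdict

def join_by_key(left, right):
    """Inner join of (key, value) pair lists, mimicking RDD.join:
    only keys present in BOTH inputs appear in the result."""
    right_index = defaultdict(list)
    for k, w in right:
        right_index[k].append(w)
    # For each left pair whose key also occurs on the right,
    # emit one (key, (v, w)) pair per matching right value.
    return [(k, (v, w)) for k, v in left if k in right_index
            for w in right_index[k]]

rdd1 = [("m", 55), ("m", 56), ("e", 57), ("e", 58), ("s", 59), ("s", 54)]
rdd2 = [("m", 60), ("m", 65), ("s", 61), ("s", 62), ("h", 63), ("h", 64)]

result = join_by_key(rdd1, rdd2)
# "e" and "h" are dropped; "m" and "s" each yield all value pairs.
print(result)
```

Running this produces exactly the eight pairs in option C, which is why the options containing (e,(57,58)) or (h,(63,64)) are wrong: those keys never occur in both datasets.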

4) Consider whether the following statements are correct:

Statement 1: Scale up means growing your cluster capacity by replacing machines with more powerful ones

Statement 2: Scale out means incrementally growing your cluster capacity by adding more COTS (Commercial Off-The-Shelf) machines

A) Only statement 1 is true

B) Only statement 2 is true

C) Both statements are true

D) Both statements are false

Ans: Both statements are true
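The distinction is easy to see with a little arithmetic (the node counts and per-node capacities below are made-up illustration values, not from any real cluster):

```python
# Start: 4 commodity nodes of 2 TB each = 8 TB total.
nodes, tb_per_node = 4, 2

# Scale OUT (horizontal): add 4 more identical COTS machines.
scale_out_capacity = (nodes + 4) * tb_per_node

# Scale UP (vertical): keep the node count, replace each 2 TB
# machine with a more powerful 4 TB machine.
scale_up_capacity = nodes * 4

print(scale_out_capacity, scale_up_capacity)
```

Both paths reach 16 TB here, but Hadoop favors scale out: adding cheap commodity nodes also adds CPU, RAM, and replication targets, while scale up only grows individual machines.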

Complete MapR Installation on a Linux Machine

After completing the prerequisite setup, we will go directly through the actual steps for a MapR installation on a Linux machine.

Actual steps for MapR installation:

Step 1:  fdisk -l

A popular and powerful command used to list the disk partition tables.

Step 2: cat /etc/yum.repos.d/mapr_ecosystem.repo

View the MapR ecosystem repo file

Step 3:  cat /etc/yum.repos.d/mapr_installer.repo

View the MapR installer repo file

Step 4:  cat /etc/yum

View the yum configuration

Step 5: cat /etc/yum.repos.d/mapr_core.repo

View the MapR core repo file

Step 6: yum clean all

Clean up cached and unnecessary yum repo data

Step 7: yum update

Yum update

Step 8: yum list | grep mapr

List the available MapR packages by filtering the yum list with grep

Step 9: rpm --import http://package.mapr.com/releases/pub/maprgpg.key

Import the MapR public GPG key

Step 10: yum install mapr-cldb mapr-fileserver mapr-webserver mapr-resourcemanager mapr-nodemanager mapr-nfs mapr-gateway mapr-historyserver

Install the MapR CLDB, FileServer, WebServer, ResourceManager, NodeManager, NFS, Gateway, and HistoryServer with the single command above.

Step 11: yum install mapr-zookeeper

Install MapR ZooKeeper for cluster coordination

Step 12:  ls -l /opt/mapr/roles

Check mapr roles

Step 13: rpm -qa | grep mapr

List the installed MapR RPM packages

Step 14: id mapr

Check the mapr user's UID and GID

Step 15: hostname -i

Check the host's IP address

Step 16: /opt/mapr/server/configure.sh -N training -C -Z

Configure the node with the cluster name (-N), the CLDB node (-C), and the ZooKeeper node (-Z)

Step 17: cat /root/maprdisk.txt

View the disk list file

Step 18: /opt/mapr/server/disksetup -F /root/maprdisk.txt

Set up the listed disks for the MapR filesystem
Step 19: service mapr-zookeeper start

Start the MapR Zookeeper service

Step 20: service mapr-zookeeper status

Status of the MapR Zookeeper service

Step 21: service mapr-warden start

Start the MapR Warden service

Step 22: service mapr-warden status

Status of the MapR Warden service

Step 23: maprcli node cldbmaster

Check which node is the CLDB master

Step 24: maprcli license showid

Show your mapr license id

Step 25: https://<ipaddress>:8443

Open a web browser at https://<ipaddress>:8443 to check whether the web interface is working

Step 26: hadoop fs -ls /

Check hadoop file list

Summary: The above steps complete a MapR installation on a Linux single-node cluster, with an explanation of each and every command.

Hadoop and Spark Interview Questions

Cognizant conducted these Hadoop and Spark interview questions for experienced candidates.

Round 1:

1. What is the Future class in the Scala programming language?

2. Difference between foldLeft and foldRight in Scala?

3. How does DISTRIBUTE BY work in Hive? Given some data, explain how the data will be distributed.

4. dF.filter(Id == 3000): how do you pass this condition on values to a DataFrame dynamically?

5. Have you worked on multithreading in Scala and explain?

7. On what basis will you increase the number of mappers in Apache Sqoop?

8. What last value will you mention when importing for the first time in Sqoop?

9. How do you specify the date for an incremental lastmodified import in Sqoop?

10. Let's say you created the partition for Bengaluru but loaded Hyderabad data. What validation do we have to do in this case to make sure there won't be any errors?

11. How many reducers will be launched by DISTRIBUTE BY in Spark?

12. What is the simple command to delete a Sqoop job?

13. In which location is a Sqoop job's last value stored?

14. What are the default input and output formats in Hive?

15. Can you briefly explain distributed cache in Spark with an example?

16. Did you use Kafka/Flume in your project and explain in detail?

17. Difference between the Parquet and ORC file formats?

Round 2:

1. Explain your previous project?

2. How do you handle incremental data in apache sqoop?

3. Which Optimization techniques are used in sqoop?

4. What are the different parameters you pass your spark job?

5. In case one task is taking more time, how will you handle it?

6. What are stages and tasks in Spark? Give a real-time scenario.

7. On what basis do you set the number of mappers in Sqoop?

8. How will you export the data to Oracle without putting much load in the table?

9. What is column family in Hbase?

10. Can you create a table without mentioning a column family?

11. What is the limit on the number of column families for one table?

12. How did you schedule Spark jobs in your previous project?

13. Explain Spark architecture with a real-time based scenario?

MapR Installation Steps on AWS

MapR installation on an Amazon Web Services machine, with simple steps for a Hadoop environment.

Step 1: Login with AWS credentials and then open the root machine.

[ec2-user@ip----~]$ sudo su -

Step 2: Stop the iptables service

[root@ip---- ~]# service iptables stop

Step 3: Disable iptables at boot

[root@ip----- ~]# chkconfig iptables off

Step 4: Edit the SELinux configuration

[root@ip----~]# vim /etc/selinux/config

Step 5: In the file, replace enforcing with disabled (save and exit):

SELINUX=disabled

Step 6: Change to the yum repos directory

[root@ip----~]# cd /etc/yum.repos.d/

Step 7: Edit the MapR ecosystem repo file.

[root@ip----yum.repos.d]# vi mapr_ecosystem.repo

Put the following lines into the above file (the [mapr_ecosystem] section header is required by yum; the name follows the filename):

[mapr_ecosystem]
name = MapR Ecosystem Components
baseurl = http://package.mapr.com/releases/MEP/MEP-3.0.4/redhat
gpgcheck = 0
enabled = 1
protected = 1

Step 8: Edit the MapR installer repo file.

[root@ip----yum.repos.d]# vi mapr_installer.repo

Step 9: Edit mapr core repo files.

[root@ip----yum.repos.d]# vi mapr_core.repo

Put the following lines into the above file (the [mapr_core] section header is required by yum; the name follows the filename):

[mapr_core]
name = MapR Core Components
baseurl = http://archive.mapr.com/releases/v5.0.0/redhat/
gpgcheck = 1
enabled = 1
protected = 1

Step 10: List the configured yum repositories

[root@ip----- yum.repos.d]# yum repolist

(here you will see all the packages)

Step 11: Search for MapR package files.

[root@ip------ yum.repos.d]# yum list all | grep mapr

(this displays all packages related to mapr)

Step 12: Import the MapR public GPG key (the same key used in the Linux installation above)

[root@ip----- yum.repos.d]# rpm --import http://package.mapr.com/releases/pub/maprgpg.key


Step 13: Install the MapR CLDB, FileServer, WebServer, ResourceManager, and NodeManager

[root@ip------ yum.repos.d]# yum install mapr-cldb mapr-fileserver mapr-webserver mapr-resourcemanager mapr-nodemanager

Step 14: Install mapr Zookeeper

[root@ip------ yum.repos.d]# yum install mapr-zookeeper

Step 15: List the MapR role files

[root@ip----- yum.repos.d]# ls -l /opt/mapr/roles/

Step 16: Search for installed MapR RPM packages using grep.

[root@ip------ yum.repos.d]# rpm -qa | grep mapr

(displays installed packages related to mapr)

Step 17: Add a group for the mapr system user

[root@ip------ yum.repos.d]# groupadd -g 5000 mapr

Step 18: Add a user in the mapr group

[root@ip------ yum.repos.d]# useradd -g 5000 -u 5000 mapr

Step 19: Set the password for the mapr user

[root@ip------ yum.repos.d]# passwd mapr

(here you will set a password of your choice for the mapr user)

Step 20: Check the mapr user id

[root@ip------ yum.repos.d]# id mapr

Step 21: Check the Fully Qualified Domain Name using the command below

[root@ip------ yum.repos.d]# hostname -f

Step 22: check disk availability

[root@ip------ yum.repos.d]# fdisk -l

(here you can see the available disks on the machine; select the second disk for MapR)

Step 23: Edit the second disk's information into the MapR disk file.

[root@ip----- yum.repos.d]# vi /root/maprdisk.txt

(put the second disk's device path here, then save and exit)

Step 24: Run the configuration script with the cluster name, CLDB host, and ZooKeeper host.

[root@ip----- yum.repos.d]# /opt/mapr/server/configure.sh -N training -C ip--------.ap-southeast-1.compute.internal -Z ip------.ap-southeast-1.compute.internal:5181

Step 25: View the disk file contents

[root@ip------ yum.repos.d]# cat /root/maprdisk.txt

Step 26: Download the EPEL rpm file

[root@ip------ ~]# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Step 27: Install the Extra Packages for Enterprise Linux (EPEL) repository

[root@ip------ ~]# rpm -Uvh epel-release-6*.rpm

Step 28: Start Zookeeper services

[root@ip------ ~]# service mapr-zookeeper start

Step 29: Start the Warden service

[root@ip-1----- ~]# service mapr-warden start

Step 30: Check the CLDB master node with maprcli

[root@ip----- ~]# maprcli node cldbmaster

Now open the MapR Control System (MCS) in a web browser using your machine's IP address, as shown below.

Deloitte Hadoop and Spark Interview Questions

Round 1:

1. Explain about your previous Project?

2. Write the Apache Sqoop code that you are using in your previous project?

3. What is the reason for moving data from DBMS to Hadoop Environment?

4. What happens when you increase mappers in MapReduce?

5. What is the command to check the last value of Apache Sqoop job?

6. Can you explain Distributed Cache?

7. Explain about Hive optimization techniques in your project?

8. Which Hive analytic functions you used in the project?

9. How to update records in Hive table in a single command?

10. How to limit the records when you are consuming the data in Hive table?

11. How to change the Hive engine to Apache Spark engine?

12. Difference between the Parquet and ORC file formats?

13. How to handle huge data flow situation in your project?

14. Explain about Apache Kafka with architecture?

15. Which tool will create partitions in the Apache Kafka topic?

16. Which transformation and actions are used in your project?

17. Explain a brief idea about Spark Architecture?

18. How will you check whether data is present or not in the 6th partition of an RDD?

19. How do you debug Spark code using a regex?

20. Give me the idea about a functional programming language?

21. Difference between map vs flatMap in Spark?

22. For example, in Spark word count, which one do you use while splitting? What happens if you use map instead of flatMap in that program?

23. If you have knowledge of Hadoop clusters, can you explain capacity planning for a four-node cluster?
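For question 22 above, the map vs flatMap difference can be shown without Spark. This plain-Python sketch mimics the semantics (the sample lines are made up for illustration):

```python
lines = ["to be or", "not to be"]

# flatMap-like: split each line and flatten into ONE stream of words,
# which is what word count needs before counting.
flat = [word for line in lines for word in line.split()]

# map-like: one list PER input line; counting these would count
# whole lists, not individual words.
mapped = [line.split() for line in lines]

print(flat)    # ['to', 'be', 'or', 'not', 'to', 'be']
print(mapped)  # [['to', 'be', 'or'], ['not', 'to', 'be']]
```

So word count uses a flatMap-style split: with map, the downstream count would see one element per line (a list), and the per-word counts would never be produced.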


Round 2:

1. Define YARN and MapReduce architecture?

2. Explain ZooKeeper's functionality and the flow when a node is down?

3. Explain Data modeling in your project?

4. Were reporting tools used in your project? If yes, then explain.

5. Give me a brief idea about Broadcast variables in Apache Spark?

6. Can you explain the Agile methodology and describe the architecture of Agile?

Prerequisites for MapR Installation on CentOS

In the Hadoop ecosystem, we mostly prefer three Big Data distributions:

1. Cloudera Distribution Hadoop

2. Hortonworks Data Platform

3. MapR Distribution Platform

The Cloudera Distribution Platform offers a free version, an express edition, and an enterprise edition with a 60-day trial.

Hortonworks Data Platform is a completely open-source platform for production, development, and testing environments.

Finally, the MapR distribution platform is a complete enterprise edition, but MapR 3 is available as a free version with fewer features compared to MapR 5 and MapR 7.

How to install the MapR free version on a pseudo-distributed cluster:

Before installing MapR, configure the prerequisites as below:


1. Configure the hostname as an FQDN (e.g., mapr.hadoop.com) using the setup command, then check your hostname using hostname -f

2. vi /etc/hosts

3. hostname <your Fully Qualified Domain Name>

4. vim /etc/selinux/config ===> SELINUX=disabled

——-Disable Firewalls and IPTables——-

If firewalls and iptables are enabled, some ports are not allowed, so we must disable them.

1. service iptables save

2. service iptables stop

3. chkconfig iptables off

4. service ip6tables save

5. service ip6tables stop

6. chkconfig ip6tables off

—– Enable NTP service for machines —–

NTP (Network Time Protocol) is a networking protocol for clock synchronization between computers over packet-switched data networks.

1. yum -y install ntp ntpdate ntp-doc

2. chkconfig ntpd on

3. vi /etc/ntp.conf

4. server 0.rhel.pool.ntp.org

5. server 1.rhel.pool.ntp.org

6. server 2.rhel.pool.ntp.org

7. ntpq -p

8. date (all machines must have the same date, otherwise it will show an error)

—— Install some additional packages in Linux OS —-

Here we will install Java 1.8 and Python:

1. yum -y install java-1.8.0-openjdk-devel

2. yum -y install python perl expect expectk

—- Set up passwordless SSH on all nodes from the master node ——

For passwordless authentication between the master and slave nodes:

1. ssh-keygen -t rsa

2. cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

3. ssh-copy-id root@<FQDN> (run once per node, e.g., FQDN1, FQDN2)

—– Additional Linux configuration: Transparent Huge Pages (THP) —-

1. echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled

2. echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag

3. sysctl vm.swappiness=10

Set up the EPEL repository for installing additional packages on the CentOS machine:

1. wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

2. rpm -Uvh epel-release-6-8.noarch.rpm

HBase Table(Single&Multiple) data migration from one cluster to another cluster

HBase single table migration from one cluster to another cluster:

Here we show the simple steps to migrate a single HBase table's data from an existing cluster to a new cluster:

Step 1: First, export the HBase table data into an HDFS (Hadoop Distributed File System) path on the source cluster.

Step 2: After that, copy the HBase table data from the source cluster to the destination cluster using the distcp command (distcp copies data from one cluster to another).

Step 3: Then create an Hbase table in the destination cluster (target cluster)

Step 4: After that, import the copied data from the HDFS path into the HBase table on the destination cluster.

Source Cluster:

1. hbase org.apache.hadoop.hbase.mapreduce.Driver export <hbase_table_name> <source_hdfs_path>

2. hadoop distcp hdfs://<source_cluster_ipaddress:8020>/<source_hdfs_path> hdfs://<destination_cluster_ipaddress:8020>/<destination_hdfs_path>

Destination Cluster:

1. hbase org.apache.hadoop.hbase.mapreduce.Driver import <hbase_table_name> <hbase_table_hdfs_path>

HBase multiple table migration from one cluster to another cluster:

Now that we know single-table migration, multiple-table migration from one cluster to another can be done in a simple manner with the steps below.

Given the script files, migrating multiple HBase tables simply means going through these steps:

Step 1: First, place hbase-export.sh and hbase-table.txt in the source cluster

Step 2: After that, place hbase-import.sh and hbase-table.txt in the destination cluster.

Step 3: List all the tables in the hbase-table.txt file

Step 4: Create all the HBase tables on the destination cluster

Step 5: Execute the hbase-export-generic.sh in the source cluster

Step 6: Execute hbase-import.sh in the destination cluster.

Summary: I tried HBase data migration from one cluster to another in a Cloudera Distribution Hadoop environment. Both single-table and multiple-table HBase data migration are very simple for Hadoop administrators as well as Hadoop developers. It is the same on the Hortonworks Distribution as well.

Replication Factor in Hadoop

How the replication factor comes into the picture:

The backup mechanism in the traditional distributed system:

In Hadoop's traditional predecessors, the backup mechanism did not provide high availability. That system followed a shared architecture.

The first request goes from the file to the master node, where it is divided by block size. This is a continuous process, but if node 1 (slave 1) fails, the work moves to another node (slave 2).

Replication Factor:

The replication factor is the process of duplicating data on different slave machines to achieve highly available processing.

Replication is a backup mechanism, failover mechanism, or fault-tolerance mechanism.

In Hadoop, the default replication factor is 3. No configuration is needed.

Hadoop 1.x :
Replication Factor is 3
Hadoop 2.x:
Replication Factor is also 3.

In Hadoop, the minimum replication factor is 1, which is possible in a single-node Hadoop cluster.

In Hadoop, the maximum replication factor is 512.

With the minimum replication factor of 3, at least 3 slave nodes are required.

If the replication factor is 10, then 10 slave nodes are required.

Here is the simple rule for the replication factor:

'N' Replication Factor = 'N' Slave Nodes

Note: If the configured replication factor is 3 but only 2 slave machines are available, the actual replication factor is 2.
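The note above amounts to a simple rule: the effective replication factor is capped by the number of slave nodes (one replica per slave) and by Hadoop's maximum of 512. A small sketch of that rule (effective_replication is a hypothetical helper, not a Hadoop API):

```python
def effective_replication(configured_rf, slave_nodes, max_rf=512):
    """Each slave holds at most one replica of a block, so the actual
    replication can never exceed the slave count or Hadoop's maximum."""
    return min(configured_rf, slave_nodes, max_rf)

print(effective_replication(3, 2))        # configured 3, only 2 slaves -> 2
print(effective_replication(3, 10))       # enough slaves -> full 3
print(effective_replication(1000, 1000))  # capped at Hadoop's max of 512
```

The first call reproduces the note exactly: a configured factor of 3 on 2 slave machines yields an actual replication factor of 2.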

How to configure replication in Hadoop?

It is configured in the hdfs-site.xml file:

<property>
  <name>dfs.replication</name>
  <value>5</value>
</property>

Design Rules Of Replication In Hadoop:

1. In Hadoop, replication applies only to data in the Hadoop Distributed File System (HDFS), not to the metadata.

2. Keep one replica per slave node, as per the design.

3. Replication happens only on the Hadoop slave nodes, never on the Hadoop master node (because the master node manages only metadata; it does not hold the data).

Only storage is duplicated in Hadoop, not processing, because processing is always unique.

Summary: In Hadoop, the replication factor plays a major role in the data backup mechanism. The default replication factor is always 3, except in a single-node cluster environment.