Most frequently asked HBase Interview Questions and Answers





1. When should you use HBase, and what are the key components of HBase in the Hadoop eco-system?

In the Hadoop eco-system, HBase should be used when the big data application has a variable schema, when data is stored in the form of collections, and when the application demands key-based access for storing and retrieving data. The key components: the Region Server monitors the regions, while the HBase Master is responsible for monitoring the Region Servers.
Zookeeper takes care of the coordination and configuration between the HBase Master component and the clients. Catalog tables: there are two catalog tables, -ROOT- and .META.

2. What are the different operational commands in HBase at a record level and table level?
Record level – put, get, increment, scan, and delete.
Table level – describe, list, drop, disable, and scan.
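
A quick sketch of these commands in the HBase shell (the table, row, and column names are made up for illustration):

create 'users', 'cf'                      # table level: create a table with one column family
put 'users', 'row1', 'cf:name', 'Anil'    # record level: insert or update a cell
get 'users', 'row1'                       # record level: read a row by key
incr 'users', 'row1', 'cf:visits', 1      # record level: atomic increment of a counter cell
scan 'users'                              # record level: read across rows
delete 'users', 'row1', 'cf:name'         # record level: delete a cell
describe 'users'                          # table level: show the table schema
disable 'users'                           # table level: required before drop
drop 'users'                              # table level: remove the table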

3. Explain the difference between the RDBMS data model and the HBase data model in a Big Data environment?

A. In a Big Data environment, RDBMS is a schema-based database model.
B. HBase is a schema-less database model.
C. RDBMS has no support for built-in partitioning in data modeling.
D. HBase has automated partitioning (regions) in data modeling, as the sketch below illustrates.
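
A minimal illustration of points B and D in the HBase shell (names are hypothetical): a table declares only its column families up front, the columns themselves can differ per row, and the table is automatically partitioned into regions as it grows:

create 'events', 'd'                      # only the column family is fixed at creation time
put 'events', 'e1', 'd:user', 'u42'       # row e1 carries a user column
put 'events', 'e2', 'd:ip', '10.0.0.5'    # row e2 carries a completely different column
# no ALTER TABLE is needed, and HBase splits the table into regions automatically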




4. What is the difference between HBase and Hive in Hadoop?

HBase and Hive are both Hadoop-based technologies, but they serve different purposes: Hive is a data summarization and SQL layer on top of Hadoop, while HBase is a NoSQL key-value store that runs on top of Hadoop.

HBase supports four primary operations: put, get, scan, and delete. Hive, on the other hand, lets you write SQL that it runs as MapReduce jobs.
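
A rough feel for the difference (the table name and query are illustrative, assuming both tools are installed on the cluster):

echo "get 'users', 'row1'" | hbase shell    # HBase: key-based lookup of a single row
hive -e 'SELECT COUNT(*) FROM users'        # Hive: SQL compiled into a MapReduce job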

5. What are the different types of tombstone markers in HBase for deletion?

In HBase, there are three types of tombstone markers for deletion:

A. Family Delete Marker
B. Version Delete Marker
C. Column Delete Marker

6. Explain the process of row deletion in HBase on top of Hadoop?

In HBase, the delete command does not actually remove data from the cells right away; instead, the cells are made invisible by setting a tombstone marker, and the marked cells are physically removed later, during major compaction.
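
A sketch of how the shell's delete commands relate to the tombstone markers above (table and column names are illustrative):

delete 'users', 'row1', 'cf:name', 1609459200000   # tombstones one cell version at a given timestamp
delete 'users', 'row1', 'cf:name'                  # tombstones the named column in the row
deleteall 'users', 'row1'                          # tombstones the entire row
# the data stays on disk until a major compaction rewrites the files without it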

Most Frequently Asked Hadoop Admin Interview Questions for Experienced

1. What is the difference between missing and corrupt blocks in Hadoop 2.0, and how do you handle them?

Missing block: a block is missing when there are no replicas of it available anywhere in the cluster.

Corrupt block: HDFS cannot find a single healthy replica of the block; all of its replicas are corrupted.




How to handle:
The commands below find out which files are corrupted and remove them:

A) hdfs fsck /                                                 # report the overall health of the file system
B) hdfs fsck / | egrep -v '^\.+$' | grep -v eplica             # filter the report down to corrupt/missing files
C) hdfs fsck /path/to/corrupt/file -files -blocks -locations   # show the affected blocks and their locations
D) hdfs dfs -rm /path/to/file                                  # remove the damaged file (restore from a backup if one exists)

2. What is the reason behind the odd number in the ZooKeeper count?

ZooKeeper elects a leader based on the agreement of more than half of the nodes in the ensemble (a quorum). An even count adds machines without adding fault tolerance: a 5-node ensemble survives 2 failures, but a 6-node ensemble still survives only 2, because 4 nodes are needed for a majority. So the ZooKeeper count should be an odd number.
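
A minimal sketch of a three-node ensemble in zoo.cfg (hostnames are placeholders):

server.1=zk1.example.com:2888:3888    # 2888 = quorum port, 3888 = leader-election port
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
# with 3 servers a quorum needs 2, so the ensemble survives 1 failure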

3. Why is ZooKeeper required for Kafka?

Apache Kafka uses ZooKeeper, so the ZooKeeper server needs to be started first. ZooKeeper is used to elect the Kafka controller and to store topic configuration and other cluster metadata.

4. What is the retention period of Kafka logs?

When a message is sent to a Kafka cluster, it is appended to the end of a log. The message then remains on the topic for a configurable period of time, whether or not it has been consumed; this window is called the retention period of the Kafka log. It is defined by properties such as log.retention.hours.
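
A sketch of the relevant broker settings in server.properties (the values are illustrative, not recommendations):

log.retention.hours=168            # keep messages for 7 days
log.retention.bytes=1073741824     # optionally cap the log size per partition instead
log.segment.bytes=1073741824       # segment file size; retention is applied to closed segments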

5. What is the block size in your cluster, and why is a 54 MB block not recommended?

It depends on your cluster configuration; the standard Hadoop default is 64 MB (128 MB from Hadoop 2.x onwards), so a non-standard size such as 54 MB is not recommended.
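
To check the configured block size on a cluster:

hdfs getconf -confKey dfs.blocksize    # prints the block size in bytes, e.g. 134217728 for 128 MB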

6. Suppose a file is 270 MB and the block size on your cluster is 128 MB. There are then 3 blocks (128 MB + 128 MB + 14 MB). Is the space in the 14 MB third block wasted, or can other data be appended? Answer: nothing is wasted, because HDFS does not pre-allocate full blocks; the third block occupies only 14 MB on disk, although the block is not shared with other files.

7. What are the FSImage and edit logs?

FSImage: in a Hadoop cluster, the entire file system namespace, the file system properties, and the block mapping of files are stored in one image file, called the FSImage (File System image). Edit logs: the EditLog records every change made to the file system metadata since the last FSImage checkpoint; the two are merged at the next checkpoint.
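
Both files live in the NameNode's metadata directory and can be inspected with the standard offline viewers (the paths and transaction IDs below are illustrative):

hdfs oiv -i /hadoop/dfs/name/current/fsimage_0000000000000000042 -o fsimage.xml -p XML             # Offline Image Viewer
hdfs oev -i /hadoop/dfs/name/current/edits_0000000000000000001-0000000000000000042 -o edits.xml    # Offline Edits Viewer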

8. What is your action plan if PostgreSQL or MySQL goes down on your cluster?

First, check the log file, identify the error, and work out the solution from there.

For example, if PostgreSQL reports a bad connection:
Solution: first check the status of the PostgreSQL service

sudo systemctl status postgresql

Then restart the PostgreSQL service

sudo systemctl restart postgresql

Then make sure the service comes back up on boot

sudo systemctl enable postgresql

9. If both NameNodes are in standby state, do running jobs continue or fail?

With no active NameNode, HDFS cannot serve requests, so running jobs will fail until one NameNode becomes active.

10. What is the Ambari port number?

By default, the Ambari port number is 8080, used to access the Ambari web UI and the REST API.
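
For example, the REST API answers on the same port (hostname and credentials are placeholders):

curl -u admin:admin http://ambari-server.example.com:8080/api/v1/clusters    # list clusters via the Ambari REST API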




11. Does your Kerberized cluster use LDAP or Active Directory?

It depends on your project; explain whether the integration is with LDAP or with Active Directory, and describe how it is set up.

Latest: Hadoop Admin Interview Questions for 3 to 15 Years of Experience

Nowadays, Hadoop administration is one of the emerging skills. The questions below are mid-level interview questions:





1. Explain your projects as per your resume, including the different types of distributions you have used.

2. Explain High Availability for the NameNode.

3. Explain Kerberos, Ranger, and Knox with scenario-based examples.

4. Questions about any scripting language, such as Python or shell scripting.

5. Difference between NameNode and CLDB (Container Location Database, in MapR).

6. How many ZooKeepers are used in your project? Why must the count be odd? Please explain.

7. How do you resolve a heartbeat issue? Explain the resolution process.

8. Describe an issue you recently resolved on the cluster (for example, with Hive or the HBase Master) and how you resolved it.

9. Difference between Cloudera, MapR, and Hortonworks, with examples.

10. Why does the Secondary NameNode concept exist in Hadoop? Explain.

11. Explain the step-by-step process of a Hortonworks installation (no need to explain the prerequisites).

Latest PL/SQL (Mandatory) Interview Questions for Freshers/Experienced

Latest PL/SQL interview questions asked in the technical round, for all levels.



PL/SQL Interview Questions for Freshers/Experienced:

1. How can you reduce query execution time through SQL tuning?

2. What is the major difference between stored procedures and triggers?

3. Explain analytical functions in PL/SQL.

4. Which one executes faster: TRUNCATE or DELETE?

5. Regarding anchored declarations, explain %TYPE and %ROWTYPE.

6. Explain the different cursor attributes.

7. Write a query to select duplicate values from a table.

8. Difference between ROWNUM and ROWID, with an example?

9. Can you explain the difference between an implicit cursor and an explicit cursor?

10. Write a query to get the first 10 records from a table.

11. Explain table type variables.

12. Can you explain the difference between analytical and aggregate functions?


JavaScript Interview Questions

Some JavaScript interview questions and topics for UI developers.




  1. What is JavaScript?
  2. Ways to handle JavaScript event handlers
  3. Hoisting variables and functions
  4. What is event bubbling and capturing?
  5. JavaScript: pass by value or pass by reference?
  6. JavaScript functions
  7. Variable scope
  8. DOM selectors
  9. Ways to create objects and arrays
  10. Ways to loop in JavaScript
  11. JavaScript types
  12. What is hasOwnProperty? When is it useful?
  13. String, Array, and Number methods
  14. What is a closure?
  15. How to achieve inheritance?
  16. JavaScript encapsulation
  17. Explain call, apply, and bind
  18. The JavaScript event loop
  19. Callback functions
  20. JavaScript promises
  21. Configuring objects
  22. JavaScript design patterns
  23. Polyfills
  24. Associative arrays
  25. Freeze and seal
  26. ES5 features
  27. ES6 features
  28. ES7 features
  29. When to use the map, reduce, and filter methods?
  30. Explain JavaScript ‘this’

Kafka Scenarios with Questions




1. Kafka Scenarios:

I) Suppose a producer is producing faster than the consumer can consume. How will you deal with such a situation, and what preventive measures would you take to stop data loss?

II) Suppose consumer X has read 50 offsets from a topic and then fails. How will consumer Y pick up from those offsets, where is the offset data stored, and what mechanism do we need to configure to achieve this? (See the sketch after scenario III.)

III) Suppose the producer is writing the data in CSV format, as structured data. How will the consumer come to know the schema of the incoming data, and how and where should that schema be specified?
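
For scenario II, a minimal sketch of the consumer settings involved (consumer.properties; the group name is a placeholder). Kafka stores committed offsets in its internal __consumer_offsets topic, so another consumer in the same group resumes from the last committed offset:

group.id=orders-consumers          # consumers sharing a group.id share partitions and committed offsets
enable.auto.commit=true            # commit offsets automatically...
auto.commit.interval.ms=5000       # ...every 5 seconds
auto.offset.reset=earliest         # where to start when no committed offset exists yet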

Rare questions on Apache Kafka

1. What is rebalancing in Apache Kafka, and in what way is it useful?

2. How do you manage Offsets in Apache Kafka?

3. Is it possible to run Kafka without Zookeeper?

4. If a server fails in Kafka, how is load balancing handled?