7 Best Programming Languages for Mobile Apps

The Best Programming Languages for Developing Mobile Applications:




Nowadays, mobile applications are an essential part of our lifestyle. Smartphones and their apps have become indispensable for information and everyday needs, and the huge demand for mobile applications creates plenty of work opportunities for any programmer who wants to move into mobile app development. Below we mention some of the programming languages worth learning if you want to build mobile apps and earn money from them.

Smartphones run on two major operating systems: Android and iOS.

The programming languages below are the best choices for developing mobile applications on both Android and iOS.

1. Java

Java is one of the best programming languages for developing mobile applications. It is simple to learn, and building mobile apps with it is fairly easy. It is an official language for Android app development and is still in huge demand in the current market. Android Studio and Eclipse play a major role when developing mobile apps in Java with OOP concepts. Java also offers several frameworks for mobile development and remains a mainstream technology in the market.

2. Python

Python is one of the most popular and fastest-growing programming languages in the current market, and not only for mobile applications. It is very simple to learn and develop with, and it is an interpreted, interactive language with full support for OOP concepts.

3. JavaScript

Despite the similar names, Java and JavaScript are different programming languages. JavaScript is in high demand nowadays; it is primarily a scripting language, and with runtimes such as Node.js and related frameworks it is widely used for building mobile applications.

4. Swift

Swift is a programming language for mobile development released by Apple in 2014. It is used only for iOS and macOS development and was introduced as a modern successor to Objective-C.

5. C++

C++ is an object-oriented programming language based on the C programming language. It can be used for mobile application development in a fairly straightforward way, and companies such as Amazon, Google, and Microsoft use it in their mobile application tooling.

6. Kotlin

Kotlin is one of the most popular programming languages for mobile development. It was created by JetBrains for Android development and is fully supported in Android Studio. Kotlin is simple to learn, and developing mobile apps with it is straightforward.

7. Objective-C

Objective-C was a very popular programming language among Apple developers before Swift came onto the market. Many developers still use Objective-C for iOS development.



The Best Companies to Work for in India in 2019 – LinkedIn




A recent LinkedIn survey ranked the top 25 companies in India based on employee facilities, the nature of the work, and how enjoyable it is to work there.

The Top 25 list is based on LinkedIn data from more than 54 million users in India, and the survey reflects factors such as job demand and employee retention.

To be eligible, a company had to have more than 500 employees at the end of the financial year, according to LinkedIn data.

Here are the Top 25 best companies to work for in India.

1. Flipkart (Walmart) – Internet

Offices: Delhi and Bengaluru

India's leading e-commerce platform, Flipkart was founded in 2007 by Sachin Bansal and Binny Bansal. In 2018, Walmart purchased a controlling stake in the business.

2. Amazon – Internet

Offices: Delhi, Bengaluru, Hyderabad, Chennai, and Mumbai

Amazon was founded by Jeff Bezos in 1994 and launched in India in 2013. It is one of the largest e-commerce platforms in the world.

3. OYO – Hospitality

Offices: Delhi, Bengaluru, Hyderabad, Chennai, Mumbai, and Vizag

OYO, a leader in India's hospitality market, was founded by Ritesh Agarwal in 2012.

4. One97 Communications (Paytm) – Internet

Offices: Delhi, Bengaluru, Hyderabad, Chennai, and Mumbai

Paytm is an Indian e-commerce payment system, founded by Vijay Shekhar Sharma.

5. Uber – Internet

Offices: Delhi, Bengaluru, Hyderabad, Chennai, and Mumbai.

Uber is a leading transportation network company, offering services such as peer-to-peer ride sharing.

6. Swiggy – Internet

7. TCS – IT and ITES

8. Zomato – Customer Services

9. Alphabet – Internet

10. Reliance Industries – Oil and Energy

11. EY – Accounting

12. Adobe – Computer Software

13. Boston Consulting Group – Management Consulting

14. Yes Bank – Banking

15. IBM – IT and ITES

16. Daimler AG – Automotive

17. Freshworks – IT and ITES

18. Accenture – IT and ITES

19. Ola – Internet

20. ICICI Bank – Banking

21. PwC India – Management Consulting

22. KPMG India – Management Consulting

23. Larsen & Toubro – Construction

24. Oracle – IT and ITES

25. Qualcomm – Wireless



Summary: The companies above come from LinkedIn's latest India survey, which draws on a huge amount of data covering employee facilities, salaries, and more. The list is dominated by IT and ITES companies, with internet, e-commerce, and customer-service firms also rated most favorably.

Hadoop Admin Vs Hadoop Developer




In a Hadoop environment, Hadoop Admin and Hadoop Developer are the two major roles. According to current IT market surveys, admins tend to have more responsibilities and higher salaries than Hadoop developers. The points below differentiate the two roles:

Hadoop Developer:

1. In a Big Data environment, the Hadoop developer plays a major role. A developer is primarily responsible for coding, which in Hadoop means developing with tools such as:

A) Apache Spark – Scala, Python, Java, etc.

B) MapReduce – Java

C) Apache Hive – HiveQL (an SQL-like query language)

D) Apache Pig – Pig Latin scripting, etc.

2. Familiarity with ETL workflows and data loading/ingestion tools such as:

A) Flume

B) Sqoop

3. Some knowledge of the Hadoop admin side as well, such as the Linux environment and the basic commands used while developing and executing jobs.

4. Nowadays, Spark and Hive developers with strong experience are the most sought after and command large pay packages. A minimal Spark example is sketched below.
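To give a feel for the kind of code a Hadoop/Spark developer writes, here is a minimal word-count sketch in Scala. It is only an illustration: the application name, master setting, and HDFS path are placeholder assumptions, not taken from this article.

import org.apache.spark.{SparkConf, SparkContext}

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    // Local SparkContext for illustration; a real job would typically run on YARN.
    val sc = new SparkContext(new SparkConf().setAppName("word-count-sketch").setMaster("local[*]"))

    // Hypothetical input path on HDFS.
    val counts = sc.textFile("hdfs:///data/input.txt")
      .flatMap(_.split("\\s+"))   // split each line into words
      .map(word => (word, 1))     // pair each word with a count of 1
      .reduceByKey(_ + _)         // sum the counts per word

    counts.take(10).foreach(println)
    sc.stop()
  }
}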

Hadoop Administration:

1. Hadoop administration is a good, well-respected job in the IT industry. The admin is responsible for the operational tasks that keep the infrastructure and jobs running.

2. Strong knowledge of the Linux environment, including setting up the cluster, configuring security authentication such as Kerberos, and testing the HDFS environment.

3. Granting new users access to Hive, Spark, etc.; performing cluster maintenance such as adding (commissioning) and removing (decommissioning) nodes; and resolving errors such as memory issues and user-access problems.

4. Knowledge of Big Data platforms is a must, for example:

A) Cloudera Manager

B) Hortonworks Data Platform

C) MapR

D) Pseudo-distributed and single-node cluster setups, etc.

5. Reviewing and managing log files and setting up XML configuration files.

6. It is currently a trending role with good career growth.

7. Compared to Hadoop developers, Hadoop admins are currently getting higher salary packages in the market.



Summary: In the Big Data space, Hadoop offers valuable, in-demand jobs and large pay packages for both Hadoop developers and Hadoop administrators. Which role to prefer depends on your skill set and what you need for future growth.

Big Data Spark Multiple Choice Questions



Spark Multiple Choice Questions and Answers:

1) Point out the incorrect statement in the context of Cassandra:

A) Cassandra is a centralized key-value store

B) Cassandra was originally designed at Facebook

C) Cassandra is designed to handle a large amount of data across many commodity servers, providing high availability with no single point of failure.

D) Cassandra uses a ring-based DHT (Distributed Hash Table) but without finger tables or routing

Ans: A (Cassandra is a decentralized, not centralized, key-value store; the other statements are true.)

2) Which of the following are the simplest NoSQL databases in a Big Data environment?

A) Document                                    B) Key-Value Pair

C) Wide-Column                        D) All of the above

Ans: D) All of the above

3) Which of the following is not a NoSQL database?

A) Cassandra                          B) MongoDB

C) SQL Server                           D) HBase

Ans: SQL Server


4) Which of the following is a distributed graph processing framework on top of Spark?

A) Spark Streaming                   B) MLlib

C) GraphX                                          D) All of the above

Ans: GraphX
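As a quick illustration of GraphX, here is a minimal Scala sketch that builds a tiny property graph; the vertex names, edge labels, and local master setting are made up for the example.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.{Edge, Graph}

object GraphXSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("graphx-sketch").setMaster("local[*]"))

    // Vertices are (id, attribute) pairs; edges carry a label between vertex ids.
    val vertices = sc.parallelize(Seq((1L, "alice"), (2L, "bob"), (3L, "carol")))
    val edges = sc.parallelize(Seq(Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows")))

    val graph = Graph(vertices, edges)
    println(s"vertices = ${graph.numVertices}, edges = ${graph.numEdges}")
    sc.stop()
  }
}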

5) Which of the following leverages Spark Core's fast scheduling capability to perform streaming analytics?

A) Spark Streaming                     B) MLlib

C) GraphX                                       D) RDDs

Ans: Spark Streaming

6) The newer machine learning API for Spark is based on which of the following?

A) RDD                                 B) Dataset

C) DataFrame          D) All of the above

Ans: DataFrame

7) Spark's optimizer is built on the constructs of which functional programming language?

A) Python                         B) R

C) Java                                   D) Scala

Ans: Scala, a functional programming language

8) Which of the following is the basic abstraction of Spark Streaming?

A) Shared variable                 B) RDD

C) DStream                                  D) All of the above

Ans: DStream
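For context, here is a minimal Spark Streaming sketch in Scala that builds a DStream from a socket source. The host, port, batch interval, and application name are placeholder assumptions for the example.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("dstream-sketch").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // A DStream is a continuous sequence of RDDs, one per 5-second batch.
    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split("\\s+")).map((_, 1)).reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}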

9) Which cluster managers does Spark support?

A) MESOS                                B) YARN

C) Standalone Cluster manager   D) Pseudo Cluster manager

E) All of the above

Ans: All of the above

10) Which of the following is the reason Spark is faster than MapReduce in execution time?

A) It supports different programming languages like Scala, Python, R, and Java.

B) RDDs

C) DAG execution engine and in-memory computation (RAM based)

D) All of the above

Ans: DAG execution engine and in-memory computation (RAM based)
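To illustrate the in-memory computation the answer refers to, here is a small Scala sketch; the input path and the ERROR filter are hypothetical examples, not from the original questions.

import org.apache.spark.{SparkConf, SparkContext}

object CachingSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("caching-sketch").setMaster("local[*]"))

    // Hypothetical input path; cache() keeps the RDD in memory after the first action.
    val errors = sc.textFile("hdfs:///data/logs").filter(_.contains("ERROR")).cache()

    // The first action materializes and caches the RDD; the second reuses the in-memory
    // copy instead of re-reading from disk, which is where much of the speed-up comes from.
    println(s"error lines: ${errors.count()}")
    println(s"distinct error lines: ${errors.distinct().count()}")

    sc.stop()
  }
}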


Big Data and Spark Multiple Choice Questions – I




1. In Spark, a —————– is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost.

A) Resilient Distributed Dataset (RDD)                  C) Driver

B) Spark Streaming                                                          D) Flat Map

Ans: Resilient Distributed Dataset (RDD)

2. Consider the following statements in the context of Apache Spark:

Statement 1: Spark allows you to choose whether you want to persist Resilient Distributed Dataset (RDD) onto the disk or not.

Statement 2: Spark also gives you control over how you can partition your Resilient Distributed Datasets (RDDs).

A) Only statement 1 is true                 C) Both statements are true

B) Only statement 2 is true                  D) Both statements are false

Ans: Both statements are true
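As a small illustration of both points, here is a Scala sketch; the storage level and the partition count are arbitrary choices for the example.

import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object PersistPartitionSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("persist-partition-sketch").setMaster("local[*]"))

    val pairs = sc.parallelize(Seq(("m", 55), ("e", 57), ("s", 59)))

    // Statement 1: you choose whether (and where) to persist an RDD.
    val onDisk = pairs.persist(StorageLevel.DISK_ONLY)

    // Statement 2: you control how the RDD is partitioned.
    val repartitioned = onDisk.partitionBy(new HashPartitioner(4))

    println(s"partitions: ${repartitioned.getNumPartitions}")
    sc.stop()
  }
}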




3) Given the following definition about the join transformation in Apache Spark:

def join[W](other: RDD[(K, W)]): RDD[(K, (V, W))]

The join operation joins two datasets. When it is called on datasets of type (K, V) and (K, W), it returns a dataset of (K, (V, W)) pairs containing all pairs of elements for each key.

What is the result of joinrdd when the following code is run?

val rdd1 = sc.parallelize(Seq(("m", 55), ("m", 56), ("e", 57), ("e", 58), ("s", 59), ("s", 54)))
val rdd2 = sc.parallelize(Seq(("m", 60), ("m", 65), ("s", 61), ("s", 62), ("h", 63), ("h", 64)))
val joinrdd = rdd1.join(rdd2)
joinrdd.collect

A) Array[(String, (Int, Int))] = Array((m,(55,60)), (m,(55,65)), (m,(56,60)), (m,(56,65)), (s,(59,61)), (s,(59,62)), (h,(63,64)), (s,(54,61)), (s,(54,62)))
B) Array[(String, (Int, Int))] = Array((m,(55,60)), (m,(55,65)), (m,(56,60)), (m,(56,65)), (s,(59,61)), (s,(59,62)), (e,(57,58)), (s,(54,61)), (s,(54,62)))
C) Array[(String, (Int, Int))] = Array((m,(55,60)), (m,(55,65)), (m,(56,60)), (m,(56,65)), (s,(59,61)), (s,(59,62)), (s,(54,61)), (s,(54,62)))
D) None of the above.

Ans: C) Array[(String, (Int, Int))] = Array((m,(55,60)), (m,(55,65)), (m,(56,60)), (m,(56,65)), (s,(59,61)), (s,(59,62)), (s,(54,61)), (s,(54,62))) (join is an inner join, so the keys "e" and "h", which appear in only one of the two RDDs, are dropped)

4) Consider the following statements:

Statement 1: Scale up means growing your cluster capacity by replacing existing machines with more powerful ones.

Statement 2: Scale out means incrementally growing your cluster capacity by adding more COTS (commodity off-the-shelf) machines.

A) Only statement 1 is true               C) Both statements are true

B) Only statement 2 is true              D) Both statements are false

Ans: Both statements are true



Complete MapR Installation on a Linux Machine




After completing the prerequisite setup, we can go directly through the actual steps for installing MapR on a Linux machine.

Actual steps for MapR installation:

Step 1:  fdisk -l

A powerful and popular command used to list the disk partition tables.

Step 2: cat /etc/yum.repos.d/mapr_ecosystem.repo

Check/update the MapR ecosystem repo file.

Step 3: cat /etc/yum.repos.d/mapr_installer.repo

Check/update the MapR installer repo file.

Step 4: cat /etc/yum

Check the yum configuration.

Step 5: cat /etc/yum.repos.d/mapr_core.repo

Check/update the MapR core repo file.

Step 6: yum clean all

Clean up cached yum metadata.

Step 7: yum update

Update the installed packages with yum.

Step 8: yum list | grep mapr

List the available MapR packages by filtering the yum list with grep.

Step 9: rpm --import http://package.mapr.com/releases/pub/maprgpg.key

Import the MapR public GPG key.

Step 10: yum install mapr-cldb mapr-fileserver mapr-webserver mapr-resourcemanager mapr-nodemanager mapr-nfs mapr-gateway mapr-historyserver

Install the MapR CLDB, fileserver, webserver, ResourceManager, NodeManager, NFS, gateway, and history server packages with this single command.

Step 11: yum install mapr-zookeeper

Install the MapR ZooKeeper package.

Step 12: ls -l /opt/mapr/roles

Check the installed MapR roles.

Step 13: rpm -qa | grep mapr

List the installed MapR RPM packages.

Step 14: id mapr

Check the ID of the mapr user.

Step 15: hostname -i

Check the host's IP address.

Step 16: /opt/mapr/server/configure.sh -N training -C 192.0.0.0 -Z 192.0.0.0:5181

Configure the node with the cluster name (-N), the CLDB node IP (-C), and the ZooKeeper node IP and port (-Z).

Step 17: cat /root/maprdisk.txt



Check the disk list file.

Step 18: /opt/mapr/server/disksetup -F /root/maprdisk.txt

Format the disks listed in that file for MapR-FS.
Step 19: service mapr-zookeeper start

Start the MapR Zookeeper service

Step 20: service mapr-zookeeper status

Status of the MapR Zookeeper service

Step 21: service mapr-warden start

Start the MapR Warden service

Step 22: service mapr-warden status

Status of the MapR Warden service

Step 23: maprcli node cldbmaster

Check which node is the CLDB master.

Step 24: maprcli license showid

Show your MapR license ID.

Step 25: https://<ipaddress>:8443

Open a web browser at https://<ipaddress>:8443 to check whether the MapR Control System is up and working.

Step 26: hadoop fs -ls /

List the files at the root of the Hadoop file system.


Summary: The steps above cover a complete MapR installation on a single-node Linux cluster, with an explanation of each command.

CTS Hadoop and Spark Interview Questions




Cognizant conducted this Hadoop and Spark interview for experienced candidates.

Round 1:

1. What is the Future class in the Scala programming language?

2. What is the difference between fold, foldLeft, and foldRight in Scala?

3. How does DISTRIBUTE BY work in Hive? Given some data, explain how the data will be distributed.

4. dF.filter(Id == 3000): how do you pass such a condition to a DataFrame dynamically, based on runtime values? (A small sketch follows after this list.)

5. Have you worked with multithreading in Scala? Explain.

7. On what basis would you increase the number of mappers in Apache Sqoop?

8. What last value do you specify when importing for the first time in Sqoop?

9. How do you specify the date for an incremental last-modified import in Sqoop?

10. Let's say you created a partition for Bengaluru but loaded Hyderabad data into it. What validation do we have to do in this case to make sure there won't be any errors?

11. How many reducers will be launched for DISTRIBUTE BY in Spark?

12. What is the simple command to delete a Sqoop job?

13. In which location is a Sqoop job's last value stored?

14. What are the default input and output formats in Hive?

15. Can you briefly explain the distributed cache in Spark with an example?

16. Did you use Kafka/Flume in your project? Explain in detail.

17. What is the difference between the Parquet and ORC file formats?
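For question 4, here is a minimal Scala sketch of passing a filter value dynamically; the DataFrame contents, column names, and the way the value arrives are assumptions made for the example.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object DynamicFilterSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("dynamic-filter").master("local[*]").getOrCreate()
    import spark.implicits._

    // Hypothetical DataFrame standing in for dF from the question.
    val dF = Seq((1000, "a"), (3000, "b"), (3000, "c")).toDF("Id", "name")

    // The value arrives at runtime (e.g. from args or a config file), not hard-coded.
    val idValue: Int = 3000

    // Column-expression form: the predicate is built dynamically from the value.
    dF.filter(col("Id") === idValue).show()

    // SQL-string form is another option when the whole predicate is dynamic.
    dF.filter(s"Id = $idValue").show()

    spark.stop()
  }
}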

Round 2:

1. Explain your previous project.

2. How do you handle incremental data in Apache Sqoop?

3. Which optimization techniques are used in Sqoop?

4. What are the different parameters you pass to your Spark job?

5. If one task is taking more time than the others, how will you handle it?

6. What are stages and tasks in Spark? Give a real-time scenario.

7. On what basis do you set the number of mappers in Sqoop?

8. How will you export data to Oracle without putting much load on the table?

9. What is a column family in HBase?

10. Can you create a table without mentioning a column family?

11. What is the limit on the number of column families for one table?

12. How did you schedule Spark jobs in your previous project?

13. Explain Spark architecture with a real-time scenario.



MapR Installation steps on AWS




MapR installation on an Amazon Web Services machine, with simple steps, for a Hadoop environment.

Step 1: Login with AWS credentials and then open the root machine.

[ec2-user@ip----~]$ sudo su -

Step 2: Stop the iptables service

[root@ip---- ~]# service iptables stop

Step 3: Disable iptables from starting at boot

[root@ip----- ~]# chkconfig iptables off

Step 4: Edit the SELinux configuration

[root@ip----~]# vim /etc/selinux/config

Step 5: In that file, replace enforcing with disabled (save and exit)

SELINUX=disabled

Step 6: Go to the yum repos directory using the command below

[root@ip----~]# cd /etc/yum.repos.d/

Step 7: Edit the MapR ecosystem repo file.

[root@ip----yum.repos.d]# vi mapr_ecosystem.repo

Put the following lines into the above file

[MapR_Ecosystem]
name = MapR Ecosystem Components
baseurl = http://package.mapr.com/releases/MEP/MEP-3.0.4/redhat
gpgcheck = 0
enabled = 1
protected = 1

Step 8: Edit the MapR installer repo file.

[root@ip----yum.repos.d]# vi mapr_installer.repo

Step 9: Edit the MapR core repo file.

[root@ip----yum.repos.d]# vi mapr_core.repo

Put the following lines into the above file

[MapR_Core]
name = MapR Core Components
baseurl = http://archive.mapr.com/releases/v5.0.0/redhat/
gpgcheck = 1
enabled = 1
protected = 1

Step 10: Check the yum repo list

[root@ip----- yum.repos.d]# yum repolist

(here you will see all the enabled repositories)

Step 11: Search for MapR packages.

[root@ip------ yum.repos.d]# yum list all | grep mapr

(this displays all packages related to MapR)

Step 12: Import the MapR GPG key

[root@ip----- yum.repos.d]# rpm --import http://package.mapr.com/releases/pub/maprgpg.key

Step 13: Install the MapR CLDB, fileserver, webserver, ResourceManager, and NodeManager packages

[root@ip------ yum.repos.d]# yum install mapr-cldb mapr-fileserver mapr-webserver mapr-resourcemanager mapr-nodemanager

Step 14: Install MapR ZooKeeper

[root@ip------ yum.repos.d]# yum install mapr-zookeeper

Step 15: List the installed MapR roles

[root@ip----- yum.repos.d]# ls -l /opt/mapr/roles/

Step 16: Search for installed MapR RPM packages using grep.

[root@ip------ yum.repos.d]# rpm -qa | grep mapr

(displays installed packages related to mapr)

Step 17: Add a group for the mapr user

[root@ip------ yum.repos.d]# groupadd -g 5000 mapr

Step 18: Add a mapr user in that group

[root@ip------ yum.repos.d]# useradd -g 5000 -u 5000 mapr

Step 19: Set a password for the mapr user

[root@ip------ yum.repos.d]# passwd mapr

(enter a password for the mapr user; any password will do)

Step 20: Check the mapr user's IDs

[root@ip------ yum.repos.d]# id mapr

Step 21: Check the fully qualified domain name using the command below

[root@ip------ yum.repos.d]# hostname -f

Step 22: check disk availability

[root@ip------ yum.repos.d]# fdisk -l

(here you can see the available disks on the machine; select the second disk for MapR)

Step 23: Add the second disk to the MapR disk list file.

[root@ip----- yum.repos.d]# vi /root/maprdisk.txt

(put the second disk's device path in this file, then save and exit)

Step 24: Configure the node with the cluster name (-N), the CLDB host (-C), and the ZooKeeper host and port (-Z).

[root@ip----- yum.repos.d]# /opt/mapr/server/configure.sh -N training -C ip--------.ap-southeast-1.compute.internal -Z ip------.ap-southeast-1.compute.internal:5181

Step 25: Verify the disk list file

[root@ip------ yum.repos.d]# cat /root/maprdisk.txt

Step 26: Download the EPEL rpm

[root@ip------ ~]# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Step 27: Install the Extra Packages for Enterprise Linux (EPEL) repository rpm

[root@ip------ ~]# rpm -Uvh epel-release-6*.rpm

Step 28: Start the ZooKeeper service

[root@ip------ ~]# service mapr-zookeeper start

Step 29: Start the Warden service

[root@ip-1----- ~]# service mapr-warden start

Step 30: Check the CLDB master node using maprcli

[root@ip----- ~]# maprcli node cldbmaster

Now open the MapR Control System (MCS) in a web browser using your machine's IP and the web-server port, as shown below.

example: http://192.168.0.0:8443


Adding Hive Service in MapR




After a successful installation of the MapR distribution, we need to add services such as Hive, Sqoop, Spark, Impala, etc. Here we add the Hive service with simple commands in MapR for a Hadoop environment.

Add Hive Service in MapR:

Follow the commands below to add the Hive services:

Step 1: Install the MapR Hive packages with yum.

[root@master1 ~]# yum install mapr-hive mapr-hiveserver2 mapr-hivemetastore mapr-hivewebhcat

Yum loads plugins such as fastestmirror, refresh-packagekit, and security, and sets up the install process in this step.

It installs the following MapR Hive service packages:
mapr-hive (noarch)
mapr-hivemetastore
mapr-hiveserver2
mapr-hivewebhcat

Step 2: Install the MySQL server as an external database for the metastore, so it can support multiple users.

[root@master1 ~]# yum install mysql-server

This downloads the following rpm files for the MySQL server:

mysql-5.1.73-8.el6_8.x86_64.rpm
mysql-server-5.1.73-8.el6_8.x86_64.rpm
perl-DBD-MySQL-4.013-3.el6.x86_64.rpm
perl-DBI-1.609-4.el6.x86_64.rpm

Step 3: Check the MySQL service status

[root@master1 ~]# service mysqld status

Step 4: Start the MySQL service with the command below:

[root@master1 ~]# service mysqld start

After starting the MySQL service, log in as root to set up access for the metastore:

#mysql -u root -p

Step 5: Grant all privileges.

mysql> grant all privileges on *.* to 'yourname'@'localhost' identified by 'yourpassword';

Step 6: Flush all privileges.

mysql>flush privileges;

Step 7: Exit from MySQL cli

mysql>exit

Step 8: Set up the hive-site.xml file with the full configuration

[root@master1 ~] # vi /opt/mapr/hive/hive-2.1/conf/hive-site.xml
<configuration>

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>siva</value>
<description>username to use against metastore database</description>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value> your name</value>
<description>password to use against metastore database</description>
</property>

<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost:9089</value>
</property>

</configuration>

Step 9: Export the metastore port number.

[root @ master1 ~]# export METASTORE_PORT=9089

Step 10: Initialize the metastore schema in MySQL

[root @ master1 ~]# /opt/mapr/hive/hive-2.1/bin/schematool -dbType mysql -initSchema

Step 11: Log in to the MySQL CLI with your credentials

[root @ master 1 ~]# mysql -u name -p
Enter password:

Step 12: Check the databases

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| hive |
| mysql | 
| test |
+--------------------+



Step 13: Exit from MySQL CLI

mysql> exit
Bye

Step 14: Install the MySQL Connector/J package for the JDBC connection

[root@master1 ~]# yum -y install mysql-connector-java

Step 15: Start the Hive metastore service

[root@master1 ~]# /opt/mapr/hive/hive-2.1/bin/hive --service metastore --start

Step 16: Start the Hive shell:

[root@master1 ~]# hive
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.



Deloitte Hadoop and Spark Interview Questions



Round 1:

1. Explain your previous project.

2. Write the Apache Sqoop code that you used in your previous project.

3. What is the reason for moving data from a DBMS to the Hadoop environment?

4. What happens when you increase the number of mappers in MapReduce?

5. What is the command to check the last value of an Apache Sqoop job?

6. Can you explain the distributed cache?

7. Explain the Hive optimization techniques used in your project.

8. Which Hive analytic functions did you use in the project?

9. How do you update records in a Hive table with a single command?

10. How do you limit the records when consuming data from a Hive table?

11. How do you change the Hive execution engine to the Apache Spark engine?

12. What is the difference between the Parquet and ORC file formats?

13. How did you handle high data-flow situations in your project?

14. Explain Apache Kafka and its architecture.

15. Which tool will create partitions in an Apache Kafka topic?

16. Which transformations and actions are used in your project?

17. Give a brief explanation of the Spark architecture.

18. How will you check whether data is present in the 6th partition of an RDD?

19. How do you debug Spark code involving regex?

20. Give a brief idea of what a functional programming language is.

21. What is the difference between map and flatMap in Spark?

22. For example, in a Spark word count, which one do you use for splitting, and what happens if you use map instead of flatMap in that program? (See the short sketch after this list.)

23. If you have knowledge of Hadoop clusters, can you explain capacity planning for a four-node cluster?
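For question 22, here is a minimal Scala sketch contrasting map and flatMap in a word count; the sample lines are made up for the example.

import org.apache.spark.{SparkConf, SparkContext}

object MapVsFlatMap {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("map-vs-flatmap").setMaster("local[*]"))
    val lines = sc.parallelize(Seq("spark makes big data simple", "spark is fast"))

    // flatMap flattens each line into individual words: RDD[String]
    val words = lines.flatMap(_.split(" "))

    // map keeps one element per line, producing an array of words per line: RDD[Array[String]]
    val wordArrays = lines.map(_.split(" "))

    println(words.count())      // 8 (total words)
    println(wordArrays.count()) // 2 (one array per line)

    // A word count only works on the flattened RDD:
    words.map((_, 1)).reduceByKey(_ + _).collect().foreach(println)
    sc.stop()
  }
}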


Round 2:

1. Describe the YARN and MapReduce architectures.

2. Explain ZooKeeper's functionality and describe the flow when a node goes down.

3. Explain the data modeling in your project.

4. Are reporting tools used in your project? If yes, explain them.

5. Give a brief idea of broadcast variables in Apache Spark.

6. Can you explain the Agile methodology and describe how the Agile process is structured?