Apache Spark is an in-memory cluster computing framework for processing and analyzing large amounts of data. It provides a simple programming interface that lets an application developer easily use the memory, CPU, and storage resources of a cluster of servers to process large data sets.
Spark is also an open-source distributed framework for big data processing, offering APIs in Java, Python, Scala, and R.
Key Features of Spark:
1. Fast:
When the data fits in memory, Spark can be up to 100 times faster than Hadoop MapReduce.
Spark is faster than Hadoop MapReduce for two reasons:
1. It implements an advanced execution engine
2. It allows in-memory cluster computing
Spark does not automatically cache input data in memory. A common misconception is that Spark cannot be used if the input data does not fit in memory; this is not true.
Spark can process terabytes of data on a cluster that has only 100 GB of total memory.
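The speed-up from in-memory computing can be illustrated with a small sketch. This is plain Python, not the Spark API: the `Dataset` class and `expensive_transform` function are hypothetical stand-ins showing why caching an intermediate result in memory beats recomputing it (or re-reading it from disk) on every pass, which is the idea behind Spark's advantage over disk-based MapReduce.

```python
import time

def expensive_transform(records):
    # Stand-in for a costly transformation over a data set.
    time.sleep(0.05)
    return [r * 2 for r in records]

class Dataset:
    # Hypothetical illustration of caching, not Spark's actual classes.
    def __init__(self, records):
        self.records = records
        self._cache = None

    def transformed(self, use_cache):
        # With caching, the transform runs once; without, every call recomputes.
        if use_cache:
            if self._cache is None:
                self._cache = expensive_transform(self.records)
            return self._cache
        return expensive_transform(self.records)

ds = Dataset(list(range(5)))

start = time.perf_counter()
for _ in range(10):
    uncached = ds.transformed(use_cache=False)
no_cache_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(10):
    cached = ds.transformed(use_cache=True)
cache_time = time.perf_counter() - start

assert cached == uncached          # same result either way
assert cache_time < no_cache_time  # but the cached runs are far cheaper
```

In Spark itself, the developer opts into this behavior explicitly (e.g. by persisting a data set), which is why caching is a choice rather than something Spark does automatically.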
2. General Purpose:
Apache Spark supports different types of data processing jobs. It can be used for batch processing, interactive analytics, stream processing, machine learning, and graph processing.
3. Scalable:
Spark is scalable because the data processing capacity of a Spark cluster can be increased by simply adding more nodes to the cluster. No code change is required when you add a node to a Spark cluster.
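A minimal sketch of why no code change is needed, in plain Python rather than the Spark API: the job logic (`process`) is written against a partition of the data, so growing the cluster only changes how many partitions the data is split into, never the job code itself. The `partition` and `process` functions here are hypothetical illustrations.

```python
def partition(records, num_nodes):
    # Round-robin assignment of records to nodes.
    shards = [[] for _ in range(num_nodes)]
    for i, r in enumerate(records):
        shards[i % num_nodes].append(r)
    return shards

def process(shard):
    # The per-node job: identical code regardless of cluster size.
    return sum(r * r for r in shard)

records = list(range(1000))

# Run on a 2-node cluster, then "add nodes" to make it 8 -- no code change.
total_2 = sum(process(s) for s in partition(records, 2))
total_8 = sum(process(s) for s in partition(records, 8))

assert total_2 == total_8  # same answer either way
# Each node now carries a smaller share of the work:
assert max(map(len, partition(records, 8))) < max(map(len, partition(records, 2)))
```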
4. Fault Tolerant:
Apache Spark is fault tolerant because, in a cluster of a few hundred nodes, the probability of some node failing on any given day is high: a hard disk may crash, or some other hardware problem may occur.
Spark automatically handles the failure of a node in the cluster.
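The mechanism behind this can be sketched in plain Python (again, not the Spark API): Spark records the lineage of each partition, i.e. the chain of transformations that produced it, so when a node dies, only that node's partitions are recomputed from the source data instead of the whole job failing. The partition dictionary and `transform` function below are hypothetical.

```python
# Input data split into partitions across three nodes.
source = {0: [1, 2, 3], 1: [4, 5, 6], 2: [7, 8, 9]}

def transform(part):
    # The recorded lineage: the transformation that produced each partition.
    return [x * 10 for x in part]

# Each node computes its partition.
computed = {pid: transform(part) for pid, part in source.items()}

# The node holding partition 1 crashes and its result is lost.
del computed[1]

# Recovery: re-run the recorded lineage for just the missing partition.
for pid in source:
    if pid not in computed:
        computed[pid] = transform(source[pid])

assert computed[1] == [40, 50, 60]  # recovered without redoing partitions 0 and 2
```

The design choice is notable: instead of replicating every intermediate result (which costs memory and network), Spark stores only the cheap lineage description and pays the recomputation cost on the rare failure.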
Here is the basic difference between Spark and Hadoop MapReduce: