Apache Spark has its architectural foundation in the resilient distributed dataset (RDD), a read-only
multiset of data items distributed over a cluster of machines, which is maintained in a
fault-tolerant way. The DataFrame API was released as an abstraction on top of the RDD, followed by the Dataset API. In Spark 1.x, the RDD was the primary
application programming interface (API), but as of Spark 2.x use of the Dataset API is encouraged even though the RDD API is not
deprecated. The RDD technology still underlies the Dataset API.
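
For example, the following sketch (with a hypothetical application name) performs the same computation once through the RDD API and once through the Dataset API; the Dataset version passes through Spark's query optimizer but is still executed as RDD operations underneath.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("api_comparison").getOrCreate()
import spark.implicits._

val nums = 1L to 100L

// RDD API: the original Spark 1.x abstraction, still supported.
val rddSum = spark.sparkContext.parallelize(nums).map(_ * 2).reduce(_ + _)

// Dataset API: encouraged as of Spark 2.x; the same logic on a typed Dataset.
val dsSum = nums.toDS().map(_ * 2).reduce(_ + _)

println(rddSum == dsSum)  // true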

Spark and its RDDs were developed in 2012 in response to limitations in the MapReduce cluster computing
paradigm, which forces a particular linear
dataflow structure on distributed programs: MapReduce programs read input data from disk,
map a function across the data,
reduce the results of the map, and store reduction results on disk. Spark's RDDs function as a
working set for distributed programs that offers a (deliberately) restricted form of distributed
shared memory. Inside Apache Spark the workflow is managed as a
directed acyclic graph (DAG). Nodes represent RDDs while edges represent the operations on the RDDs. Spark facilitates the implementation of both
iterative algorithms, which visit their data set multiple times in a loop, and interactive/exploratory data analysis, i.e., the repeated
database-style querying of data. The
latency of such applications may be reduced by several orders of magnitude compared to an Apache Hadoop MapReduce implementation. Among the class of iterative algorithms are the training algorithms for machine learning systems, which formed the initial impetus for developing Apache Spark.
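
As an illustration of this working-set style, the following sketch (the input path and the toy gradient-descent loop are made up for illustration) caches an RDD in cluster memory and revisits it on every iteration, rather than rereading the data from disk on each pass as a MapReduce-style job would.

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("iterative_example"))

// Load the data once and keep it in memory so that each iteration can reuse it.
val points = sc.textFile("/path/to/points.txt").map(_.toDouble).cache()

// Toy iterative algorithm: gradient descent on squared error, converging to the mean.
var estimate = 0.0
for (_ <- 1 to 20) {
  val current = estimate                               // capture the current estimate for the task closure
  val gradient = points.map(p => current - p).mean()   // one full pass over the cached RDD
  estimate -= 0.5 * gradient
}
println(estimate)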

Apache Spark requires a
cluster manager and a
distributed storage system. For cluster management, Spark supports standalone native Spark,
Hadoop YARN,
Apache Mesos or
Kubernetes. A standalone native Spark cluster can be launched manually or by using the launch scripts provided by the install package. It is also possible to run the daemons on a single machine for testing. For distributed storage, Spark can interface with a wide variety of distributed systems, including
Alluxio,
Hadoop Distributed File System (HDFS),
MapR File System (MapR-FS),
Cassandra,
OpenStack Swift,
Amazon S3,
Kudu,
Lustre file system, or a custom solution can be implemented. Spark also supports a pseudo-distributed local mode, usually used only for development or testing purposes, where distributed storage is not required and the local file system can be used instead; in such a scenario, Spark is run on a single machine with one executor per
CPU core.
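
For example, the following sketch (with a hypothetical application name) configures such a local mode by passing a "local[*]" master URL, which requests as many worker threads as there are CPU cores; "local[4]" would request exactly four.

import org.apache.spark.{SparkConf, SparkContext}

// Run Spark on a single machine, with one worker thread per CPU core.
val conf = new SparkConf().setAppName("local_test").setMaster("local[*]")
val sc = new SparkContext(conf)

println(sc.defaultParallelism)  // usually the number of local cores in this mode
sc.stop()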

===Spark Core===
Spark Core is the foundation of the overall project. It provides distributed task dispatching, scheduling, and basic
I/O functionalities, exposed through an application programming interface (for
Java,
Python,
Scala,
.NET and
R) centered on the RDD
abstraction (the Java API is usable by other JVM languages, and also by some non-JVM languages that can connect to the JVM, such as
Julia). This interface mirrors a
functional/
higher-order model of programming: a "driver" program invokes parallel operations such as map,
filter or reduce on an RDD by passing a function to Spark, which then schedules the function's execution in parallel on the cluster. These operations, and additional ones such as
joins, take RDDs as input and produce new RDDs. RDDs are
immutable and their operations are
lazy; fault-tolerance is achieved by keeping track of the "lineage" of each RDD (the sequence of operations that produced it) so that it can be reconstructed in the case of data loss. RDDs can contain any type of Python, .NET, Java, or Scala objects. Besides the RDD-oriented functional style of programming, Spark provides two restricted forms of shared variables:
broadcast variables reference read-only data that needs to be available on all nodes, while
accumulators can be used to program reductions in an imperative style.
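
A minimal sketch of both kinds of shared variable follows (the lookup table and the input values are made up for illustration; the accumulator shown uses the Spark 2.x longAccumulator API).

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("shared_variables"))

// Broadcast variable: read-only data shipped once to every node.
val countryNames = sc.broadcast(Map("de" -> "Germany", "fr" -> "France"))

// Accumulator: tasks add to it imperatively; only the driver reads the result.
val unknownCodes = sc.longAccumulator("unknown country codes")

val resolved = sc.parallelize(Seq("de", "fr", "xx")).map { code =>
  if (!countryNames.value.contains(code)) unknownCodes.add(1)
  countryNames.value.getOrElse(code, "unknown")
}
resolved.collect()           // running an action also updates the accumulator
println(unknownCodes.value)  // prints 1 for this input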

A typical example of RDD-centric functional programming is the following Scala program that computes the frequencies of all words occurring in a set of text files and prints the most common ones. Each map, flatMap (a variant of map) and reduceByKey takes an anonymous function that performs a simple operation on a single data item (or a pair of items), and applies its argument to transform an RDD into a new RDD.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

val conf: SparkConf = new SparkConf().setAppName("wiki_test")  // create a Spark configuration object
val sc: SparkContext = new SparkContext(conf)                  // create a Spark context
val data: RDD[String] = sc.textFile("/path/to/somedir")        // read the files under "somedir" into an RDD of lines
val tokens: RDD[String] = data.flatMap(_.split(" "))           // split each line into a list of tokens (words)
val wordFreq: RDD[(String, Int)] = tokens.map((_, 1)).reduceByKey(_ + _)  // pair each token with a count of one, then sum the counts per word type
val topWords: Array[(Int, String)] = wordFreq.sortBy(s => -s._2).map(x => (x._2, x._1)).top(10)  // sort by descending count, swap word and count, and take the top ten
topWords.foreach(println)                                      // print the most common words with their counts

===Spark SQL===
Spark SQL is a component on top of Spark Core that introduced a data abstraction called DataFrames, which provides support for structured and semi-structured data. Spark SQL provides a domain-specific language (DSL) to manipulate DataFrames in Scala, Java, Python or .NET.
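
For example, the following Scala sketch (the JSON path and column names are hypothetical) reads semi-structured data into a DataFrame and queries it through the DSL; the same query could be expressed in the other supported languages or in SQL.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("sql_example").getOrCreate()

// Infer a schema from semi-structured JSON and query the result with the DSL.
val people = spark.read.json("/path/to/people.json")
people.filter(col("age") > 21)
  .groupBy(col("country"))
  .count()
  .show()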

===Spark Streaming===
Spark Streaming uses Spark Core's fast scheduling capability to perform streaming analytics. It ingests data in mini-batches and performs RDD transformations on those mini-batches, which allows the same application code written for batch analytics to be reused for streaming analytics. However, this convenience comes with the penalty of latency equal to the mini-batch duration. Other streaming data engines that process event by event rather than in mini-batches include
Storm and the streaming component of
Flink. Spark Streaming has support built-in to consume from
Kafka,
Flume,
Twitter,
ZeroMQ,
Kinesis, and
TCP/IP sockets. In Spark 2.x, a separate technology based on Datasets, called Structured Streaming, which has a higher-level interface, is also provided to support streaming. Spark can be deployed in a traditional
on-premises data center as well as in the
cloud.
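
For example, the following sketch (assuming a hypothetical text source listening on TCP port 9999 of localhost) consumes a socket stream in one-second mini-batches and applies the same RDD-style word-count transformations used in batch jobs.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("streaming_example")
val ssc = new StreamingContext(conf, Seconds(1))      // one-second mini-batches

val lines = ssc.socketTextStream("localhost", 9999)   // TCP/IP socket source
val wordCounts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
wordCounts.print()                                    // print a sample of each batch's counts

ssc.start()             // begin ingesting data
ssc.awaitTermination()  // run until the job is stopped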

===MLlib machine learning library===
Spark MLlib is a
distributed machine-learning framework on top of Spark Core that, due in large part to the distributed memory-based Spark architecture, is as much as nine times as fast as the disk-based implementation used by
Apache Mahout (according to benchmarks done by the MLlib developers against the
alternating least squares (ALS) implementations, and before Mahout itself gained a Spark interface), and
scales better than
Vowpal Wabbit. Many common machine learning and statistical algorithms have been implemented and are shipped with MLlib, which simplifies large-scale machine learning
pipelines, including the following (a brief example appears after the list):
• summary statistics, correlations, stratified sampling, hypothesis testing, random data generation
• classification and regression: support vector machines, logistic regression, linear regression, naive Bayes classification, decision trees, random forests, gradient-boosted trees
• collaborative filtering techniques including alternating least squares (ALS)
• cluster analysis methods including k-means and latent Dirichlet allocation (LDA)
• dimensionality reduction techniques such as singular value decomposition (SVD) and principal component analysis (PCA)
• feature extraction and transformation functions
• optimization algorithms such as stochastic gradient descent and limited-memory BFGS (L-BFGS)
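
As a small illustration of the DataFrame-based spark.ml interface, the following sketch (with a hypothetical input path and an arbitrary choice of k) fits a k-means model to a set of feature vectors.

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("mllib_example").getOrCreate()

// Feature vectors in LIBSVM format; any DataFrame with a "features" column would do.
val dataset = spark.read.format("libsvm").load("/path/to/kmeans_data.txt")

val kmeans = new KMeans().setK(2).setSeed(1L)  // cluster the points into k = 2 groups
val model = kmeans.fit(dataset)
model.clusterCenters.foreach(println)          // the learned cluster centers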

===GraphX===
GraphX is a distributed
graph-processing framework on top of Apache Spark. Because it is based on RDDs, which are immutable, graphs are immutable and thus GraphX is unsuitable for graphs that need to be updated, let alone in a transactional manner like a
graph database. GraphX provides two separate APIs for implementation of massively parallel algorithms (such as
PageRank): a
Pregel abstraction, and a more general MapReduce-style API. Unlike its predecessor Bagel, which was formally deprecated in Spark 1.6, GraphX has full support for property graphs (graphs where properties can be attached to edges and vertices). Like Apache Spark, GraphX initially started as a research project at UC Berkeley's AMPLab and Databricks, and was later donated to the Apache Software Foundation and the Spark project.
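
A minimal GraphX sketch follows (using a made-up three-vertex property graph and assuming an existing SparkContext sc, as in the earlier examples); it runs the built-in PageRank implementation, which itself uses the Pregel abstraction internally.

import org.apache.spark.graphx.{Edge, Graph}

// A tiny property graph: vertex properties are names, edge properties are weights.
val vertices = sc.parallelize(Seq((1L, "alice"), (2L, "bob"), (3L, "carol")))
val edges = sc.parallelize(Seq(Edge(1L, 2L, 1.0), Edge(2L, 3L, 1.0), Edge(3L, 1L, 1.0)))
val graph = Graph(vertices, edges)

// Iterate PageRank until the ranks change by less than the given tolerance.
val ranks = graph.pageRank(0.0001).vertices
ranks.collect().foreach { case (id, rank) => println(s"$id has rank $rank") }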

===Language support===
Apache Spark has built-in support for Scala, Java, SQL, R, and Python, with third-party support for the .NET CLR, Julia, and more.

==History==