Apache Spark is hot. Spark, a top-level Apache project, is an open source distributed computing framework for advanced analytics in Hadoop. Originally developed as a research project at UC Berkeley’s AMPLab, the project achieved incubator status in Apache in June 2013 and top-level status in February 2014.
Spark seeks to address the critical challenges for advanced analytics in Hadoop. First, Spark is designed to support in-memory processing, so developers can write iterative algorithms without writing out a result set after each pass through the data. This enables genuinely high-performance advanced analytics; for techniques like logistic regression, project sponsors report runtimes in Spark up to 100X faster than what they are able to achieve with MapReduce.
Second, Spark offers an integrated framework for advanced analytics, including a machine learning library (MLLib), a graph engine (GraphX), a streaming analytics engine (Spark Streaming) and a fast interactive query tool (Shark). (Update: Databricks recently announced Alpha availability of Spark SQL.) This eliminates the need to support multiple point solutions, such as Giraph and GraphLab for graph engines, Storm and S4 for streaming, or Hive and Impala for interactive queries. A single platform simplifies integration and ensures that users can produce consistent results across different types of analysis.
At Spark’s core is an abstraction layer called Resilient Distributed Datasets, or RDDs. RDDs are read-only, partitioned collections of records created through deterministic operations on stable data or other RDDs. RDDs include information about data lineage together with instructions for data transformation and (optional) instructions for persistence. They are designed to be fault tolerant: if a partition is lost, it can be reconstructed by replaying the deterministic operations recorded in its lineage.
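The lineage idea can be sketched in a few lines. This is an illustrative toy, not Spark's implementation; the class and method names here are made up. The point is that a dataset remembers its parent and the deterministic transformation that produced it, so a lost partition can be rebuilt by recomputation rather than by replicating the data.

```python
# A toy sketch of the RDD idea: each dataset records its lineage
# (parent + deterministic transform), so a lost partition is recovered
# by replaying the transform, not by restoring a replica.
class MiniRDD:
    def __init__(self, partitions, parent=None, transform=None):
        self.partitions = partitions      # list of lists of records
        self.parent = parent              # lineage: the RDD this came from
        self.transform = transform        # deterministic per-partition function

    @classmethod
    def from_data(cls, data, num_partitions=2):
        size = max(1, len(data) // num_partitions)
        parts = [data[i:i + size] for i in range(0, len(data), size)]
        return cls(parts)

    def map(self, fn):
        transform = lambda part: [fn(x) for x in part]
        return MiniRDD([transform(p) for p in self.partitions],
                       parent=self, transform=transform)

    def recover_partition(self, i):
        # Fault tolerance: recompute a lost partition from lineage.
        return self.transform(self.parent.partitions[i])

    def collect(self):
        return [x for p in self.partitions for x in p]

base = MiniRDD.from_data([1, 2, 3, 4])
doubled = base.map(lambda x: x * 2)
doubled.partitions[0] = None                          # simulate a lost partition
doubled.partitions[0] = doubled.recover_partition(0)  # rebuild it from lineage
print(doubled.collect())                              # -> [2, 4, 6, 8]
```

Because every transformation is deterministic, recomputation always yields the same records the lost partition held, which is what makes lineage a substitute for replication.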
For data sources, Spark works with any file stored in HDFS or any other storage system supported by Hadoop (including local file systems, Amazon S3, Hypertable and HBase). Spark supports text files, SequenceFiles and any other Hadoop InputFormat.
Spark’s machine learning library, MLLib, is rapidly growing. In the latest release it includes linear support vector machines and logistic regression for binary classification; linear regression; k-means clustering; and alternating least squares for collaborative filtering. Linear regression, logistic regression and support vector machines are all based on a gradient descent optimization algorithm, with options for L1 and L2 regularization. MLLib is part of a larger machine learning project (MLBase), which includes an API for feature extraction and an optimizer (currently in development, with planned release in 2014).
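To make the optimization concrete, here is a minimal sketch of gradient descent for logistic regression with L2 regularization, the combination the paragraph above describes. This is plain single-machine Python with made-up toy data, not MLLib's distributed implementation or API.

```python
# Illustrative sketch: batch gradient descent minimizing
# logistic loss + (reg_l2 / 2) * ||w||^2 on a tiny toy dataset.
import math

def train_logistic(X, y, lr=0.5, reg_l2=0.01, iters=200):
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(iters):
        grad = [reg_l2 * wj for wj in w]          # gradient of the L2 penalty
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid prediction
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj / n      # logistic loss gradient
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

# Toy separable data: first feature is a bias term,
# label is 1 exactly when the second feature is positive.
X = [[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]]
y = [0, 0, 1, 1]
w = train_logistic(X, y)
predict = lambda xi: 1 if sum(a * b for a, b in zip(w, xi)) > 0 else 0
print([predict(xi) for xi in X])   # -> [0, 0, 1, 1]
```

Swapping the loss gradient yields the other gradient-descent methods the article lists (hinge loss for linear SVMs, squared loss for linear regression), and replacing the L2 penalty term with a soft-threshold step gives L1 regularization.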
GraphX, Spark’s graph engine, combines the advantages of data-parallel and graph-parallel systems by efficiently expressing graph computation within the Spark framework. It enables users to interactively load, transform, and compute on massive graphs. Project sponsors report performance comparable to Apache Giraph, but in a fault tolerant environment that is readily integrated with other advanced analytics.
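The graph-parallel pattern GraphX expresses can be illustrated with a simplified PageRank: in each superstep, vertices send contributions along their out-edges and aggregate what they receive. This is a single-machine toy, not GraphX's Scala API, and the function names are invented for the sketch.

```python
# Toy graph-parallel computation: simplified PageRank over an edge list.
# Each iteration mirrors the send-message / merge-message / update pattern
# that graph engines like GraphX and Giraph express.
def pagerank(edges, num_iters=20, damping=0.85):
    vertices = {v for e in edges for v in e}
    out_degree = {v: 0 for v in vertices}
    for src, _ in edges:
        out_degree[src] += 1
    rank = {v: 1.0 for v in vertices}
    for _ in range(num_iters):
        # "send": each vertex shares its rank along its out-edges
        msgs = {v: 0.0 for v in vertices}
        for src, dst in edges:
            msgs[dst] += rank[src] / out_degree[src]
        # "merge" + update: combine incoming contributions
        rank = {v: (1 - damping) + damping * msgs[v] for v in vertices}
    return rank

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
ranks = pagerank(edges)
print(max(ranks, key=ranks.get))   # "c", which has the most in-links
```

Running this inside Spark rather than as standalone code is what gives GraphX its fault tolerance: the per-iteration vertex and message collections are RDDs, recoverable through lineage.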
Spark Streaming offers an additional abstraction called discretized streams, or DStreams. DStreams are a continuous sequence of RDDs representing a stream of data; they are created from live incoming data or generated by transforming other DStreams. Spark receives data, divides it into batches, then replicates the batches for fault tolerance and persists them in memory where they are available for mathematical operations.
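The essence of discretization is that a stream becomes a sequence of small batch computations. A minimal sketch, with invented names and a hypothetical word-count job rather than Spark Streaming's actual API:

```python
# Toy discretized stream: chop incoming records into fixed-size batches,
# then run the same batch transformation over each one, accumulating
# results across batches -- the micro-batch model in miniature.
from collections import Counter
from itertools import islice

def discretize(records, batch_size):
    """Yield fixed-size batches from an iterable of records."""
    it = iter(records)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def running_word_count(lines, batch_size=2):
    totals = Counter()
    for batch in discretize(lines, batch_size):
        # Per-batch transformation, like a map/reduce over one micro-batch
        totals.update(w for line in batch for w in line.split())
    return totals

counts = running_word_count(["spark streams", "spark batches", "fast spark"])
print(counts["spark"])   # -> 3
```

Because each batch is an ordinary RDD in Spark Streaming, the same transformations (and the same lineage-based recovery) apply to streaming data as to data at rest.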
Currently, Spark supports programming interfaces for Scala, Java and Python. For R users, the team at Berkeley’s AMPLab released a developer preview of SparkR in January.
There is an active and growing developer community for Spark; 83 developers contributed to Release 0.9. In the past six months, developers contributed more commits to Spark than to all of the other Apache analytics projects combined. In 2013, the Spark project published seven double-dot releases, including Spark 0.8.1 published on December 19; this release included YARN 2.2 support, high availability mode for cluster management, performance optimizations and improvements to the machine learning library and Python interface. The Spark team released 0.9.0 in February 2014 and 0.9.1, a maintenance release, in April 2014. Release 0.9 includes Scala 2.10 support, a configuration library, improvements to Spark Streaming, the Alpha release of GraphX, enhancements to MLLib and many other improvements.
In a nod to Spark’s rapid progress, Cloudera announced immediate support for Spark in February. MapR recently announced that it will distribute the complete Spark stack, including Shark (Cloudera does not distribute Shark). Hortonworks also recently announced plans to distribute Spark for machine learning, though it plans to stick with Storm for streaming analytics and Giraph for graph engines. Databricks offers a certification program for Spark; participants currently include Adatao, Alpine Data Labs, ClearStory and Tresata.
In December, the first Spark Summit attracted more than 450 participants from more than 180 companies. Presentations covered a range of applications such as neuroscience, audience expansion, real-time network optimization and real-time data center management, together with a range of technical topics. The 2014 Spark Summit will be held in San Francisco this June 30-July 2.
In recognition of Spark’s rapid development, on February 27 Apache announced that Spark is a top-level project. Developers expect to continue adding machine learning features and to simplify implementation. Together with an R interface and commercial support, we can expect continued interest in and adoption of Spark. Enhancements are coming rapidly; expect more announcements before the Spark Summit.