Lightning-fast unified analytics engine

Apache Spark™ is a unified analytics engine for large-scale data processing.

Speed

Run workloads 100x faster.

Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine.

Logistic regression in Hadoop and Spark

Ease of Use

Write applications quickly in Java, Scala, Python, R, and SQL.

Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells.

df = spark.read.json("logs.json")
df.where("age > 21").select("name.first").show()
Spark's Python DataFrame API
Read JSON files with automatic schema inference

Generality

Combine SQL, streaming, and complex analytics.

Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.

Libraries: Spark SQL, Spark Streaming, MLlib (machine learning), GraphX

Runs Everywhere

Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources.

You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
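
In practice, these deployment modes differ mainly in the `--master` URL handed to `spark-submit`. A configuration sketch (hostnames, ports, image names, and `app.py` are placeholders):

```shell
# Standalone cluster mode
spark-submit --master spark://host:7077 app.py

# Hadoop YARN
spark-submit --master yarn --deploy-mode cluster app.py

# Mesos
spark-submit --master mesos://host:5050 app.py

# Kubernetes
spark-submit --master k8s://https://host:6443 \
  --conf spark.kubernetes.container.image=my-spark-image app.py

# Local mode (useful for development), using all cores
spark-submit --master local[*] app.py
```

The application code itself is unchanged across these modes; only the submission configuration differs.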

Community

Spark is used at a wide range of organizations to process large datasets. You can find many example use cases on the Powered By page.

There are many ways to reach the community.

Contributors

Apache Spark is built by a wide set of developers from over 300 companies. Since 2009, more than 1200 developers have contributed to Spark!

The project's committers come from more than 25 organizations.

If you'd like to participate in Spark, or contribute to the libraries on top of it, learn how to contribute.

Getting Started

Learning Apache Spark is easy whether you come from a Java, Scala, Python, R, or SQL background.
