Spark Tutorial
Apache Spark is an open-source cluster computing framework. It is designed for fast, large-scale data processing and is widely used to process data generated in real time.
Spark builds on the Hadoop MapReduce model. It is optimized to run in memory, whereas alternatives such as Hadoop MapReduce write intermediate data to the machine's hard drive and read it back between stages. As a result, Spark processes data much faster than those alternatives.
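One place the in-memory design shows up in practice is caching: an RDD can be marked to persist in memory so repeated actions reuse it instead of recomputing it or rereading it from disk. A minimal Spark-shell sketch (assuming `sc` is the shell's preconfigured SparkContext; `data.txt` is a hypothetical input file):

```scala
// Read a text file and compute the length of each line.
val lines = sc.textFile("data.txt")      // hypothetical input path
val lengths = lines.map(_.length)

lengths.cache()        // ask Spark to keep this RDD in memory once computed
lengths.count()        // first action: computes the RDD and caches it
lengths.reduce(_ + _)  // second action: served from memory, no disk re-read
```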
The Spark architecture relies on two abstractions:
- Resilient Distributed Dataset (RDD)
- Directed Acyclic Graph (DAG)
For example, creating an RDD in the Spark shell and filtering it:

```scala
scala> val data = sc.parallelize(List(10, 20, 30))
data: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[2] at parallelize at <console>:24

scala> data.collect
res3: Array[Int] = Array(10, 20, 30)

scala> val abc = data.filter(x => x != 30)
```
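The last line returns instantly because `filter` is a transformation: it only adds a node to the DAG mentioned above, and nothing runs until an action such as `collect` is called. The recorded lineage can be inspected with the RDD's `toDebugString` method (a real RDD method; the exact output depends on your Spark version and partitioning):

```scala
// Trigger execution: only now does Spark run the filter over the partitions.
abc.collect()          // Array(10, 20)

// Print the lineage Spark recorded: a MapPartitionsRDD (from filter)
// on top of the ParallelCollectionRDD created by parallelize.
println(abc.toDebugString)
```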
Spark Overview
Programming Guides:
- Quick Start: a quick introduction to the Spark API; start here!
- RDD Programming Guide: overview of Spark basics - RDDs (core but old API), accumulators, and broadcast variables
- Spark SQL, Datasets, and DataFrames: processing structured data with relational queries (newer API than RDDs); see the sketch after this list
- Structured Streaming: processing structured data streams with relational queries (using Datasets and DataFrames, newer API than DStreams)
- Spark Streaming: processing data streams using DStreams (old API)
- MLlib: applying machine learning algorithms
- GraphX: processing graphs
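As a taste of the Spark SQL entry above, here is a minimal DataFrame sketch for the Spark shell (assuming `spark` is the shell's preconfigured SparkSession; the view and column names are made up for illustration):

```scala
import spark.implicits._   // enables .toDF and the $"column" syntax in the shell

// Build a small DataFrame from local data and filter it relationally.
val people = Seq(("Alice", 34), ("Bob", 19)).toDF("name", "age")
people.filter($"age" > 21).show()

// The same query expressed in SQL against a temporary view.
people.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 21").show()
```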
API Docs:
- Spark 3.0.1 ScalaDoc
Apache Spark Examples
Additional Examples
Many additional examples are distributed with Spark:
- Basic Spark: Scala examples, Java examples, Python examples
- Spark Streaming: Scala examples, Java examples


