Spark is a cluster computing framework, similar to MapReduce, designed for fast data analysis.
In this application, we count the number of lines containing the string "the". To build it, we use Spark 1.0.1, Scala 2.10.4, and sbt 0.14.0.
1). Run mkdir SimpleSparkProject.
2). Create an .sbt build file at SimpleSparkProject/simple.sbt:
name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.1"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
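A note on the `%%` operator in the dependency line above: it appends the project's Scala binary version to the artifact name when sbt resolves the dependency. So, as a sketch, the line is equivalent to spelling the suffix out by hand with the plain `%` operator:

```
// Equivalent to "org.apache.spark" %% "spark-core" % "1.0.1"
// when scalaVersion is 2.10.x (sbt resolves the _2.10 suffix for you).
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.0.1"
```

Using `%%` is generally preferred, since the suffix stays correct if scalaVersion changes.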
3). Create the source file SimpleSparkProject/src/main/scala/SimpleApp.scala:
package main.scala

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "src/data/sample.txt"
    // Arguments: master, app name, Spark home, and the jar produced by `sbt package`
    val sc = new SparkContext("local", "Simple App", "/path/to/spark-1.0.1-incubating",
      List("target/scala-2.10/simple-project_2.10-1.0.jar"))
    // Load the file as an RDD with 2 partitions and cache it in memory
    val logData = sc.textFile(logFile, 2).cache()
    // Count the lines that contain the string "the"
    val numTHEs = logData.filter(line => line.contains("the")).count()
    println("Lines with the: %s".format(numTHEs))
  }
}
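The core of the program is the filter/count step. As a sketch of what it computes, the same logic can be run in plain Scala on an in-memory collection (no Spark needed); the sample lines here are hypothetical:

```scala
// Plain-Scala sketch of the filter/count logic used in SimpleApp,
// applied to a hypothetical in-memory list of lines instead of an RDD.
object FilterSketch {
  def countLinesWith(lines: Seq[String], word: String): Int =
    lines.count(line => line.contains(word))

  def main(args: Array[String]): Unit = {
    val sample = Seq("the quick brown fox", "hello world", "over the lazy dog")
    // Lines 1 and 3 contain "the", so this prints 2
    println(countLinesWith(sample, "the"))
  }
}
```

Spark's `RDD.filter(...).count()` performs the same computation, but distributed across partitions.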
4). Then cd into the SimpleSparkProject directory.
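Before running, the input file that SimpleApp reads must exist at src/data/sample.txt (the path hard-coded in the code above). A minimal sketch that creates a hypothetical sample file and sanity-checks the expected count with grep:

```shell
# Create a hypothetical sample input file at the path SimpleApp reads.
mkdir -p src/data
printf 'the quick brown fox\nhello world\njumped over the lazy dog\n' > src/data/sample.txt

# Sanity check: grep -c counts lines containing "the"; here that is 2,
# which is the number SimpleApp should print.
grep -c "the" src/data/sample.txt
```

Any text file works; the count printed by the app will simply match `grep -c "the"` on that file.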
5). Run sbt package.
6). Run sbt run.