Configuration file:
pom.xml
<properties>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.2.0</spark.version>
    <hadoop.version>2.6.0-cdh5.7.0</hadoop.version>
</properties>

<repositories>
    <!-- Add the Cloudera repository; the CDH builds are hosted there -->
    <repository>
        <id>cloudera</id>
        <name>cloudera</name>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
</repositories>

<dependencies>
    <!-- Scala dependency -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <!-- spark-core dependency, built for Scala 2.11 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- hadoop-client dependency -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>
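Note that the pom above only declares dependencies; for Maven to compile the Scala sources into the jar submitted below, a Scala compiler plugin is also needed. A minimal sketch using scala-maven-plugin (the plugin version is an assumption; this block is not part of the original pom):

<build>
    <plugins>
        <!-- Compiles Scala sources during the Maven build (assumed plugin, adjust version as needed) -->
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.2.2</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>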
Test code:
The input and output paths are passed in as command-line arguments, args(0) and args(1), rather than hard-coded (see the spark-submit command at the end).
WordCountApp.scala
package com.ruozedata

import org.apache.spark.{SparkConf, SparkContext}

// Spark recommends defining a main() method instead of extending scala.App,
// since subclasses of scala.App may not work correctly.
object WordCountApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    val sc = new SparkContext(conf)
    // input: the path comes in via args(0) instead of being hard-coded
    val dataFile = sc.textFile(args(0))
    // business logic: split each line on commas, pair each word with 1, sum the counts per word
    val counts = dataFile.flatMap(_.split(",")).map((_, 1)).reduceByKey(_ + _)
    // write the result files to args(1)
    counts.saveAsTextFile(args(1))
    // stop the SparkContext to release resources (not an I/O stream close)
    sc.stop()
  }
}
Testing in the CLI:

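Before packaging, the same logic can be sanity-checked interactively in spark-shell; a minimal sketch (the sample data here is made up):

$ spark-shell --master local[2]
scala> val sample = sc.parallelize(Seq("a,b,a", "b,c"))
scala> sample.flatMap(_.split(",")).map((_, 1)).reduceByKey(_ + _).collect()
// expected counts: (a,2), (b,2), (c,1) -- element order may vary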
Package the jar, ship it to the server, and run it:
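A typical flow is to build the jar with Maven and copy it to the server; a minimal sketch (the scp destination is an assumption, chosen to match the jar path used in spark-submit below):

$ mvn clean package -DskipTests
$ scp target/SparkCodeApp-1.0.jar hadoop@<server>:/home/hadoop/lib/spark/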
Local-mode submission on Linux (put the command in a script):
$ /home/hadoop/app/spark/bin/spark-submit \
--class com.ruozedata.WordCountApp \
--master local[2] \
--name WordCountApp \
/home/hadoop/lib/spark/SparkCodeApp-1.0.jar \
/wc_input/ /wc_output
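The two trailing arguments become args(0) and args(1). Assuming both are HDFS paths, the result can be inspected after the job finishes (note that saveAsTextFile fails if the output directory already exists):

$ hdfs dfs -cat /wc_output/part-*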
For detailed configuration, see the official Spark docs:
http://spark.apache.org/docs/2.2.0/rdd-programming-guide.html
http://spark.apache.org/docs/2.2.0/configuration.html
http://spark.apache.org/docs/2.2.0/submitting-applications.html