2. Writing a Standalone Application for Data Deduplication
Given two input files, A and B, write a standalone Spark application that merges the two files, removes the duplicate entries, and produces a new file C. Samples of the input and output files are given below for reference.
Sample of input file A:
20170101 x
20170102 y
20170103 x
20170104 y
20170105 z
20170106 z
Sample of input file B:
20170101 y
20170102 y
20170103 x
20170104 z
20170105 y
Sample of the output file C obtained by merging input files A and B:
20170101 x
20170101 y
20170102 y
20170103 x
20170104 y
20170104 z
20170105 y
20170105 z
20170106 z
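In essence the job is a union of the two inputs followed by distinct(). Before building the standalone application, the logic can be tried interactively in spark-shell; below is a minimal sketch, assuming A.txt and B.txt have been placed under /usr/local/spark/sparksqldata (the same paths used by the program that follows) and using the shell's built-in SparkContext sc:

val a = sc.textFile("file:///usr/local/spark/sparksqldata/A.txt")
val b = sc.textFile("file:///usr/local/spark/sparksqldata/B.txt")
// merge the two RDDs, remove duplicate lines, and sort like the sample output C
a.union(b).distinct().sortBy(x => x).collect().foreach(println)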
To build the standalone application, first create the project directory structure, then open the source file:

mkdir -p /usr/local/spark/mycode/remdup/src/main/scala
cd /usr/local/spark/mycode/remdup
vim /usr/local/spark/mycode/remdup/src/main/scala/remdup.scala
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object RemDup {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("RemDup")
    val sc = new SparkContext(conf)
    // textFile accepts a comma-separated list of paths, so A and B are read into a single RDD
    val dataFile = "file:///usr/local/spark/sparksqldata/A.txt,file:///usr/local/spark/sparksqldata/B.txt"
    val data = sc.textFile(dataFile, 2)
    // drop duplicate lines and sort into one partition so the result matches sample file C
    val res = data.distinct().sortBy(x => x, ascending = true, numPartitions = 1)
    // write the merged result as file C (an output directory; the exact path is an example)
    res.saveAsTextFile("file:///usr/local/spark/sparksqldata/C")
    sc.stop()
  }
}
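A note on the design: passing a comma-separated list of paths to sc.textFile reads both A and B into one RDD, which is equivalent to loading them separately and calling union; distinct() then removes the lines that appear in both files. Sorting into a single partition (numPartitions = 1) is not required by the task itself, but it yields one sorted part file under the output directory that matches sample file C line for line.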
vim /usr/local/spark/mycode/remdup/simple.sbt
name := "Simple Project"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"
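One thing to verify: scalaVersion must match the Scala build of the installed Spark. The pre-built Spark 2.1.0 packages are compiled against Scala 2.11, hence 2.11.8 here; if a different Spark version is installed, adjust scalaVersion and the spark-core dependency to match.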
cd /usr/local/spark/mycode/remdup
sudo /usr/local/sbt/sbt package
/usr/local/spark/bin/spark-submit --class "RemDup" /usr/local/spark/mycode/remdup/target/scala-2.11/simple-project_2.11-1.0.jar
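Once spark-submit finishes, the result can be inspected from spark-shell; a minimal sketch, assuming the example output path file:///usr/local/spark/sparksqldata/C written by remdup.scala above:

// read the saved part files back and print them; the output should match sample file C
sc.textFile("file:///usr/local/spark/sparksqldata/C").collect().foreach(println)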