Developing a Scala Spark Program (Cluster Mode): Word Count



Preparation:

Increase the memory of the node running Scala-Eclipse (CloudDeskTop) to 4 GB, because local-mode Spark programs need to run on that node, and a local Spark run starts Worker processes that consume a lot of memory.

For the rest of the preparation, see: Scala program development — word count (local mode)

1. Start the Spark cluster

[hadoop@master01 install]$ cat start-total.sh 
#!/bin/bash
echo "Please make sure you have switched to the hadoop user first"
# Start the ZooKeeper cluster
for node in hadoop@slave01 hadoop@slave02 hadoop@slave03;do ssh $node "source /etc/profile; cd /software/zookeeper-3.4.10/bin/; ./zkServer.sh start; jps";done

# Start the HDFS cluster
cd /software/ && start-dfs.sh && jps

# Start the Spark cluster
# Start the Master process on master01 and the Worker processes on the slave nodes
cd /software/spark-2.1.1/sbin/ && ./start-master.sh && ./start-slaves.sh && jps
# Start the standby Master process on master02
ssh hadoop@master02 "cd /software/spark-2.1.1/sbin/; ./start-master.sh; jps"

# Spark history server; usually left off because it consumes a fair amount of resources
#cd /software/spark-2.1.1/sbin/ && ./start-history-server.sh && cd - && jps

Check the state of the NameNodes:
[hadoop@master01 software]$ hdfs haadmin -getServiceState nn1
[hadoop@master01 software]$ hdfs haadmin -getServiceState nn2
One of them must be active; if neither is, switch one over with:
[hadoop@master01 software]$ hdfs haadmin -transitionToActive --forceactive nn1

2. Upload test input files for the word count to the HDFS cluster

[hadoop@master02 install]$ hdfs dfs -ls /test/scala/input
Found 2 items
-rw-r--r--   3 hadoop supergroup        156 2018-02-07 16:25 /test/scala/input/information
-rw-r--r--   3 hadoop supergroup        156 2018-02-07 16:25 /test/scala/input/information02
[hadoop@master02 install]$ hdfs dfs -cat /test/scala/input/information
zhnag san shi yi ge hao ren
jin tian shi yi ge hao tian qi 
wo zai zhe li zuo le yi ge ce shi 
yi ge guan yu scala de ce shi 
welcome to mmzs
歡迎 歡迎
[hadoop@master02 install]$ hdfs dfs -cat /test/scala/input/information02
zhnag san shi yi ge hao ren
jin tian shi yi ge hao tian qi 
wo zai zhe li zuo le yi ge ce shi 
yi ge guan yu scala de ce shi 
welcome to mmzs
歡迎 歡迎
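Because the two input files are identical, every word's final count will be an even number — exactly twice its count in a single file. As a sanity check, the expected result can be computed with plain Scala, no Spark required (the sample lines below are copied from the listing above):

```scala
// The six lines of one input file (both files are identical)
val lines = List(
  "zhnag san shi yi ge hao ren",
  "jin tian shi yi ge hao tian qi",
  "wo zai zhe li zuo le yi ge ce shi",
  "yi ge guan yu scala de ce shi",
  "welcome to mmzs",
  "歡迎 歡迎"
)

// Count words in one file, then double to account for the second copy
val counts: Map[String, Int] =
  lines.flatMap(_.split(" "))
       .groupBy(identity)
       .map { case (word, occs) => (word, occs.size * 2) }

println(counts("shi"))  // "shi" occurs 4 times per file, so 8 in total
```

This matches the job output shown later, e.g. (shi,8), (tian,4), (welcome,2).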

3. Write the Spark job code

 1 package com.mmzs.bigdata.spark.rdd.cluster
 2 
 3 import org.apache.spark.SparkConf
 4 import org.apache.spark.SparkContext
 5 import org.apache.spark.rdd.RDD
 6 import org.apache.hadoop.fs.FileSystem
 7 import org.apache.hadoop.conf.Configuration
 8 import java.net.URI
 9 import org.apache.hadoop.fs.Path
10 import org.apache.spark.rdd.RDD.rddToOrderedRDDFunctions
11 import org.apache.spark.rdd.RDD.rddToPairRDDFunctions
12 
13 object WordCount {
 14    /*
 15    * Scala has three access levels: private (accessible only within this class),
 16    * protected (accessible in this class and its subclasses), and the default, which is public.
 17    * There is no public keyword in Scala; classes, methods, and members are public unless marked otherwise.
 18    */
19   var fs:FileSystem=null;
20   
 21   // Initializer block of the singleton object; runs once when WordCount is first loaded
22   {
23       val hconf:Configuration=new Configuration();
24       fs=FileSystem.get(new URI("hdfs://ns1/"), hconf, "hadoop");
25   }
26   
 27    /**
 28    * Main entry point
 29    * @param args command-line arguments
 30    */
31   def main(args: Array[String]): Unit = {
 32     // Create the Spark configuration
 33     val conf:SparkConf = new SparkConf();
 34     // Local test mode
 35     //conf.setMaster("local");
 36     // Cluster test mode
 37     //conf.setMaster("spark://master01:7077");
 38     // Set the application name
 39     conf.setAppName("Hdfs Scala Spark RDD");
40     
 41     // Create the SparkContext from the configuration
42     val sc:SparkContext = new SparkContext(conf);
43     
 44     // Use the SparkContext to read the input files into an RDD
 45     // Specifying an input directory is enough; every file in the directory becomes input
 46     val lineRdd:RDD[String] = sc.textFile("/test/scala/input");
 47     // Split each line on spaces and flatten the word arrays into a single RDD of words
 48     val flatRdd:RDD[String] = lineRdd.flatMap { line => line.split(" "); }
 49     // Turn every word into a (word, 1) tuple in preparation for counting
 50     val mapRDD:RDD[Tuple2[String, Integer]] = flatRdd.map(word=>(word,1));
 51     // Group by the first tuple element (the word) and sum the counts
 52     val reduceRDD:RDD[(String, Integer)] = mapRDD.reduceByKey((pre:Integer, next:Integer)=>pre+next);
 53     // Swap key and value so that the counts can be sorted on
 54     val reduceRDD02:RDD[(Integer, String)] = reduceRDD.map(tuple=>(tuple._2,tuple._1));
 55     // Sort by key (the count) in descending order; the second argument is the number of tasks, and setting it too high may cause an OutOfMemoryError
 56     val sortRDD:RDD[(Integer, String)]  = reduceRDD02.sortByKey(false, 1);
 57     // After sorting, swap key and value back (note: this must map sortRDD, not reduceRDD)
 58     val sortRDD02:RDD[(String, Integer)] = sortRDD.map(tuple=>(tuple._2,tuple._1));
59     
 60     // Specify an output directory; if it already exists it must be deleted first
 61     val dst:Path=new Path("/test/scala/output/");
 62     if(fs.exists(dst)&&fs.isDirectory(dst))fs.delete(dst, true);
 63     // Save the result to the given path (the unsorted reduceRDD is what gets saved here)
 64     reduceRDD.saveAsTextFile("/test/scala/output/");
 65     
 66     // Stop the SparkContext
 67     sc.stop();
68   }
69 }
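The chain of RDD transformations above has a direct analogue on ordinary Scala collections, which can help in reasoning about what each step does. Below is a minimal sketch (plain Scala, no Spark, using made-up illustrative data): flatMap splits lines into words, map pairs each word with 1, reduceByKey is emulated with groupBy plus a sum, and the key/value swap followed by a descending sortByKey collapses into a single sortBy on the count:

```scala
val lines = List("hao ren hao", "ren hao")

// flatMap: split every line into words
val words = lines.flatMap(_.split(" "))

// map: pair each word with a count of 1
val pairs = words.map(w => (w, 1))

// reduceByKey equivalent: group by word and sum the 1s
val reduced: Map[String, Int] =
  pairs.groupBy(_._1).map { case (w, ps) => (w, ps.map(_._2).sum) }

// swap + sortByKey(false) equivalent: order by count, descending
val sorted: List[(String, Int)] = reduced.toList.sortBy(-_._2)

println(sorted)  // "hao" appears 3 times, "ren" 2 times
```

On an RDD the groupBy-then-sum form would shuffle whole groups across the network, which is why reduceByKey (which combines partial sums on each partition first) is preferred in the job above.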

4. Package and submit the job

4.1. Package the Spark code

# Create a packaging directory jarTest for the Eclipse project SparkTest
[hadoop@CloudDeskTop software]$ mkdir -p /project/scala/SparkRDD/jarTest
# Package the com folder from the project's compiled bin directory into the jarTest directory
[hadoop@CloudDeskTop software]$ cd /project/scala/SparkTest/bin
[hadoop@CloudDeskTop bin]$ pwd
/project/scala/SparkTest/bin
[hadoop@CloudDeskTop bin]$ jar -cvf /project/scala/SparkRDD/jarTest/wordcount.jar com/

4.2. Submit the Spark job

# Change to the bin directory under the Spark installation directory
[hadoop@CloudDeskTop bin]$ pwd
/software/spark-2.1.1/bin
# Submit the job to the Spark cluster (note: the --master value spark://master01:7077 must not end with a slash)
[hadoop@CloudDeskTop bin]$ ./spark-submit --master spark://master01:7077 --class com.mmzs.bigdata.spark.rdd.cluster.WordCount /project/scala/SparkRDD/jarTest/wordcount.jar 1

-bash: ./spark-submit: No such file or directory
[hadoop@CloudDeskTop bin]$ cd /software/spark-2.1.1/bin/
[hadoop@CloudDeskTop bin]$ ./spark-submit --master spark://master01:7077 --class com.mmzs.bigdata.spark.rdd.cluster.WordCount /project/scala/SparkRDD/jarTest/wordcount.jar 1
18/02/08 15:21:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/08 15:21:50 INFO spark.SparkContext: Running Spark version 2.1.1
18/02/08 15:21:50 WARN spark.SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
18/02/08 15:21:50 INFO spark.SecurityManager: Changing view acls to: hadoop
18/02/08 15:21:50 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/02/08 15:21:50 INFO spark.SecurityManager: Changing view acls groups to: 
18/02/08 15:21:50 INFO spark.SecurityManager: Changing modify acls groups to: 
18/02/08 15:21:50 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/02/08 15:21:51 INFO util.Utils: Successfully started service 'sparkDriver' on port 36034.
18/02/08 15:21:51 INFO spark.SparkEnv: Registering MapOutputTracker
18/02/08 15:21:51 INFO spark.SparkEnv: Registering BlockManagerMaster
18/02/08 15:21:51 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/02/08 15:21:51 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/02/08 15:21:51 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-00930396-c78b-4931-8433-409ea44280ca
18/02/08 15:21:51 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
18/02/08 15:21:51 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/02/08 15:21:51 INFO util.log: Logging initialized @5920ms
18/02/08 15:21:52 INFO server.Server: jetty-9.2.z-SNAPSHOT
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7ff38263{/jobs,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4bf57335{/jobs/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5f5ec388{/jobs/job,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@467746a2{/jobs/job/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@40be59d2{/stages,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@10fb0b33{/stages/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@519c49fa{/stages/stage,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6bbce5f1{/stages/stage/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3e9c6879{/stages/pool,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@e8f000c{/stages/pool/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4e4c1b4b{/storage,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@66940115{/storage/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7ed33e4f{/storage/rdd,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5e9ff595{/storage/rdd/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@57b439bb{/environment,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@793a50f8{/environment/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@639a07f5{/executors,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@158098e9{/executors/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2db6f406{/executors/threadDump,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@464ecd5c{/executors/threadDump/json,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5f8c7713{/static,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7eddb166{/,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@ca9e09c{/api,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@64d92842{/jobs/job/kill,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6ce238c7{/stages/stage/kill,null,AVAILABLE,@Spark}
18/02/08 15:21:52 INFO server.ServerConnector: Started Spark@7f3e9c42{HTTP/1.1}{0.0.0.0:4040}
18/02/08 15:21:52 INFO server.Server: Started @6335ms
18/02/08 15:21:52 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/02/08 15:21:52 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.154.134:4040
18/02/08 15:21:52 INFO spark.SparkContext: Added JAR file:/project/scala/SparkRDD/jarTest/wordcount.jar at spark://192.168.154.134:36034/jars/wordcount.jar with timestamp 1518074512324
18/02/08 15:21:52 INFO client.StandaloneAppClient$ClientEndpoint: Connecting to master spark://master01:7077...
18/02/08 15:21:52 INFO client.TransportClientFactory: Successfully created connection to master01/192.168.154.130:7077 after 74 ms (0 ms spent in bootstraps)
18/02/08 15:21:53 INFO cluster.StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20180208152153-0011
18/02/08 15:21:53 INFO client.StandaloneAppClient$ClientEndpoint: Executor added: app-20180208152153-0011/0 on worker-20180208121809-192.168.154.133-49922 (192.168.154.133:49922) with 4 cores
18/02/08 15:21:53 INFO cluster.StandaloneSchedulerBackend: Granted executor ID app-20180208152153-0011/0 on hostPort 192.168.154.133:49922 with 4 cores, 1024.0 MB RAM
18/02/08 15:21:53 INFO client.StandaloneAppClient$ClientEndpoint: Executor added: app-20180208152153-0011/1 on worker-20180208121818-192.168.154.132-43679 (192.168.154.132:43679) with 4 cores
18/02/08 15:21:53 INFO cluster.StandaloneSchedulerBackend: Granted executor ID app-20180208152153-0011/1 on hostPort 192.168.154.132:43679 with 4 cores, 1024.0 MB RAM
18/02/08 15:21:53 INFO client.StandaloneAppClient$ClientEndpoint: Executor added: app-20180208152153-0011/2 on worker-20180208121826-192.168.154.131-56071 (192.168.154.131:56071) with 4 cores
18/02/08 15:21:53 INFO cluster.StandaloneSchedulerBackend: Granted executor ID app-20180208152153-0011/2 on hostPort 192.168.154.131:56071 with 4 cores, 1024.0 MB RAM
18/02/08 15:21:53 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35028.
18/02/08 15:21:53 INFO netty.NettyBlockTransferService: Server created on 192.168.154.134:35028
18/02/08 15:21:53 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/02/08 15:21:53 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.154.134, 35028, None)
18/02/08 15:21:53 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.154.134:35028 with 366.3 MB RAM, BlockManagerId(driver, 192.168.154.134, 35028, None)
18/02/08 15:21:53 INFO client.StandaloneAppClient$ClientEndpoint: Executor updated: app-20180208152153-0011/0 is now RUNNING
18/02/08 15:21:53 INFO client.StandaloneAppClient$ClientEndpoint: Executor updated: app-20180208152153-0011/2 is now RUNNING
18/02/08 15:21:53 INFO client.StandaloneAppClient$ClientEndpoint: Executor updated: app-20180208152153-0011/1 is now RUNNING
18/02/08 15:21:53 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.154.134, 35028, None)
18/02/08 15:21:53 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.154.134, 35028, None)
18/02/08 15:21:53 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3589f0{/metrics/json,null,AVAILABLE,@Spark}
18/02/08 15:21:55 INFO scheduler.EventLoggingListener: Logging events to hdfs://ns1/sparkLog/app-20180208152153-0011
18/02/08 15:21:55 INFO cluster.StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
18/02/08 15:21:57 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 202.4 KB, free 366.1 MB)
18/02/08 15:21:57 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.8 KB, free 366.1 MB)
18/02/08 15:21:57 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.154.134:35028 (size: 23.8 KB, free: 366.3 MB)
18/02/08 15:21:57 INFO spark.SparkContext: Created broadcast 0 from textFile at WordCount.scala:46
18/02/08 15:21:58 INFO mapred.FileInputFormat: Total input paths to process : 2
18/02/08 15:21:58 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
18/02/08 15:21:58 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
18/02/08 15:21:58 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
18/02/08 15:21:58 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
18/02/08 15:21:58 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
18/02/08 15:21:58 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/02/08 15:21:59 INFO spark.SparkContext: Starting job: saveAsTextFile at WordCount.scala:64
18/02/08 15:21:59 INFO scheduler.DAGScheduler: Registering RDD 3 (map at WordCount.scala:50)
18/02/08 15:21:59 INFO scheduler.DAGScheduler: Got job 0 (saveAsTextFile at WordCount.scala:64) with 2 output partitions
18/02/08 15:21:59 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 (saveAsTextFile at WordCount.scala:64)
18/02/08 15:21:59 INFO scheduler.DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
18/02/08 15:21:59 INFO scheduler.DAGScheduler: Missing parents: List(ShuffleMapStage 0)
18/02/08 15:21:59 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCount.scala:50), which has no missing parents
18/02/08 15:21:59 INFO memory.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.4 KB, free 366.1 MB)
18/02/08 15:21:59 INFO memory.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.5 KB, free 366.1 MB)
18/02/08 15:21:59 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.154.134:35028 (size: 2.5 KB, free: 366.3 MB)
18/02/08 15:21:59 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:996
18/02/08 15:21:59 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCount.scala:50)
18/02/08 15:21:59 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
18/02/08 15:22:07 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.154.133:42992) with ID 0
18/02/08 15:22:07 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.154.133, executor 0, partition 0, ANY, 6045 bytes)
18/02/08 15:22:07 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.154.133, executor 0, partition 1, ANY, 6047 bytes)
18/02/08 15:22:08 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.154.131:43045) with ID 2
18/02/08 15:22:08 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.154.133:36839 with 413.9 MB RAM, BlockManagerId(0, 192.168.154.133, 36839, None)
18/02/08 15:22:08 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.154.131:39804 with 413.9 MB RAM, BlockManagerId(2, 192.168.154.131, 39804, None)
18/02/08 15:22:09 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.154.132:50642) with ID 1
18/02/08 15:22:09 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.154.132:53076 with 413.9 MB RAM, BlockManagerId(1, 192.168.154.132, 53076, None)
18/02/08 15:22:10 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.154.133:36839 (size: 2.5 KB, free: 413.9 MB)
18/02/08 15:22:10 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.154.133:36839 (size: 23.8 KB, free: 413.9 MB)
18/02/08 15:22:19 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 11109 ms on 192.168.154.133 (executor 0) (1/2)
18/02/08 15:22:19 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 11280 ms on 192.168.154.133 (executor 0) (2/2)
18/02/08 15:22:19 INFO scheduler.DAGScheduler: ShuffleMapStage 0 (map at WordCount.scala:50) finished in 19.217 s
18/02/08 15:22:19 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
18/02/08 15:22:19 INFO scheduler.DAGScheduler: looking for newly runnable stages
18/02/08 15:22:19 INFO scheduler.DAGScheduler: running: Set()
18/02/08 15:22:19 INFO scheduler.DAGScheduler: waiting: Set(ResultStage 1)
18/02/08 15:22:19 INFO scheduler.DAGScheduler: failed: Set()
18/02/08 15:22:19 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[8] at saveAsTextFile at WordCount.scala:64), which has no missing parents
18/02/08 15:22:19 INFO memory.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 74.2 KB, free 366.0 MB)
18/02/08 15:22:19 INFO memory.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 27.0 KB, free 366.0 MB)
18/02/08 15:22:19 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.154.134:35028 (size: 27.0 KB, free: 366.2 MB)
18/02/08 15:22:19 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:996
18/02/08 15:22:19 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (MapPartitionsRDD[8] at saveAsTextFile at WordCount.scala:64)
18/02/08 15:22:19 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
18/02/08 15:22:19 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, 192.168.154.133, executor 0, partition 0, NODE_LOCAL, 5819 bytes)
18/02/08 15:22:19 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, 192.168.154.133, executor 0, partition 1, NODE_LOCAL, 5819 bytes)
18/02/08 15:22:19 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.154.133:36839 (size: 27.0 KB, free: 413.9 MB)
18/02/08 15:22:19 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 192.168.154.133:42992
18/02/08 15:22:19 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 155 bytes
18/02/08 15:22:21 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 1896 ms on 192.168.154.133 (executor 0) (1/2)
18/02/08 15:22:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 1933 ms on 192.168.154.133 (executor 0) (2/2)
18/02/08 15:22:21 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
18/02/08 15:22:21 INFO scheduler.DAGScheduler: ResultStage 1 (saveAsTextFile at WordCount.scala:64) finished in 1.938 s
18/02/08 15:22:21 INFO scheduler.DAGScheduler: Job 0 finished: saveAsTextFile at WordCount.scala:64, took 22.116753 s
18/02/08 15:22:21 INFO server.ServerConnector: Stopped Spark@7f3e9c42{HTTP/1.1}{0.0.0.0:4040}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6ce238c7{/stages/stage/kill,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@64d92842{/jobs/job/kill,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@ca9e09c{/api,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7eddb166{/,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5f8c7713{/static,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@464ecd5c{/executors/threadDump/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2db6f406{/executors/threadDump,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@158098e9{/executors/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@639a07f5{/executors,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@793a50f8{/environment/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@57b439bb{/environment,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5e9ff595{/storage/rdd/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7ed33e4f{/storage/rdd,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@66940115{/storage/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4e4c1b4b{/storage,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@e8f000c{/stages/pool/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3e9c6879{/stages/pool,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6bbce5f1{/stages/stage/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@519c49fa{/stages/stage,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@10fb0b33{/stages/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@40be59d2{/stages,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@467746a2{/jobs/job/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5f5ec388{/jobs/job,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4bf57335{/jobs/json,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7ff38263{/jobs,null,UNAVAILABLE,@Spark}
18/02/08 15:22:21 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.154.134:4040
18/02/08 15:22:21 INFO cluster.StandaloneSchedulerBackend: Shutting down all executors
18/02/08 15:22:21 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
18/02/08 15:22:21 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/02/08 15:22:21 INFO memory.MemoryStore: MemoryStore cleared
18/02/08 15:22:21 INFO storage.BlockManager: BlockManager stopped
18/02/08 15:22:21 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/02/08 15:22:21 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/02/08 15:22:21 INFO spark.SparkContext: Successfully stopped SparkContext
18/02/08 15:22:21 INFO util.ShutdownHookManager: Shutdown hook called
18/02/08 15:22:21 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-b652f8de-76a7-467f-b64d-3e493dfb2195
The above is the console output as seen when running in Xshell.

4.3. Check the result

[hadoop@master02 install]$ hdfs dfs -ls /test/scala/
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2018-02-07 16:25 /test/scala/input
drwxr-xr-x - hadoop supergroup 0 2018-02-08 15:22 /test/scala/output
[hadoop@master02 install]$ hdfs dfs -ls /test/scala/output
Found 3 items
-rw-r--r-- 3 hadoop supergroup 0 2018-02-08 15:22 /test/scala/output/_SUCCESS
-rw-r--r-- 3 hadoop supergroup 134 2018-02-08 15:22 /test/scala/output/part-00000
-rw-r--r-- 3 hadoop supergroup 70 2018-02-08 15:22 /test/scala/output/part-00001
[hadoop@master02 install]$ hdfs dfs -cat /test/scala/output/part-00000
(scala,2)
(zuo,2)
(tian,4)
(shi,8)
(ce,4)
(zai,2)
(歡迎,4)
(wo,2)
(zhnag,2)
(san,2)
(welcome,2)
(yi,8)
(ge,8)
(hao,4)
(qi,2)
(yu,2)
[hadoop@master02 install]$ hdfs dfs -cat /test/scala/output/part-00001
(guan,2)
(jin,2)
(ren,2)
(de,2)
(le,2)
(to,2)
(zhe,2)
(li,2)
(mmzs,2)
[hadoop@master02 install]$
The listing above confirms that the output data was generated on the HDFS cluster.

5. Notes:

When submitting a job to the cluster, it is best not to run the test directly from the Eclipse project: the behavior is too unpredictable and exceptions are common. If you do want to test from Eclipse, set the master node for the submission explicitly:

// Create the Spark configuration
val conf:SparkConf = new SparkConf();
// Cluster test mode
conf.setMaster("spark://master01:7077");
// Set the application name
conf.setAppName("Hdfs Scala Spark RDD");
// Create the SparkContext from the configuration
val sc:SparkContext = new SparkContext(conf);

Also, because the job performs file operations on HDFS, it needs a connection to HDFS, so copy the Hadoop configuration files into the project's source directory:

[hadoop@CloudDeskTop software]$ cd hadoop-2.7.3/etc/hadoop/
[hadoop@CloudDeskTop hadoop]$ cp -a core-site.xml hdfs-site.xml /project/scala/SparkTest/src/

After the steps above you can test directly from Eclipse; in practice, however, submitting the job to the cluster from the IDE throws many exceptions (for example, mutable.List type-conversion errors):

  1 log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
  2 log4j:WARN Please initialize the log4j system properly.
  3 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
  4 Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
  5 18/02/08 13:23:13 INFO SparkContext: Running Spark version 2.1.1
  6 18/02/08 13:23:13 WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
  7 18/02/08 13:23:13 INFO SecurityManager: Changing view acls to: hadoop
  8 18/02/08 13:23:13 INFO SecurityManager: Changing modify acls to: hadoop
  9 18/02/08 13:23:13 INFO SecurityManager: Changing view acls groups to: 
 10 18/02/08 13:23:13 INFO SecurityManager: Changing modify acls groups to: 
 11 18/02/08 13:23:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
 12 18/02/08 13:23:14 INFO Utils: Successfully started service 'sparkDriver' on port 33230.
 13 18/02/08 13:23:14 INFO SparkEnv: Registering MapOutputTracker
 14 18/02/08 13:23:14 INFO SparkEnv: Registering BlockManagerMaster
 15 18/02/08 13:23:14 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
 16 18/02/08 13:23:14 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
 17 18/02/08 13:23:14 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7fe67308-7d14-407b-ab2e-a45de1072134
 18 18/02/08 13:23:14 INFO MemoryStore: MemoryStore started with capacity 348.0 MB
 19 18/02/08 13:23:14 INFO SparkEnv: Registering OutputCommitCoordinator
 20 18/02/08 13:23:15 INFO Utils: Successfully started service 'SparkUI' on port 4040.
 21 18/02/08 13:23:15 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.154.134:4040
 22 18/02/08 13:23:15 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://master01:7077...
 23 18/02/08 13:23:15 INFO TransportClientFactory: Successfully created connection to master01/192.168.154.130:7077 after 55 ms (0 ms spent in bootstraps)
 24 18/02/08 13:23:15 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20180208132316-0010
 25 18/02/08 13:23:15 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20180208132316-0010/0 on worker-20180208121809-192.168.154.133-49922 (192.168.154.133:49922) with 4 cores
 26 18/02/08 13:23:15 INFO StandaloneSchedulerBackend: Granted executor ID app-20180208132316-0010/0 on hostPort 192.168.154.133:49922 with 4 cores, 1024.0 MB RAM
 27 18/02/08 13:23:15 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20180208132316-0010/1 on worker-20180208121818-192.168.154.132-43679 (192.168.154.132:43679) with 4 cores
 28 18/02/08 13:23:15 INFO StandaloneSchedulerBackend: Granted executor ID app-20180208132316-0010/1 on hostPort 192.168.154.132:43679 with 4 cores, 1024.0 MB RAM
 29 18/02/08 13:23:15 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20180208132316-0010/2 on worker-20180208121826-192.168.154.131-56071 (192.168.154.131:56071) with 4 cores
 30 18/02/08 13:23:15 INFO StandaloneSchedulerBackend: Granted executor ID app-20180208132316-0010/2 on hostPort 192.168.154.131:56071 with 4 cores, 1024.0 MB RAM
 31 18/02/08 13:23:15 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49476.
 32 18/02/08 13:23:15 INFO NettyBlockTransferService: Server created on 192.168.154.134:49476
 33 18/02/08 13:23:15 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
 34 18/02/08 13:23:15 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20180208132316-0010/1 is now RUNNING
 35 18/02/08 13:23:15 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20180208132316-0010/0 is now RUNNING
 36 18/02/08 13:23:15 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.154.134, 49476, None)
 37 18/02/08 13:23:16 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20180208132316-0010/2 is now RUNNING
 38 18/02/08 13:23:16 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.154.134:49476 with 348.0 MB RAM, BlockManagerId(driver, 192.168.154.134, 49476, None)
 39 18/02/08 13:23:16 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.154.134, 49476, None)
 40 18/02/08 13:23:16 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.154.134, 49476, None)
 41 18/02/08 13:23:16 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
 42 18/02/08 13:23:17 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 199.5 KB, free 347.8 MB)
 43 18/02/08 13:23:18 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.5 KB, free 347.8 MB)
 44 18/02/08 13:23:18 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.154.134:49476 (size: 23.5 KB, free: 348.0 MB)
 45 18/02/08 13:23:18 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:46
 46 18/02/08 13:23:19 INFO FileInputFormat: Total input paths to process : 2
 47 18/02/08 13:23:20 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
 48 18/02/08 13:23:20 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
 49 18/02/08 13:23:20 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
 50 18/02/08 13:23:20 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
 51 18/02/08 13:23:20 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
 52 18/02/08 13:23:20 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
 53 18/02/08 13:23:21 INFO SparkContext: Starting job: saveAsTextFile at WordCount.scala:64
 54 18/02/08 13:23:21 INFO DAGScheduler: Registering RDD 3 (map at WordCount.scala:50)
 55 18/02/08 13:23:21 INFO DAGScheduler: Got job 0 (saveAsTextFile at WordCount.scala:64) with 2 output partitions
 56 18/02/08 13:23:21 INFO DAGScheduler: Final stage: ResultStage 1 (saveAsTextFile at WordCount.scala:64)
 57 18/02/08 13:23:21 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
 58 18/02/08 13:23:21 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
 59 18/02/08 13:23:21 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCount.scala:50), which has no missing parents
 60 18/02/08 13:23:21 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.4 KB, free 347.8 MB)
 61 18/02/08 13:23:21 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.5 KB, free 347.8 MB)
 62 18/02/08 13:23:21 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.154.134:49476 (size: 2.5 KB, free: 348.0 MB)
 63 18/02/08 13:23:21 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:996
 64 18/02/08 13:23:21 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at WordCount.scala:50)
 65 18/02/08 13:23:21 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
 66 18/02/08 13:23:33 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.154.131:58462) with ID 2
 67 18/02/08 13:23:33 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.154.131, executor 2, partition 0, ANY, 5987 bytes)
 68 18/02/08 13:23:33 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.154.131, executor 2, partition 1, ANY, 5989 bytes)
 69 18/02/08 13:23:34 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.154.131:39801 with 413.9 MB RAM, BlockManagerId(2, 192.168.154.131, 39801, None)
 70 18/02/08 13:23:34 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.154.133:50331) with ID 0
 71 18/02/08 13:23:35 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.154.133:54974 with 413.9 MB RAM, BlockManagerId(0, 192.168.154.133, 54974, None)
 72 18/02/08 13:23:36 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.154.131:39801 (size: 2.5 KB, free: 413.9 MB)
 73 18/02/08 13:23:36 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.154.132:39248) with ID 1
 74 18/02/08 13:23:37 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.154.132:51259 with 413.9 MB RAM, BlockManagerId(1, 192.168.154.132, 51259, None)
 75 18/02/08 13:23:37 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 192.168.154.131, executor 2): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
 76     at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2083)
 77     at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
 78     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1996)
 79     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
 80     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
 81     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
 82     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
 83     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
 84     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
 85     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
 86     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
 87     at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
 88     at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
 89     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
 90     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
 91     at org.apache.spark.scheduler.Task.run(Task.scala:99)
 92     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
 93     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 94     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 95     at java.lang.Thread.run(Thread.java:745)
 96 
 97 18/02/08 13:23:37 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 2, 192.168.154.133, executor 0, partition 1, ANY, 5989 bytes)
 98 18/02/08 13:23:37 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on 192.168.154.131, executor 2: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 1]
 99 18/02/08 13:23:37 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 3, 192.168.154.133, executor 0, partition 0, ANY, 5987 bytes)
100 18/02/08 13:23:38 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.154.133:54974 (size: 2.5 KB, free: 413.9 MB)
101 18/02/08 13:23:38 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 3) on 192.168.154.133, executor 0: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 2]
102 18/02/08 13:23:38 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 4, 192.168.154.131, executor 2, partition 0, ANY, 5987 bytes)
103 18/02/08 13:23:38 INFO TaskSetManager: Lost task 1.1 in stage 0.0 (TID 2) on 192.168.154.133, executor 0: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 3]
104 18/02/08 13:23:38 INFO TaskSetManager: Starting task 1.2 in stage 0.0 (TID 5, 192.168.154.131, executor 2, partition 1, ANY, 5989 bytes)
105 18/02/08 13:23:38 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 4) on 192.168.154.131, executor 2: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 4]
106 18/02/08 13:23:38 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 6, 192.168.154.131, executor 2, partition 0, ANY, 5987 bytes)
107 18/02/08 13:23:38 INFO TaskSetManager: Lost task 1.2 in stage 0.0 (TID 5) on 192.168.154.131, executor 2: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 5]
108 18/02/08 13:23:38 INFO TaskSetManager: Starting task 1.3 in stage 0.0 (TID 7, 192.168.154.131, executor 2, partition 1, ANY, 5989 bytes)
109 18/02/08 13:23:38 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 6) on 192.168.154.131, executor 2: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 6]
110 18/02/08 13:23:38 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
111 18/02/08 13:23:39 INFO TaskSchedulerImpl: Cancelling stage 0
112 18/02/08 13:23:39 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
113 18/02/08 13:23:39 INFO TaskSchedulerImpl: Stage 0 was cancelled
114 18/02/08 13:23:39 INFO TaskSetManager: Lost task 1.3 in stage 0.0 (TID 7) on 192.168.154.131, executor 2: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 7]
115 18/02/08 13:23:39 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
116 18/02/08 13:23:39 INFO DAGScheduler: ShuffleMapStage 0 (map at WordCount.scala:50) failed in 17.103 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, 192.168.154.131, executor 2): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
117     at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2083)
118     at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
119     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1996)
120     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
121     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
122     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
123     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
124     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
125     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
126     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
127     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
128     at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
129     at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
130     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
131     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
132     at org.apache.spark.scheduler.Task.run(Task.scala:99)
133     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
134     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
135     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
136     at java.lang.Thread.run(Thread.java:745)
137 
138 Driver stacktrace:
139 18/02/08 13:23:39 INFO DAGScheduler: Job 0 failed: saveAsTextFile at WordCount.scala:64, took 17.644346 s
140 Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, 192.168.154.131, executor 2): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
141     at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2083)
142     at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
143     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1996)
144     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
145     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
146     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
147     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
148     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
149     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
150     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
151     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
152     at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
153     at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
154     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
155     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
156     at org.apache.spark.scheduler.Task.run(Task.scala:99)
157     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
158     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
159     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
160     at java.lang.Thread.run(Thread.java:745)
161 
162 Driver stacktrace:
163     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
164     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
165     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
166     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
167     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
168     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
169     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
170     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
171     at scala.Option.foreach(Option.scala:257)
172     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
173     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
174     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
175     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
176     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
177     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
178     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
179     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
180     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
181     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1226)
182     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1168)
183     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1168)
184     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
185     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
186     at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
187     at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1168)
188     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1071)
189     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1037)
190     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1037)
191     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
192     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
193     at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
194     at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1037)
195     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:963)
196     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:963)
197     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:963)
198     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
199     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
200     at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
201     at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:962)
202     at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1489)
203     at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1468)
204     at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1468)
205     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
206     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
207     at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
208     at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1468)
209     at com.mmzs.bigdata.spark.rdd.cluster.WordCount$.main(WordCount.scala:64)
210     at com.mmzs.bigdata.spark.rdd.cluster.WordCount.main(WordCount.scala)
211 Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
212     at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2083)
213     at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
214     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1996)
215     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
216     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
217     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
218     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
219     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
220     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
221     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
222     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
223     at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
224     at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
225     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
226     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
227     at org.apache.spark.scheduler.Task.run(Task.scala:99)
228     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
229     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
230     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
231     at java.lang.Thread.run(Thread.java:745)
232 18/02/08 13:23:39 INFO SparkContext: Invoking stop() from shutdown hook
233 18/02/08 13:23:39 INFO SparkUI: Stopped Spark web UI at http://192.168.154.134:4040
234 18/02/08 13:23:39 INFO StandaloneSchedulerBackend: Shutting down all executors
235 18/02/08 13:23:39 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
236 18/02/08 13:23:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
237 18/02/08 13:23:39 INFO MemoryStore: MemoryStore cleared
238 18/02/08 13:23:39 INFO BlockManager: BlockManager stopped
239 18/02/08 13:23:39 INFO BlockManagerMaster: BlockManagerMaster stopped
240 18/02/08 13:23:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
241 18/02/08 13:23:39 INFO SparkContext: Successfully stopped SparkContext
242 18/02/08 13:23:39 INFO ShutdownHookManager: Shutdown hook called
243 18/02/08 13:23:39 INFO ShutdownHookManager: Deleting directory /tmp/spark-0c6925cb-bf3e-4616-94fb-a588da99dee4
The exception thrown when the blogger ran the job
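This `ClassCastException` (`List$SerializationProxy` cannot be assigned to `RDD.dependencies_`) is usually a classpath problem rather than a bug in the word-count logic: the executors deserialize the task using a different copy of the application classes than the driver, which commonly happens when a job is launched directly from the IDE against a standalone cluster without distributing the application jar. A common remedy is to export the project as a jar and register it on the `SparkConf` with `setJars` (or launch through `spark-submit` instead). A minimal sketch, assuming the jar has been exported to `/home/hadoop/wordcount.jar` and the master URL is `spark://master01:7077` (both are hypothetical values; substitute your own):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCountClusterSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("WordCount")
      // Point at the standalone master (hypothetical URL; adjust to your cluster).
      .setMaster("spark://master01:7077")
      // Ship the application jar to the executors so their classpath matches the
      // driver's. Without this, task deserialization on the workers can fail with
      // the List$SerializationProxy ClassCastException seen in the log above.
      // The path below is a placeholder for wherever you exported the jar.
      .setJars(Seq("/home/hadoop/wordcount.jar"))

    val sc = new SparkContext(conf)
    // ... the same word-count logic as in WordCount.scala ...
    sc.stop()
  }
}
```

Equivalently, submitting with `spark-submit --master spark://master01:7077 --class com.mmzs.bigdata.spark.rdd.cluster.WordCount wordcount.jar` distributes the jar automatically, which is why the error typically does not appear when the job is launched that way.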

 

