Introduction to the Two Run Modes of Spark on YARN


This article is from: Introduction to the Two Run Modes of Spark on YARN
http://www.aboutyun.com/thread-12294-1-1.html
(Source: AboutYun development forum)
 

Question Guide

1. How many run modes does Spark have on YARN?
2. In yarn-cluster mode the Driver program runs inside YARN; where can the application's results be viewed?
3. Which steps are involved when the client submits a request to the ResourceManager and uploads the jar to HDFS?
4. How should arguments be passed to the app?
5. In which mode are the results finally printed to the terminal?

 

Spark has two run modes on YARN: yarn-cluster and yarn-client.

I. Yarn Cluster


The Spark Driver first starts as an ApplicationMaster in the YARN cluster. For every job the client submits to the ResourceManager, a unique ApplicationMaster is allocated on a worker node of the cluster,

and that ApplicationMaster manages the application's entire life cycle. Because the Driver program runs inside YARN, there is no need to start a Spark Master/Client beforehand, and the application's results cannot be displayed on the client (they can be viewed in the history server),

so it is best to save results to HDFS rather than write them to stdout; the client terminal only shows the simple run status of the YARN job.
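
For reference, a yarn-cluster submission of the same SparkPi example might look like the following. This is a minimal sketch assuming Spark 1.0 on CDH 5.1.0; the jar path and the executor settings are taken from the ApplicationMaster launch command in the logs below.

spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 2 --executor-cores 1 --executor-memory 1g /usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar

After the job finishes, the driver's stdout can be retrieved with yarn logs -applicationId <appId>, assuming YARN log aggregation is enabled.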

[Figure: yarn-cluster architecture diagram, by @Sandy Ryza]

[Figure: yarn-cluster submission flow diagram, by 明風@taobao]

The terminal output shows the four steps of task initialization in more detail:

14/09/28 11:24:52 INFO RMProxy: Connecting to ResourceManager at hdp01/172.19.1.231:8032
14/09/28 11:24:52 INFO Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 4
14/09/28 11:24:52 INFO Client: Queue info ... queueName: root.default, queueCurrentCapacity: 0.0, queueMaxCapacity: -1.0,
queueApplicationCount = 0, queueChildQueueCount = 0
14/09/28 11:24:52 INFO Client: Max mem capabililty of a single resource in this cluster 8192
14/09/28 11:24:53 INFO Client: Uploading file:/usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar to hdfs://hdp01:8020/user/spark/.sparkStaging/application_1411874193696_0003/spark-examples_2.10-1.0.0-cdh5.1.0.jar
14/09/28 11:24:54 INFO Client: Uploading file:/usr/lib/spark/assembly/lib/spark-assembly-1.0.0-cdh5.1.0-hadoop2.3.0-cdh5.1.0.jar to hdfs://hdp01:8020/user/spark/.sparkStaging/application_1411874193696_0003/spark-assembly-1.0.0-cdh5.1.0-hadoop2.3.0-cdh5.1.0.jar
14/09/28 11:24:55 INFO Client: Setting up the launch environment
14/09/28 11:24:55 INFO Client: Setting up container launch context
14/09/28 11:24:55 INFO Client: Command for starting the Spark ApplicationMaster: List($JAVA_HOME/bin/java, -server, -Xmx512m, -Djava.io.tmpdir=$PWD/tmp, -Dspark.master="spark://hdp01:7077", -Dspark.app.name="org.apache.spark.examples.SparkPi", -Dspark.eventLog.enabled="true", -Dspark.eventLog.dir="/user/spark/applicationHistory", -Dlog4j.configuration=log4j-spark-container.properties, org.apache.spark.deploy.yarn.ApplicationMaster, --class, org.apache.spark.examples.SparkPi, --jar , file:/usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar, , --executor-memory, 1024, --executor-cores, 1, --num-executors , 2, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
14/09/28 11:24:55 INFO Client: Submitting application to ASM
14/09/28 11:24:55 INFO YarnClientImpl: Submitted application application_1411874193696_0003
14/09/28 11:24:56 INFO Client: Application report from ASM:
application identifier: application_1411874193696_0003
appId: 3
clientToAMToken: null
appDiagnostics:
appMasterHost: N/A
appQueue: root.spark
appMasterRpcPort: -1
appStartTime: 1411874695327
yarnAppState: ACCEPTED
distributedFinalState: UNDEFINED
appTrackingUrl: http://hdp01:8088/proxy/application_1411874193696_0003/
appUser: spark

1. The client submits a request to the ResourceManager and uploads the jar to HDFS.

This involves four steps:

a) Connect to the RM.

b) Obtain metric, queue, and resource information from the RM's ASM (ApplicationsManager).

c) Upload the app jar and the spark-assembly jar.

d) Set up the launch environment and the container launch context.

2. The ResourceManager requests resources from a NodeManager and creates the Spark ApplicationMaster (each SparkContext has one ApplicationMaster).

3. The NodeManager starts the Spark ApplicationMaster, which registers with the ResourceManager's ASM.

4. The Spark ApplicationMaster fetches the jar file from HDFS and starts the DAGScheduler and the YARN Cluster Scheduler.

5. The Spark ApplicationMaster applies to the ResourceManager's ASM for container resources (INFO YarnClientImpl: Submitted application).

6. The ResourceManager notifies the NodeManagers to allocate containers; at this point reports about the containers can be received from the ASM (each container corresponds to one executor).

7. The Spark ApplicationMaster interacts directly with the containers (executors) to complete this distributed task.
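
While the job runs, its state can also be checked from the YARN side with the standard YARN CLI; the application id below is the one reported in the logs above:

yarn application -list
yarn application -status application_1411874193696_0003

The appTrackingUrl in the ASM report points to the same status page in the ResourceManager web UI.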

Points to note:
a) Spark's local dir is replaced by yarn.nodemanager.local-dirs.
b) The number of allowed worker failures (spark.yarn.max.worker.failures) is twice the number of executors, with a minimum of 3.
c) SPARK_YARN_USER_ENV passes environment variables to the Spark processes.
d) Arguments for the app should be specified with --args (see the sketch below).
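
As an illustration of notes c) and d), a hypothetical submission through the legacy org.apache.spark.deploy.yarn.Client entry point, which is where the --args flag lives in Spark 1.0, might look like this. The FOO=bar environment entry and the argument 10 (the number of SparkPi partitions) are made-up values for illustration:

SPARK_YARN_USER_ENV="FOO=bar" spark-class org.apache.spark.deploy.yarn.Client --jar /usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar --class org.apache.spark.examples.SparkPi --args 10 --num-executors 2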

II. yarn-client

(See the corresponding class file: YarnClientClusterScheduler.)

In yarn-client mode, the Driver runs on the client and obtains resources from the RM through the ApplicationMaster. The local Driver is responsible for interacting with all the executor containers and aggregating the final results. Closing the terminal is equivalent to killing the Spark application. Generally speaking, this is the mode to use when the results only need to be returned to the terminal.

After the client-side Driver submits the application to YARN, YARN starts the ApplicationMaster and the executors in turn. Both the ApplicationMaster and the executors run inside containers. A container's default memory is 1g; the ApplicationMaster is allocated driver-memory and each executor is allocated executor-memory. Meanwhile, because the Driver runs on the client, the program's results can be displayed on the client, and the Driver exists as a process named SparkSubmit.

Configuring yarn-client mode likewise requires the HADOOP_CONF_DIR/YARN_CONF_DIR and SPARK_JAR variables.
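
For example (a sketch: the HADOOP_CONF_DIR path is an assumed typical CDH location, while the SPARK_JAR path is the assembly jar uploaded in the logs above):

export HADOOP_CONF_DIR=/etc/hadoop/conf
export SPARK_JAR=/usr/lib/spark/assembly/lib/spark-assembly-1.0.0-cdh5.1.0-hadoop2.3.0-cdh5.1.0.jar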

Submit a job to test:

spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client /usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar
terminal output:

14/09/28 11:18:34 INFO Client: Command for starting the Spark ApplicationMaster: List($JAVA_HOME/bin/java, -server, -Xmx512m, -Djava.io.tmpdir=$PWD/tmp, -Dspark.tachyonStore.folderName="spark-9287f0f2-2e72-4617-a418-e0198626829b", -Dspark.eventLog.enabled="true", -Dspark.yarn.secondary.jars="", -Dspark.driver.host="hdp01", -Dspark.driver.appUIHistoryAddress="", -Dspark.app.name="Spark Pi", -Dspark.jars="file:/usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar", -Dspark.fileserver.uri="http://172.19.17.231:53558", -Dspark.eventLog.dir="/user/spark/applicationHistory", -Dspark.master="yarn-client", -Dspark.driver.port="35938", -Dspark.httpBroadcast.uri="http://172.19.17.231:43804", -Dlog4j.configuration=log4j-spark-container.properties, org.apache.spark.deploy.yarn.ExecutorLauncher, --class, notused, --jar , null, --args 'hdp01:35938' , --executor-memory, 1024, --executor-cores, 1, --num-executors , 2, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
14/09/28 11:18:34 INFO Client: Submitting application to ASM
14/09/28 11:18:34 INFO YarnClientSchedulerBackend: Application report from ASM:
appMasterRpcPort: -1
appStartTime: 1411874314198
yarnAppState: ACCEPTED
......

Finally, the result is printed to the terminal.

