Dynamic Resource Allocation for Spark on YARN


1. Why enable dynamic resource allocation

When submitting a Spark application to YARN, you can explicitly set the number of executors with spark-submit's num-executors parameter. The ApplicationMaster then requests resources for those executors, and each executor runs as a Container on YARN. The Spark scheduler assigns tasks to executors according to its scheduling policy; once all tasks finish, the executors are killed and the application ends. While the job is running, however, every executor holds on to its resources whether or not it has any tasks, so when the workload is small but a large number of executors was explicitly requested, resources are clearly wasted.
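As a concrete sketch of such a static submission (the class name and jar here are placeholders, not from the original setup):

```shell
# Static allocation: all 10 executors are requested up front and held as
# YARN containers for the application's whole lifetime, busy or not.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 10 \
  --executor-memory 2G \
  --class com.example.WordCount \
  wordcount.jar
```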

2. Add the configuration to yarn-site.xml and restart the YARN service

Spark version 2.2.1; Hadoop version cdh5.14.2-2.6.0. This is not a Cloudera Manager-managed CDH cluster but a manually built standalone deployment.

vim etc/hadoop/yarn-site.xml

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>spark_shuffle,mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>

Watch for this exception when restarting YARN: the NodeManager fails to start, and the YARN web UI shows empty core and memory totals.

Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2349)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2373)
    ... 10 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2255)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2347)
    ... 11 more
2020-02-17 19:54:59,185 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NodeManager metrics system...
2020-02-17 19:54:59,185 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system stopped.
2020-02-17 19:54:59,185 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
2020-02-17 19:54:59,185 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2381)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:121)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:236)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:318)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:562)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:609)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2349)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2373)
    ... 10 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2255)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2347)
    ... 11 more
2020-02-17 19:54:59,189 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at bigdata.server1/192.168.121.12
************************************************************/

The cause: the Spark external shuffle service jar is missing from the NodeManager classpath. Copy it from Spark's yarn directory into Hadoop's YARN lib directory:

mv  spark/yarn/spark-2.11-2.2.1-shuffle_.jar /opt/modules/hadoop-2.6.0-cdh5.14.2/share/hadoop/yarn/
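A quick sanity check, assuming the directory layout used above, is to confirm the jar actually landed on the NodeManager classpath and, after restarting, that the shuffle service is listening on its port:

```shell
# Confirm the shuffle jar is now in Hadoop's YARN lib directory
ls /opt/modules/hadoop-2.6.0-cdh5.14.2/share/hadoop/yarn/ | grep -i shuffle

# Restart the NodeManager so AuxServices reloads
sbin/yarn-daemon.sh stop nodemanager
sbin/yarn-daemon.sh start nodemanager

# The external shuffle service listens on port 7337 by default
netstat -tlnp | grep 7337
```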

The NodeManager still failed to start; checking nodemanager.log showed a new error:

java.lang.NoSuchMethodError: org.spark_project.com.fasterxml.jackson.core.JsonFactory.requiresPropertyOrdering()Z

Adding the Jackson jars did not help. The same error is reported here: https://www.oschina.net/question/3721355_2269200

Result: unresolved.

3. Enabling Spark dynamic resource allocation

Add the following to spark-defaults.conf:

spark.shuffle.service.enabled true       # enable the external shuffle service
spark.shuffle.service.port 7337          # shuffle service port; must match yarn-site.xml
spark.dynamicAllocation.enabled true     # enable dynamic resource allocation
spark.dynamicAllocation.minExecutors 1   # minimum executors per application
spark.dynamicAllocation.maxExecutors 30  # maximum concurrent executors per application
spark.dynamicAllocation.schedulerBacklogTimeout 1s
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout 5s

These options can also be set on a SparkConf in application code, or passed with --conf in the submit script.

4. The manually built Hadoop cdh5.14.2-2.6.0 + Spark 2.2.1 combination keeps failing with the error above

5. Testing with Cloudera CDH 5.14.0 instead

1. Add the configuration above to yarn-site.xml


2. Plain submission: launch a spark2 shell and watch YARN

spark2-shell --master yarn-client \
 --executor-memory 2G \
 --num-executors 10

All 10 executors (the driver takes one core as well) acquire resources even though there is no job to run; they hold them unused, wasting cluster resources.

3. Submitting with dynamic resource allocation enabled

The key settings: --queue selects the YARN queue; the extraJavaOptions confs point log4j at a custom config; spark.locality.wait sets the data-locality wait time; spark.task.maxFailures sets the task retry limit; the memoryOverhead confs size off-heap memory; the dynamicAllocation confs enable dynamic allocation with 1 to 30 executors and a 3s idle timeout; spark.shuffle.service.enabled turns on the external shuffle service.

spark2-shell --master yarn --deploy-mode client \
  --queue "test" \
  --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j-yarn.properties \
  --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j-yarn.properties \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.locality.wait=10 \
  --conf spark.task.maxFailures=8 \
  --conf spark.ui.killEnabled=false \
  --conf spark.logConf=true \
  --conf spark.yarn.driver.memoryOverhead=512 \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  --conf spark.yarn.maxAppAttempts=4 \
  --conf spark.yarn.am.attemptFailuresValidityInterval=1h \
  --conf spark.yarn.executor.failuresValidityInterval=1h \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=30 \
  --conf spark.dynamicAllocation.executorIdleTimeout=3s \
  --conf spark.shuffle.service.enabled=true

Only 1 executor (the driver's) is allocated: no job has been submitted yet, and the configured minimum is 1.

sc.textFile("file:///etc/hosts").flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_ + _).count()

Run the word-count job above: it uses only 2 executors (one of them the driver), showing that data at this scale is handled by 2 executors; there is no need to bring up more resources that would just sit idle.

 

4. Benefits of dynamic resource allocation

1. When multiple teams share the cluster, an application requests resources only while it has running tasks and returns them to YARN when idle, making them available to others.

2. It prevents small jobs from grabbing large allocations, which wastes resources and leaves executors idling.

3. It is not recommended for stream processing. A stream's volume varies by time of day, and a streaming job wants to use resources to the fullest to keep consumption fast and avoid backlog; if it continually resized itself based on data volume, the time spent creating and destroying executors would itself add processing latency.
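So for a streaming job you would pin resources explicitly, for example (the class name, jar, and sizing below are illustrative):

```shell
# Streaming job: keep dynamic allocation off and size the job statically
spark2-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.dynamicAllocation.enabled=false \
  --num-executors 10 \
  --executor-memory 2G \
  --class com.example.StreamingApp \
  streaming-app.jar
```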

 

