Spark: "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory" when submitting a job


When submitting a job to a Spark cluster with spark-submit, the following exception may appear:

Exception 1: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

Checking the Spark log file spark-Spark-org.apache.spark.deploy.master.Master-1-hadoop1.out reveals the following:

The Spark Web UI at this point looks like this:

Reason:

In the Spark configuration file spark-env.sh, SPARK_LOCAL_IP had been set to localhost and the worker memory to 512 MB. As a result, all of the Workers shown in the Spark UI resolved to the default localhost of host hadoop1, differing only in the port assigned to each worker. With every worker crowded onto that single host, which had roughly 2.9 GB of memory available, the resources requested by the job (five workers at 512 MB each) could not be satisfied, hence the error above.
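For reference, a reconstruction of the problematic spark-env.sh settings described above might look like this (the exact values are assumptions based on the description, not the original file):

    export SPARK_LOCAL_IP=localhost     # every worker registers under localhost on hadoop1
    export SPARK_WORKER_MEMORY=512m     # each worker offers only 512 MB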

Solution:

Edit the Spark configuration file spark-env.sh: change SPARK_LOCAL_IP from localhost to the actual hostname of each node (hadoop1, hadoop2, ...), and set SPARK_WORKER_MEMORY to a value smaller than the memory allocated to the corresponding machine. A sample configuration is sketched below.
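A minimal sketch of the corrected spark-env.sh on node hadoop1 (the hostname and memory figure are only examples, not the original values):

    export SPARK_LOCAL_IP=hadoop1       # use the node's real hostname (hadoop2 on hadoop2, and so on)
    export SPARK_WORKER_MEMORY=1g       # keep this below the memory actually available on this machine

After editing spark-env.sh on every node, restart the master and workers (for example with sbin/stop-all.sh followed by sbin/start-all.sh) so the new settings take effect.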

 

Submit a job (WordCount) to the Spark cluster; the submit script is shown below:
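The original runSpark.sh is not reproduced here; the following is a minimal sketch of such a submit script, in which the class name, jar path, master URL, and input/output paths are all placeholders to be adapted to your own deployment:

    #!/bin/bash
    # runSpark.sh -- submit the WordCount job to the standalone Spark cluster
    $SPARK_HOME/bin/spark-submit \
      --master spark://hadoop1:7077 \
      --class com.example.WordCount \
      --executor-memory 512m \
      --total-executor-cores 2 \
      /path/to/wordcount.jar \
      hdfs://hadoop1:9000/input/words.txt \
      hdfs://hadoop1:9000/output/wordcount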

Execute the script to run the Spark job; the process is as follows:

./runSpark.sh

The resulting WordCount output is as follows:

The Spark UI while the job runs looks like this:

