Container [pid=6263,containerID=container_1494900155967_0001_02_000001] is running beyond virtual memory limits


Running Spark in yarn-client mode, spark-submit failed with the error below:

User:  hadoop  
Name:  Spark Pi  
Application Type:  SPARK  
Application Tags:   
YarnApplicationState:  FAILED  
FinalStatus Reported by AM:  FAILED  
Started:  16-May-2017 10:03:02  
Elapsed:  14sec  
Tracking URL:  History  
Diagnostics:  Application application_1494900155967_0001 failed 2 times due to AM Container for appattempt_1494900155967_0001_000002 exited with exitCode: -103 
For more detailed output, check application tracking page:http://master:8088/proxy/application_1494900155967_0001/Then, click on links to logs of each attempt. 
Diagnostics: Container [pid=6263,containerID=container_1494900155967_0001_02_000001] is running beyond virtual memory limits. Current usage: 107.3 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.  
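For reference, a submission along these lines reproduces the setup. This is a sketch only: the original post does not show the exact command, so the class name, jar path, Spark version, and flags are assumed from the stock SparkPi example.

```shell
# Assumed reconstruction of the failing submission (SparkPi, yarn-client mode).
# The jar path and version string are placeholders, not from the original post.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  --executor-memory 1G \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.0.jar 100
```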

In other words, the container tried to use 2.2 GB of virtual memory while its virtual-memory limit was only 2.1 GB, so YARN killed the container.

My executor memory (spark.executor.memory) was set to 1 GB, so the container's physical memory limit is 1 GB. YARN's default ratio of virtual to physical memory is 2.1, which puts the virtual-memory limit at 2.1 GB, less than the 2.2 GB the container needed. The fix is to raise the virtual-to-physical ratio by adding a property to yarn-site.xml:

    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.5</value>
    </property>
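The arithmetic behind the fix can be checked directly. A minimal sketch; the 1 GB limit and 2.2 GB usage come from the diagnostics above:

```python
# Numbers from the YARN diagnostics in the error message above.
pmem_limit_gb = 1.0    # container physical memory limit (1 GB)
vmem_used_gb = 2.2     # virtual memory the container actually used
default_ratio = 2.1    # YARN default yarn.nodemanager.vmem-pmem-ratio

# With the default ratio, the virtual limit is 1.0 * 2.1 = 2.1 GB.
print(vmem_used_gb > pmem_limit_gb * default_ratio)  # True  -> container killed

# With the ratio raised to 2.5, the limit becomes 2.5 GB.
new_ratio = 2.5
print(vmem_used_gb > pmem_limit_gb * new_ratio)      # False -> container survives
```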

Then restart YARN. With the ratio at 2.5, the container is allowed 2.5 GB of virtual memory and the job runs without error.
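The restart can be done with the stock Hadoop scripts. A sketch, assuming a standard Hadoop 2.x layout with the sbin scripts on the PATH, and that the updated yarn-site.xml has been copied to every NodeManager first:

```shell
# Restart YARN so the NodeManagers pick up the new vmem-pmem ratio.
# stop-yarn.sh / start-yarn.sh are the standard Hadoop sbin scripts.
stop-yarn.sh
start-yarn.sh
```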

