Hive error: Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)


The error occurred while executing insert into table video_orc select * from video_ori;

Checking the Hive log, the detailed error is as follows:

2020-10-07T09:33:11,117  INFO [HiveServer2-Background-Pool: Thread-241] ql.Driver: Concurrency mode is disabled, not creating a lock manager
2020-10-07T09:33:11,119 ERROR [HiveServer2-Background-Pool: Thread-241] operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
        at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335) ~[hive-service-3.1.2.jar:3.1.2]
        at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226) ~[hive-service-3.1.2.jar:3.1.2]
        at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) ~[hive-service-3.1.2.jar:3.1.2]
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316) ~[hive-service-3.1.2.jar:3.1.2]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_212]
        at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_212]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) ~[hadoop-common-3.1.3.jar:?]
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329) ~[hive-service-3.1.2.jar:3.1.2]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_212]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_212]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_212]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_212]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
2020-10-07T09:33:11,120  INFO [HiveServer2-Handler-Pool: Thread-52] conf.HiveConf: Using the default value passed in for log id: 544f2b3d-7a09-44a1-bf44-f25c7b2ad6e4

Analysis:

After trying several approaches I could not extract any more specific error information, so the root cause could not be determined from this stack trace alone.

I then came across a post suggesting running the same SQL in the hive shell. (I had originally been running it under beeline -u jdbc:hive2://Linux201:10000 -n zls, where the error information returned is incomplete.)

That is, execute it directly at the hive (default) prompt.

This time a more specific error appeared:

a running beyond virtual memory error.

The full message was:

Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.…………
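
Instead of re-running the statement in the hive shell, the same diagnostic can usually be recovered from the YARN side. A minimal sketch, where the application ID is derived from the container ID quoted above and the grep pattern only narrows the output:

# Aggregated container logs normally contain the same "beyond virtual memory limits" diagnostic
yarn logs -applicationId application_1389136889967_0001 | grep -i "beyond"

The ResourceManager web UI (port 8088 by default) also shows the diagnostics for the killed container of the failed application.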

Cause: a container running on a worker node tried to use more memory than its limit allowed, so the NodeManager killed it.
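
For context, the 2.1 GB cap in the message comes from YARN's virtual memory check: the default yarn.nodemanager.vmem-pmem-ratio is 2.1, so a container with 1 GB of physical memory is allowed 1 GB × 2.1 = 2.1 GB of virtual memory. Besides increasing the task memory (the fix used below), the check itself can be relaxed in yarn-site.xml; a sketch, with values you would adjust for your own cluster:

<property>
  <!-- Disable the virtual memory check entirely (default is true) -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <!-- Or keep the check but allow more virtual memory per MB of physical memory (default 2.1) -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>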

Solution: increase the memory allocated to the map and reduce tasks.

In Hadoop's mapred-site.xml configuration file, set the memory for map and reduce tasks as follows (the actual values should be adjusted to your machines' memory size and workload):

<property>
  <!-- Container memory (MB) allocated for each map task -->
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <!-- JVM heap for map tasks; keep it below mapreduce.map.memory.mb -->
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024M</value>
</property>
<property>
  <!-- Container memory (MB) allocated for each reduce task -->
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <!-- JVM heap for reduce tasks; keep it below mapreduce.reduce.memory.mb -->
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2560M</value>
</property>

Stop the cluster, distribute the updated configuration to all nodes, and start the cluster again.
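
A sketch of that step, assuming the Hadoop start/stop scripts are on the PATH, Hadoop 3.1.3 lives under /opt/module/hadoop-3.1.3 (the path is an assumption; the version matches the jars in the log above), and the other worker hostnames are Linux202 and Linux203 (assumed; only Linux201 appears earlier):

# Stop YARN and HDFS
stop-yarn.sh
stop-dfs.sh

# Distribute the updated mapred-site.xml to the other nodes
scp /opt/module/hadoop-3.1.3/etc/hadoop/mapred-site.xml Linux202:/opt/module/hadoop-3.1.3/etc/hadoop/
scp /opt/module/hadoop-3.1.3/etc/hadoop/mapred-site.xml Linux203:/opt/module/hadoop-3.1.3/etc/hadoop/

# Bring the cluster back up
start-dfs.sh
start-yarn.sh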

Run the statement again: 🆗
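
For a quicker test, the same properties can also be raised just for the current beeline or hive session, without editing mapred-site.xml or restarting anything. A sketch using the same values as above:

-- Override the job memory settings for this session only, then re-run the insert
set mapreduce.map.memory.mb=1536;
set mapreduce.map.java.opts=-Xmx1024M;
set mapreduce.reduce.memory.mb=3072;
set mapreduce.reduce.java.opts=-Xmx2560M;
insert into table video_orc select * from video_ori;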

Summary: nothing much to summarize.

