Fixing Hive [Error 10293]: Unable to create temp file for insert values


Scenario:

  • Hadoop and Hive are installed and configured.
  • A table is created successfully (table name: student).
  • hive> select * from student runs without error.
  • hive> insert into table student values (101, 'leo'); fails with:

FAILED: SemanticException [Error 10293]: Unable to create temp file for insert values File /tmp/hive/root/63412761-6637-49eb-8a22-d06563b9b6ad/_tmp_space.db/Values__Tmp__Table__1/data_file could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1625)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3127)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3051)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
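
The key line is "There are 0 datanode(s) running": the NameNode sees no live DataNodes, so HDFS cannot place even one replica of the temp file. Before touching Hive itself, two standard HDFS checks are worth running (my suggestion, not in the original post; both commands are part of stock Hadoop/JDK):

    jps                    # on each slave: a DataNode process should be listed
    hdfs dfsadmin -report  # on the master: "Live datanodes" should be > 0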

 

Investigation:

1. Self-check: in hive/conf/ I found that hive-env.sh.template had never been renamed to hive-env.sh -- an embarrassing rookie mistake. I fixed it (a sketch follows this step).

Re-ran hive> insert …… and found the problem was not solved, although the corresponding metadata was now created (something the earlier ./hive run had not done).

So the error persisted.
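
For reference, the rename fix as commands (the install paths below are assumptions; use your own locations):

    cd /opt/hive/conf                      # assumed Hive install path
    cp hive-env.sh.template hive-env.sh
    # then set at least HADOOP_HOME inside hive-env.sh, e.g.:
    #   HADOOP_HOME=/opt/hadoop            # assumed Hadoop install path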

2. Searched online. A common suggestion: delete the HDFS tmp directory (hadoop fs -rm -r /tmp) ====> reformat the NameNode (hdfs namenode -format) ====> recreate the /tmp directory (hadoop fs -mkdir /tmp) ====> restart Hive. The full sequence is sketched below.

No effect; the problem remained.
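
For the record, a sketch of that sequence, with stop/start steps added by me (note that reformatting the NameNode destroys all HDFS metadata, so this only makes sense on a fresh test cluster):

    hadoop fs -rm -r /tmp     # remove the HDFS /tmp directory
    stop-dfs.sh               # stop HDFS before reformatting (my addition)
    hdfs namenode -format     # reformat the NameNode; wipes HDFS metadata
    start-dfs.sh              # bring HDFS back up (my addition)
    hadoop fs -mkdir /tmp     # recreate /tmp
    # then restart the Hive CLI and retry the insert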

3. Kept searching and found a tip from someone experienced: an unclosed firewall can keep the master from seeing the DataNodes on the slaves. That matched the error message "There are 0 datanode(s) running and no node(s)": Hadoop really was not seeing any DataNode.

So I stopped the firewall on both master and slave (systemctl stop firewalld); see the commands below.

Re-entered hive, and the insert succeeded!
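
The commands, with a persistence step added by me (systemctl disable keeps the firewall off across reboots; the verification step is also my suggestion):

    # run on the master and on every slave
    systemctl stop firewalld       # stop the firewall immediately
    systemctl disable firewalld    # keep it off after reboot (my addition)

    # verify the DataNodes have registered with the NameNode
    hdfs dfsadmin -report          # "Live datanodes" should now be > 0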

 

Resolution:

1. The insert fails with Error 10293 (stack trace above).

2. Stop the firewall on both master and slave: systemctl stop firewalld

3. Resolved: the insert now executes successfully.
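
A quick re-test in the Hive CLI (values match the scenario above; the output shown is what a successful run should look like):

    hive> insert into table student values (101, 'leo');
    hive> select * from student;
    OK
    101    leo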

 

