HDFS DataNode fails to start


In hadoop-root-datanode-ubuntu.log:
2015-03-12 23:52:33,671 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: Incompatible clusterIDs in /hdfs/name/dfs/data:  namenode clusterID = CID-70d64aad-1dfe-4f87-af15-d53ff80db3dd; datanode clusterID = CID-388a9ec6-cb87-4b0d-97c4-3b4d5c787b76
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
        at java.lang.Thread.run(Thread.java:745)
2015-03-12 23:52:33,680 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2015-03-12 23:52:33,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2015-03-12 23:52:35,790 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2015-03-12 23:52:35,791 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2015-03-12 23:52:35,792 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ubuntu/127.0.1.1
************************************************************/
 
Cause:
After the NameNode was reformatted, the clusterIDs of the NameNode and the DataNode no longer match, so the DataNode cannot start.
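To confirm the mismatch, you can compare the clusterID recorded in each side's current/VERSION file. A quick check, assuming the DataNode directory from the log above; the NameNode path shown here is only an assumption and should be replaced with whatever dfs.name.dir points to:
# DataNode storage directory (path taken from the log above)
grep clusterID /hdfs/name/dfs/data/current/VERSION
# NameNode storage directory (assumed path; use your dfs.name.dir value)
grep clusterID /hdfs/name/dfs/name/current/VERSION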
Additionally:
This error also causes the following failure when loading data into Hive (CREATE TABLE itself does not report an error, because Hive's metadata is not stored in HDFS):
hive> load data local inpath '/root/dbfile' overwrite into table employees PARTITION (country='US', state='IL');
Loading data to table default.employees partition (country=US, state=IL)
Failed with exception Unable to move source file:/root/dbfile to destination hdfs://localhost:9000/user/hive/warehouse/employees/country=US/state=IL/dbfile
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTas
 
Solution:
Delete the directories where HDFS stores its data and reformat HDFS (relevant parameters: dfs.name.dir / dfs.data.dir, named dfs.namenode.name.dir / dfs.datanode.data.dir in Hadoop 2.x):
hadoop namenode -format
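A minimal end-to-end sketch of the fix, assuming a single-node setup using the directories mentioned above and that the data currently in HDFS can be discarded (reformatting wipes all HDFS contents; the name directory path below is an assumption, adjust both paths to your configuration):
# stop the HDFS daemons
$HADOOP_HOME/sbin/stop-dfs.sh
# remove the old NameNode and DataNode storage directories
# (/hdfs/name/dfs/data is from the log above; the name directory is assumed)
rm -rf /hdfs/name/dfs/name /hdfs/name/dfs/data
# reformat the NameNode, which generates a fresh clusterID
hadoop namenode -format
# restart HDFS; the DataNode registers with the new clusterID
$HADOOP_HOME/sbin/start-dfs.sh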
 

