1. hdfs namenode –format: note that the dash here is an en-dash (a copy-paste artifact) and will not be recognized; to format, use this instead: hadoop namenode -format (plain hyphen).
If the DataNode on some nodes fails to start, the old data left behind on those nodes was not cleaned out completely; delete it again. http://blog.csdn.net/lulongzhou_llz/article/details/40590427
The fix is actually simple: after adding a new node, just delete everything in the name and data folders under dfs on the old nodes.
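A minimal sketch of that cleanup, assuming the /usr/local/hadoop/hdfs layout that appears in the log below (this erases all HDFS data, so only do it on a cluster you can rebuild):

stop-dfs.sh                              # stop HDFS first
rm -rf /usr/local/hadoop/hdfs/name/*     # old NameNode metadata (on the master)
rm -rf /usr/local/hadoop/hdfs/data/*     # old DataNode blocks (run on every node)
hadoop namenode -format                  # regenerates a fresh clusterID
start-dfs.sh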
After running start-dfs.sh, the DataNode does not start:
FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/hdfs/data: namenode clusterID = CID-af6f15aa-efdd-479b-bf55-77270058e4f7; datanode clusterID = CID-736d1968-8fd1-4bc4-afef-5c72354c39ce
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:472)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
at java.lang.Thread.run(Thread.java:744)
As the log shows, the cause is that the DataNode's clusterID does not match the NameNode's clusterID.
Open the DataNode and NameNode directories configured in hdfs-site.xml and look at the VERSION file inside each one's current folder: the clusterID values are indeed inconsistent, exactly as the log reports. Change the clusterID in the DataNode's VERSION file to match the NameNode's, restart DFS (run start-dfs.sh), and jps will show the DataNode running normally.
Why this happens: after DFS was formatted the first time, Hadoop was started and used; later the format command (hdfs namenode -format) was run again. Reformatting regenerates the NameNode's clusterID, while each DataNode's clusterID stays unchanged.
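To keep the existing blocks instead of wiping and reformatting, copy the NameNode's clusterID into each DataNode's VERSION file as described above. A minimal sketch, assuming the data dir from the log and a sibling name dir (adjust both to match your hdfs-site.xml; in this single-node setup both live on the same machine):

# read the NameNode's clusterID
NN_CID=$(grep '^clusterID' /usr/local/hadoop/hdfs/name/current/VERSION | cut -d= -f2)
# overwrite the stale clusterID on each DataNode with the NameNode's
sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" /usr/local/hadoop/hdfs/data/current/VERSION
start-dfs.sh
jps    # DataNode should now be listed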
2. http://www.netfoucs.com/article/manburen01/94862.html
I. Starting HBase
After running start-hbase.sh on the NameNode host, the HMaster starts but dies a few seconds later.
The log shows the error:
[master:master:60000] catalog.CatalogTracker: Failed verification of hbase:meta,,1 at address=node3,60020,1409104234032, exception=org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2612)
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4003)
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionInfo(HRegionServer.java:3395)
at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:20036)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1889)
master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.TableExistsException: hbase:namespace
at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:120)
at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:230)
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:85)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1060)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:921)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:607)
at java.lang.Thread.run(Thread.java:745)
[main-EventThread] wal.HLogSplitter: Archived processed log hdfs://master:9000/hbase/WALs/node4,60020,1409104233517-splitting/node4%2C60020%2C1409104233517.1409104239901 to hdfs://master:9000/hbase/oldWALs/node4%2C60020%2C1409104233517.1409104239901
2014-08-27 13:44:30,805 WARN [MASTER_SERVER_OPERATIONS-master:60000-1] master.SplitLogManager: Stopped while waiting for log splits to be completed
2014-08-27 13:44:30,806 WARN [MASTER_SERVER_OPERATIONS-master:60000-1] master.SplitLogManager: error while splitting logs in [hdfs://master:9000/hbase/WALs/node1,60020,1409104233856-splitting] installed = 1 but only 0 done
java.io.IOException: failed log splitting for node1,60020,1409104233856, will retry
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:326)
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:206)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://master:9000/hbase/WALs/node1,60020,1409104233856-splitting] Task = installed = 1 done = 0 error = 0
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:362)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:409)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:383)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:281)
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:199)
... 4 more
2014-08-27 13:44:30,808 INFO [main-EventThread] master.SplitLogManager: Done splitting /hbase/splitWAL/WALs%2Fnode4%2C60020%2C1409104233517-splitting%2Fnode4%252C60020%252C1409104233517.1409104239901
2014-08-27 13:44:30,807 ERROR [MASTER_SERVER_OPERATIONS-master:60000-4] executor.EventHandler: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: Server is stopped
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:187)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2014-08-27 13:44:30,817 ERROR [MASTER_SERVER_OPERATIONS-master:60000-2] executor.EventHandler: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for node3,60020,1409104234032, will retry
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:326)
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:206)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://master:9000/hbase/WALs/node3,60020,1409104234032-splitting] Task = installed = 1 done = 0 error = 0
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:362)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:409)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:383)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:281)
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:199)
... 4 more
2014-08-27 13:44:30,817 DEBUG [MASTER_SERVER_OPERATIONS-master:60000-0] master.DeadServer: Finished processing node3,60020,1409104234032
2014-08-27 13:44:30,818 ERROR [MASTER_SERVER_OPERATIONS-master:60000-0] executor.EventHandler: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: Server is stopped
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:187)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2014-08-27 13:44:30,859 INFO [main-EventThread] master.SplitLogManager: task /hbase/splitWAL/WALs%2Fnode1%2C60020%2C1409104233856-splitting%2Fnode1%252C60020%252C1409104233856.1409104237844 entered state: DONE node3,60020,1409118244326
2014-08-27 13:44:31,001 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:192)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2799)
Solutions:
1. With the system firewall enabled, hostname-to-IP resolution for the host was broken; delete HBase's tmp folder and restart (do this on every node). After removing the "127.0.0.1 psyDebian" entry from /etc/hosts (and the matching entries on the slave nodes), everything ran normally: HBase started without problems and creating tables worked again. I knew from the start that the 127.0.1.1 entry in the hosts file had to be removed, but I didn't expect that the "127.0.0.1 <hostname>" entry had to go as well.
2. The Hadoop cluster is stuck in safe mode; run hadoop dfsadmin -safemode leave to exit it.
3. Data stored in HBase was lost; either recover it through Hadoop's trash mechanism or delete the HBase data. (A command sketch follows this list.)
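A hedged sketch of those three fixes as shell commands; the psyDebian hostname comes from the post above, while /tmp/hbase-$USER (the default hbase.tmp.dir) and the trash path are assumptions that may differ in your cluster:

# 1. drop the loopback entries that shadow the real host IP (every node)
sudo sed -i '/^127\.0\.1\.1/d; /^127\.0\.0\.1 psyDebian/d' /etc/hosts
rm -rf /tmp/hbase-$USER          # clear HBase's tmp dir, then restart HBase
# 2. leave HDFS safe mode if the cluster is stuck in it
hadoop dfsadmin -safemode leave
# 3. check the HDFS trash for deleted HBase files before giving up on them
hadoop fs -ls /user/$USER/.Trash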
3. Common operations: http://blog.csdn.net/name_xiaoai/article/details/21812153
4. Edit yarn-site.xml and remove the stray whitespace inside the <value> elements; the values must contain no spaces. For example:
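(The property below is only an illustration; the original note does not say which value was affected.)

<property>
  <name>yarn.nodemanager.aux-services</name>
  <!-- wrong: <value> mapreduce_shuffle </value>  (padding spaces break it) -->
  <value>mapreduce_shuffle</value>
</property>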
5. WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Add these lines to your .bashrc or .bash_profile:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
I've also added these two environment variables in hadoop-env.sh:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"
export HADOOP_COMMON_LIB_NATIVE_DIR="/usr/local/hadoop/lib/native/"
In my case, after I built Hadoop on my 64-bit Linux Mint OS, I replaced the native library in hadoop/lib, but the problem persisted. Then I figured out that Hadoop was pointing at hadoop/lib, not at hadoop/lib/native, so I just moved everything from the native directory into its parent and the warning was gone.
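In shell terms that move is roughly the following (HADOOP_HOME standing in for the install root is my assumption, not the poster's wording):

mv $HADOOP_HOME/lib/native/* $HADOOP_HOME/lib/   # lift the .so files one level up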
I had the same issue. It was solved by adding the following lines to .bashrc:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Just append the word native to the java.library.path in your HADOOP_OPTS, like this:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"
export HADOOP_PREFIX=$HADOOP_HOME
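Whichever variant you use, you can verify that the native library is now being picked up; the checknative subcommand exists on Hadoop 2.4 and later:

hadoop checknative -a   # lists each native codec and whether it loaded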