Hadoop 3.3.0 Troubleshooting Summary


Starting HDFS

[zhangbin@hadoop102 hadoop-3.3.0]$  sbin/start-dfs.sh

Error message:

hadoop102: ERROR: Cannot set priority of namenode process 35346

How to resolve it:

 Switch to the Hadoop log directory:

 [zhangbin@hadoop102 hadoop-3.3.0]$ cd logs/

   View the log:

    [zhangbin@hadoop102 logs]$ cat hadoop-zhangbin-namenode-hadoop102.log

    The error in the log looks roughly like this:

2021-04-05 10:21:36,656 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: hadoop102:9870
at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1292)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1314)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1373)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1223)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:170)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:946)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:757)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:987)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1756)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1821)
Caused by: java.io.IOException: Failed to bind to hadoop102/192.168.10.102:9870
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1279)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1310)
... 9 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
... 12 more

 

Solution: find the process occupying the port and kill it.

[zhangbin@hadoop102 logs]$ lsof -i:9870 -P

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 22003 zhangbin 289u IPv4 89125 0t0 TCP hadoop102:9870 (LISTEN)

[zhangbin@hadoop102 logs]$ kill -9 22003

Then restart HDFS.
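The lookup-and-kill step above can be wrapped in a small reusable function. This is only a sketch: it assumes `lsof` is installed, and it tries a plain `kill` (SIGTERM) before falling back to the `kill -9` that this article uses directly.

```shell
#!/usr/bin/env bash
# Sketch: free a TCP port by killing whatever is listening on it.

free_port() {
  local port="$1"
  # lsof -t prints only PIDs; -i:PORT filters to processes using that port
  local pids
  pids=$(lsof -t -i:"$port" 2>/dev/null)
  if [ -z "$pids" ]; then
    echo "port $port is already free"
    return 0
  fi
  # Try SIGTERM first so the process can shut down cleanly
  kill $pids 2>/dev/null
  sleep 2
  # Force-kill anything that survived
  kill -9 $pids 2>/dev/null
  echo "killed: $pids"
}

# Example: free_port 9870
```

After the port is free, `sbin/start-dfs.sh` should bring the NameNode web UI up on 9870 again.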

 

Only a matching DataNode/NameNode pair can work together (after the cluster is restarted, the DataNode fails to start)
 
Cause (the original post illustrated this with a diagram that is no longer available): reformatting the NameNode generates a new clusterID, while the DataNodes still hold the clusterID from the previous format in their VERSION files, so they refuse to register with the new NameNode.

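A quick check is to compare the clusterID recorded by the NameNode and by a DataNode; after a reformat they usually differ, which is what keeps the DataNode from starting. The paths below are assumptions based on the `data` directory layout mentioned in this article; adjust them to your `dfs.namenode.name.dir` / `dfs.datanode.data.dir` settings.

```shell
#!/usr/bin/env bash
# Sketch: compare the clusterID stored by the NameNode and a DataNode.
# VERSION files are plain key=value text, e.g. clusterID=CID-...

get_cluster_id() {
  grep '^clusterID=' "$1" | cut -d= -f2
}

# Assumed locations, following this article's data/ layout:
nn_version="$HADOOP_HOME/data/dfs/name/current/VERSION"
dn_version="$HADOOP_HOME/data/dfs/data/current/VERSION"

if [ -f "$nn_version" ] && [ -f "$dn_version" ]; then
  nn_id=$(get_cluster_id "$nn_version")
  dn_id=$(get_cluster_id "$dn_version")
  if [ "$nn_id" = "$dn_id" ]; then
    echo "clusterIDs match: $nn_id"
  else
    echo "MISMATCH: namenode=$nn_id datanode=$dn_id"
  fi
fi
```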
Fix:

  1. Stop all Hadoop services on every node.

  2. On each node, delete the DataNode data (it lives under /tmp by default; in my setup it is configured under the data directory in the Hadoop installation root), and also delete the logs directory under the Hadoop root. Remove everything inside both directories.

  3. Reformat the NameNode:

            hdfs namenode -format

  4. Start the Hadoop services again.
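The steps above can be sketched as one script run from the NameNode host. This is only a sketch under several assumptions: the worker host list (only hadoop102 appears in this article), passwordless ssh between nodes, and the data/logs locations under $HADOOP_HOME. Because it deletes all HDFS data, the destructive part only runs behind an explicit `--run` flag.

```shell
#!/usr/bin/env bash
# Sketch: wipe and reformat a small HDFS cluster after a clusterID mismatch.
# DESTRUCTIVE - deletes all HDFS data. Run as: ./reset_hdfs.sh --run

# Build the remote wipe command for one Hadoop root (pure string helper)
wipe_cmd() {
  printf 'rm -rf %s/data/* %s/logs/*' "$1" "$1"
}

reset_cluster() {
  : "${HADOOP_HOME:?HADOOP_HOME must point at the Hadoop installation root}"
  # Host names beyond hadoop102 are assumptions about this cluster
  local hosts="hadoop102 hadoop103 hadoop104"

  # 1. Stop all HDFS services
  "$HADOOP_HOME/sbin/stop-dfs.sh"

  # 2. Delete DataNode data and logs on every node
  local h
  for h in $hosts; do
    ssh "$h" "$(wipe_cmd "$HADOOP_HOME")"
  done

  # 3. Reformat the NameNode (generates a fresh clusterID)
  hdfs namenode -format

  # 4. Start the services again
  "$HADOOP_HOME/sbin/start-dfs.sh"
}

if [ "${1:-}" = "--run" ]; then
  reset_cluster
fi
```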
