Hadoop cluster slave nodes fail to start: spark-slave1: ssh: Could not resolve hostname spark-slave1: Name or service not known


Error output:

./start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-master.out
spark-slave1: ssh: Could not resolve hostname spark-slave1: Name or service not known
spark-slave2: ssh: Could not resolve hostname spark-slave2: Name or service not known
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-master.out
spark-slave2: ssh: Could not resolve hostname spark-slave2: Name or service not known
spark-slave1: ssh: Could not resolve hostname spark-slave1: Name or service not known

Analysis: the error means the hostnames spark-slave1 and spark-slave2 cannot be resolved, yet my slave nodes are clearly named node1 and node2. After a long search I finally found the cause.
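Before editing anything, you can confirm the diagnosis by checking whether each name listed in the slaves file actually resolves. A minimal sketch; the `check_slaves` helper is mine, not a Hadoop tool, and the file path is the one used in this post:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of Hadoop): verify that every hostname
# listed in a slaves file resolves, using getent.
check_slaves() {
  local file="$1" host
  while IFS= read -r host; do
    [ -z "$host" ] && continue               # skip blank lines
    if getent hosts "$host" > /dev/null; then
      echo "OK:   $host"
    else
      echo "FAIL: $host (cannot resolve)"
    fi
  done < "$file"
}

# Example: check_slaves /usr/local/hadoop/etc/hadoop/slaves
```

Any `FAIL` line points at a hostname that ssh will also be unable to resolve, which is exactly the error above.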

The slaves file contained:

vim /usr/local/hadoop/etc/hadoop/slaves 

spark-slave1
spark-slave2

It still held the default (template) slave hostnames; change them to your own slave nodes:

vim /usr/local/hadoop/etc/hadoop/slaves 

node1
node2
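For the new names to work, the master must also be able to resolve node1 and node2, typically via entries in /etc/hosts. A sketch of what those entries might look like; the IP addresses here are placeholders for your own nodes:

```
192.168.1.101   node1
192.168.1.102   node2
```

If `ssh node1` works from the master without an error, resolution is fine and only the slaves file needed fixing.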

Then restart Hadoop:

./stop-all.sh    # stop
./start-all.sh   # start

After that the errors were gone and the slave nodes started successfully:

@master:/usr/local/hadoop/sbin# jps
5698 ResourceManager
6403 Jps
5547 SecondaryNameNode
5358 NameNode
@node1:~# jps
885 Jps
744 NodeManager
681 DataNode
@node2:~# jps
914 Jps
773 NodeManager
710 DataNode

Summary: the official description of Hadoop's slaves file is roughly this: one machine in the cluster is designated the NameNode and a different machine the JobTracker; these are the masters. The remaining machines act as both DataNode and TaskTracker; these are the slaves. List every slave's hostname or IP address in the slaves file, one per line. In other words, the file holds the slave nodes' hostnames or IPs, so addresses such as 172.x.x.x work just as well.
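As noted above, the slaves file accepts hostnames or IP addresses, one per line, so an equivalent file could list addresses directly. The addresses below are placeholders for your own nodes:

```
172.16.0.11
172.16.0.12
```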

 

