Adding or Removing DataNode Nodes in Hadoop 2.7


1. Test environment

IP             Hostname  Role
10.124.147.22  hadoop1   namenode
10.124.147.23  hadoop2   namenode
10.124.147.32  hadoop3   resourcemanager
10.124.147.33  hadoop4   resourcemanager
10.110.92.161  hadoop5   datanode/journalnode
10.110.92.162  hadoop6   datanode
10.122.147.37  hadoop7   datanode

2. Required configuration parameters

2.1 hdfs-site.xml parameters

[hadoop@10-124-147-22 hadoop]$ grep dfs\.host -A10 /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<!-- file listing datanode hosts to exclude (decommission) -->
<name>dfs.hosts.exclude</name>
<value>/usr/local/hadoop/etc/hadoop/dfs_exclude</value>
</property>

<!-- file listing datanode hosts allowed to join (include) -->
<property>
<name>dfs.hosts</name>
<value>/usr/local/hadoop/etc/hadoop/slaves</value>
</property>

2.2 yarn-site.xml parameters

[hadoop@10-124-147-22 hadoop]$ grep exclude-path -A10 /usr/local/hadoop/etc/hadoop/yarn-site.xml
<!-- file listing nodemanager hosts to exclude (decommission) -->
<name>yarn.resourcemanager.nodes.exclude-path</name>
<value>/usr/local/hadoop/etc/hadoop/dfs_exclude</value>
</property>

<!-- file listing nodemanager hosts allowed to join (include) -->
<property>
<name>yarn.resourcemanager.nodes.include-path</name>
<value>/usr/local/hadoop/etc/hadoop/slaves</value>
</property>

3. Decommissioning an existing host

1. On a namenode host, add the IP of the host to be decommissioned to the file named by the dfs.hosts.exclude parameter in hdfs-site.xml (here, dfs_exclude):

[hadoop@10-124-147-22 hadoop]$ cat /usr/local/hadoop/etc/hadoop/dfs_exclude 
10.122.147.37

2. Copy it to the other Hadoop hosts:

[hadoop@10-124-147-22 hadoop]$ for i in {2,3,4,5,6,7};do scp etc/hadoop/dfs_exclude hadoop$i:/usr/local/hadoop/etc/hadoop/;done

3. Refresh the namenode information:

[hadoop@10-124-147-22 hadoop]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful for hadoop1/10.124.147.22:9000
Refresh nodes successful for hadoop2/10.124.147.23:9000

4. Check the namenode status report:

[hadoop@10-124-147-22 hadoop]$ hdfs dfsadmin -report
Configured Capacity: 1100228980736 (1.00 TB)
Present Capacity: 1087754866688 (1013.05 GB)
DFS Remaining: 1087752667136 (1013.05 GB)
DFS Used: 2199552 (2.10 MB)
DFS Used%: 0.00%
Under replicated blocks: 11
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 10.122.147.37:50010 (hadoop7)
Hostname: hadoop7
Decommission Status : Decommission in progress
Configured Capacity: 250831044608 (233.60 GB)
DFS Used: 733184 (716 KB)
Non DFS Used: 1235771392 (1.15 GB)
DFS Remaining: 249594540032 (232.45 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 24 10:25:17 CST 2018

Name: 10.110.92.161:50010 (hadoop5)
Hostname: hadoop5
Decommission Status : Normal
(remaining output omitted)

As shown above, the status of the host being removed, 10.122.147.37, has changed to Decommission in progress, meaning the cluster is transferring the replicas stored on that node to other nodes. When the status changes to Decommissioned, the transfer is complete and the node has effectively been removed from the cluster.

The same status can also be viewed on the HDFS web UI.
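Instead of eyeballing the report, the status can be extracted programmatically. Below is a minimal sketch (the helper name and parsing are my own, assuming the `hdfs dfsadmin -report` output format shown above; in practice the report text would come from running that command):

```python
import re

def decommission_status(report: str, ip: str) -> str:
    """Return the 'Decommission Status' value for the datanode with this IP."""
    # Each datanode section in the report starts with "Name: <ip>:<port>".
    for block in report.split("\n\n"):
        if f"Name: {ip}:" in block:
            m = re.search(r"Decommission Status\s*:\s*(.+)", block)
            if m:
                return m.group(1).strip()
    raise ValueError(f"datanode {ip} not found in report")

# Sample section copied from the report output above.
sample = (
    "Name: 10.122.147.37:50010 (hadoop7)\n"
    "Hostname: hadoop7\n"
    "Decommission Status : Decommission in progress\n"
)

print(decommission_status(sample, "10.122.147.37"))  # Decommission in progress
```

A wrapper script could poll this in a loop and exit once the status reads Decommissioned.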

5. Refresh the resourcemanager information:

[hadoop@10-124-147-32 hadoop]$ yarn rmadmin -refreshNodes

After the refresh, the Active Nodes count can be seen on the resourcemanager web UI,

or checked from the command line:

[hadoop@10-124-147-32 hadoop]$ yarn node -list
Total Nodes:2
         Node-Id	     Node-State	Node-Http-Address	Number-of-Running-Containers
   hadoop5:37438	        RUNNING	     hadoop5:8042	                           0
    hadoop6:9001	        RUNNING	     hadoop6:8042	                           0

4. Adding a new host to the cluster

1. Copy the existing Hadoop configuration to the new host and set up its Java environment.
2. On the namenode, add the new host's IP to the file named by the dfs.hosts parameter:

[hadoop@10-124-147-22 hadoop]$ cat /usr/local/hadoop/etc/hadoop/slaves 
hadoop5
hadoop6
10.122.147.37

3. Sync the slaves file to the other hosts:

[hadoop@10-124-147-22 hadoop]$ for i in {2,3,4,5,6,7};do scp etc/hadoop/slaves hadoop$i:/usr/local/hadoop/etc/hadoop/;done

4. Start the datanode and nodemanager processes on the new host:

[hadoop@10-122-147-37 hadoop]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /letv/hadoop-2.7.6/logs/hadoop-hadoop-datanode-10-122-147-37.out
[hadoop@10-122-147-37 hadoop]$ jps
3068 DataNode
6143 Jps
[hadoop@10-122-147-37 hadoop]$ sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /letv/hadoop-2.7.6/logs/yarn-hadoop-nodemanager-10-122-147-37.out
[hadoop@10-122-147-37 hadoop]$ jps
6211 NodeManager
6403 Jps
3068 DataNode

5. Refresh the namenode:

[hadoop@10-124-147-22 hadoop]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful for hadoop1/10.124.147.22:9000
Refresh nodes successful for hadoop2/10.124.147.23:9000

6. Check the HDFS status:

[hadoop@10-124-147-22 hadoop]$ hdfs dfsadmin -report
Configured Capacity: 1351059292160 (1.23 TB)
Present Capacity: 1337331367936 (1.22 TB)
DFS Remaining: 1337329156096 (1.22 TB)
DFS Used: 2211840 (2.11 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 10.122.147.37:50010 (hadoop7)
Hostname: hadoop7
Decommission Status : Normal
Configured Capacity: 250831044608 (233.60 GB)
DFS Used: 737280 (720 KB)
Non DFS Used: 1240752128 (1.16 GB)
DFS Remaining: 249589555200 (232.45 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 24 17:15:09 CST 2018


Name: 10.110.92.161:50010 (hadoop5)
Hostname: hadoop5
Decommission Status : Normal
Configured Capacity: 550114123776 (512.33 GB)
DFS Used: 737280 (720 KB)
Non DFS Used: 11195953152 (10.43 GB)
DFS Remaining: 538917433344 (501.91 GB)
DFS Used%: 0.00%
DFS Remaining%: 97.96%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 24 17:15:10 CST 2018


Name: 10.110.92.162:50010 (hadoop6)
Hostname: hadoop6
Decommission Status : Normal
Configured Capacity: 550114123776 (512.33 GB)
DFS Used: 737280 (720 KB)
Non DFS Used: 1291218944 (1.20 GB)
DFS Remaining: 548822167552 (511.13 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.77%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 24 17:15:10 CST 2018

7. Refresh the resourcemanager information:

[hadoop@10-124-147-32 hadoop]$ yarn rmadmin -refreshNodes
[hadoop@10-124-147-32 hadoop]$ yarn node -list
18/07/24 18:11:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
Total Nodes:3
         Node-Id	     Node-State	Node-Http-Address	Number-of-Running-Containers
    hadoop7:3296	        RUNNING	     hadoop7:8042	
    hadoop5:37438	        RUNNING	     hadoop5:8042	                           0
    hadoop6:9001	        RUNNING	     hadoop6:8042	                           0

8. How include and exclude affect YARN and HDFS

A nodemanager can connect to the resourcemanager only if it appears in the include file and does not appear in the exclude file.

HDFS behaves slightly differently (in HDFS the include file is simply the one named by dfs.hosts). Its rules are as follows:

In include?  In exclude?  Can connect?
no           no           no
no           yes          no
yes          no           yes
yes          yes          yes, and will be decommissioned

If no include file is specified, or the include file is empty, every node is treated as being in the include list.
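The HDFS rules above can be captured in a small model (a sketch for illustration only; the function name and return convention are my own):

```python
def datanode_may_connect(host, include, exclude):
    """Model the HDFS include/exclude table.

    Returns (may_connect, will_be_decommissioned).
    An empty include set means every node is implicitly included.
    """
    included = (not include) or (host in include)
    if not included:
        return (False, False)   # not in include: connection refused
    if host in exclude:
        return (True, True)     # in both: connects, then gets decommissioned
    return (True, False)        # in include only: normal operation

# Not listed in a non-empty include file: refused.
assert datanode_may_connect("hadoop7", {"hadoop5", "hadoop6"}, set()) == (False, False)
# Listed in both include and exclude: connects and is decommissioned.
assert datanode_may_connect("hadoop7", {"hadoop7"}, {"hadoop7"}) == (True, True)
# Empty include file: all nodes may connect.
assert datanode_may_connect("hadoop7", set(), set()) == (True, False)
```

Note this models HDFS only; for YARN the second row differs, since a nodemanager must be in the include file regardless of the exclude file.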

5. Troubleshooting

When removing a datanode, the node may get stuck in the Decommission in progress state. In this test environment the cause was the replication factor: no replica count had been set, Hadoop's default replication factor is 3, and the cluster has only 3 datanodes, so the replicas on the decommissioning node have nowhere else to go.

Setting the replication factor to a value smaller than the number of datanodes resolves this:

[hadoop@10-124-147-22 hadoop]$ grep dfs\.replication -C3 /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<!-- replication factor -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
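The underlying constraint is simple arithmetic: after the node leaves, the remaining live datanodes must still be able to hold every replica. A minimal sketch of that check (function name and signature are my own):

```python
def decommission_can_finish(replication, live_datanodes, removing):
    """True if the replication factor still fits on the datanodes
    that remain after decommissioning `removing` nodes."""
    return replication <= live_datanodes - removing

# 3 datanodes with replication factor 3: removing one node can never finish,
# which is exactly the stuck "Decommission in progress" case above.
assert decommission_can_finish(3, 3, 1) is False
# With replication set to 1 (as configured above), removing one of three is fine.
assert decommission_can_finish(1, 3, 1) is True
```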

