Hadoop-3.1.2 Cluster Installation
After several days spent working through the Hadoop-3.1.2 installation, the procedure has been consolidated into the steps below.
The cluster consists of three machines: one master and two slaves, slaver01 and slaver02. Details:
| IP | Hostname | Role |
| --- | --- | --- |
| 192.168.1.20 | master | namenode |
| 192.168.1.21 | slaver01 | datanode1 |
| 192.168.1.22 | slaver02 | datanode2 |
The installation is divided into three stages:
Stage 1: Network configuration
1. Configure the IP address
vi /etc/sysconfig/network-scripts/ifcfg-ens33   # the ifcfg-ens33 file name differs between systems and network interfaces

Edit the file as follows:
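A minimal static-IP sketch for the master node, assuming a typical 192.168.1.0/24 network (the netmask, gateway, and DNS values are assumptions; adjust them to your environment, and use 192.168.1.21 on slaver01 and 192.168.1.22 on slaver02):
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1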

2. Restart the network service
systemctl restart network

3. Stop and disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service

4. Set the hostname
hostnamectl set-hostname master

5. Edit the hosts file
vi /etc/hosts

Add the following entries:
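Based on the address table above, the file should map each IP to its hostname:
192.168.1.20 master
192.168.1.21 slaver01
192.168.1.22 slaver02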

6. Reboot
reboot
Stage 2: Install JDK 1.8 and Hadoop-3.1.2
1. Upload the JDK 1.8 and Hadoop-3.1.2 archives to /opt using Xftp

2. Extract JDK 1.8 into /opt
tar -zxvf /opt/jdk-8u231-linux-x64.tar.gz -C /opt/

3. Extract hadoop-3.1.2 into /opt
tar -zxvf /opt/hadoop-3.1.2.tar.gz -C /opt/

4. Change ownership of the /opt directory to the root user
chown -R root /opt/
5. Add the JDK and Hadoop installation paths to the environment variables
vi /etc/profile

Append the following:
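A minimal sketch, assuming the JDK archive extracts to /opt/jdk1.8.0_231 (check the actual directory name after extraction):
export JAVA_HOME=/opt/jdk1.8.0_231
export HADOOP_HOME=/opt/hadoop-3.1.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin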

6. Apply the configuration
source /etc/profile

7. Verify that the JDK and Hadoop were installed successfully
java -version
hadoop version

8. Set the Java path in the Hadoop environment script
vi /opt/hadoop-3.1.2/etc/hadoop/hadoop-env.sh
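Assuming the same JDK directory used in /etc/profile above, set JAVA_HOME explicitly in hadoop-env.sh:
export JAVA_HOME=/opt/jdk1.8.0_231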

9. Configure the Hadoop XML files
cd /opt/hadoop-3.1.2/etc/hadoop
(1) core-site.xml

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/root/hadoop/tmp</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
</configuration>
(2) hdfs-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hadoop-3.1.2/dfs/data</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/hadoop-3.1.2/dfs/name</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
(3) mapred-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>4096</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>8192</value>
</property>
</configuration>
(4) yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8035</value>
</property>
</configuration>
10. Edit start-dfs.sh and stop-dfs.sh
cd /opt/hadoop-3.1.2/sbin
Add the following at the top of both scripts:
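A sketch of the usual additions when the daemons are run as the root user (Hadoop 3 refuses to start the HDFS daemons as root unless these user variables are defined):
HDFS_NAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root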

11. Edit start-yarn.sh and stop-yarn.sh, adding the following at the top of both scripts:
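Likewise, assuming the YARN daemons also run as root:
YARN_RESOURCEMANAGER_USER=root
YARN_NODEMANAGER_USER=root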

12. Configure the workers file
vi /opt/hadoop-3.1.2/etc/hadoop/workers
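The file should list the datanode hosts from the table above, one per line:
slaver01
slaver02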

Stage 3: Clone the slave nodes and configure SSH
1. Clone the configured master VM to create slaver01 and slaver02.
2. Change the IP address and hostname on slaver01 and slaver02.
SSH configuration
(1) Generate an SSH key pair on master, slaver01, and slaver02:
ssh-keygen -t rsa
(2) Exchange public keys between the nodes.
1. On master
# master to slaver01
sudo scp ~/.ssh/id_rsa.pub root@slaver01:~/master.pub
ssh slaver01
cat ~/master.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
exit
ssh slaver01
exit
# master to slaver02
sudo scp ~/.ssh/id_rsa.pub root@slaver02:~/master.pub
ssh slaver02
cat ~/master.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
exit
ssh slaver02
exit
# master to master
ssh master
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
exit
2. On slaver01
# slaver01 to master
sudo scp ~/.ssh/id_rsa.pub root@master:~/slaver01.pub
ssh master
cat ~/slaver01.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
exit
ssh master
exit
# slaver01 to slaver02
sudo scp ~/.ssh/id_rsa.pub root@slaver02:~/slaver01.pub
ssh slaver02
cat ~/slaver01.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
exit
ssh slaver02
exit
# slaver01 to slaver01
ssh slaver01
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
exit
3. On slaver02
# slaver02 to master
sudo scp ~/.ssh/id_rsa.pub root@master:~/slaver02.pub
ssh master
cat ~/slaver02.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
exit
ssh master
exit
# slaver02 to slaver01
sudo scp ~/.ssh/id_rsa.pub root@slaver01:~/slaver02.pub
ssh slaver01
cat ~/slaver02.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
exit
ssh slaver01
exit
# slaver02 to slaver02
ssh slaver02
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
exit
3. Format the NameNode (run on master)
hdfs namenode -format
4. Start the cluster
start-all.sh
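As a quick sanity check, run jps on each node: master should show NameNode, SecondaryNameNode, and ResourceManager, while slaver01 and slaver02 should show DataNode and NodeManager. The HDFS web UI is served at http://master:9870 and the YARN UI at http://master:8088.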
