1. Environment Overview
Software download link: https://pan.baidu.com/s/1hpcXUSJe85EsU9ara48MsQ
Servers: CentOS 6.8, with 2 namenode machines and 3 datanode machines
ZooKeeper cluster address: 192.168.67.11:2181,192.168.67.12:2181
JDK: jdk-8u191-linux-x64.tar.gz
Hadoop: hadoop-3.1.1.tar.gz
Node roles:
| Node | IP | namenode | datanode | resourcemanager | journalnode |
| --- | --- | --- | --- | --- | --- |
| namenode1 | 192.168.67.101 | √ | | √ | √ |
| namenode2 | 192.168.67.102 | √ | | √ | √ |
| datanode1 | 192.168.67.103 | | √ | | √ |
| datanode2 | 192.168.67.104 | | √ | | √ |
| datanode3 | 192.168.67.105 | | √ | | √ |
2. Configure passwordless SSH login
2.1 Run ssh-keygen -t rsa on every machine.
2.2 Collect the public keys (~/.ssh/id_rsa.pub) from all machines into a single authorized_keys file and distribute that file to every machine (see the sketch after this list).
2.3 Set permissions: chmod 600 ~/.ssh/authorized_keys
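A minimal sketch of step 2.2, run as root from namenode1; it assumes password-based SSH still works at this point and uses the raw IPs from section 1, since /etc/hosts is not configured until the next step:

# gather every machine's public key into one authorized_keys file
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
for host in 192.168.67.102 192.168.67.103 192.168.67.104 192.168.67.105; do
    ssh $host "cat ~/.ssh/id_rsa.pub" >> ~/.ssh/authorized_keys
done
# push the combined file back out to every machine
for host in 192.168.67.102 192.168.67.103 192.168.67.104 192.168.67.105; do
    scp ~/.ssh/authorized_keys $host:~/.ssh/authorized_keys
done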
3. Configure hosts:

vim /etc/hosts
# append the following entries
192.168.67.101 namenode1
192.168.67.102 namenode2
192.168.67.103 datanode1
192.168.67.104 datanode2
192.168.67.105 datanode3

# distribute the hosts file to the other machines
scp /etc/hosts namenode2:/etc/hosts
scp /etc/hosts datanode1:/etc/hosts
scp /etc/hosts datanode2:/etc/hosts
scp /etc/hosts datanode3:/etc/hosts
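To confirm the names now resolve, a quick loop on each machine:

for host in namenode1 namenode2 datanode1 datanode2 datanode3; do
    ping -c 1 $host
done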
4. Disable the firewall
service iptables stop
chkconfig iptables off
5. Install the JDK

tar -zxvf /usr/local/soft/jdk-8u191-linux-x64.tar.gz -C /usr/local/
vim /etc/profile
# append the JDK environment variables
export JAVA_HOME=/usr/local/jdk1.8.0_191
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

Apply the changes: source /etc/profile
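To confirm the JDK is active:

java -version
# should report java version "1.8.0_191"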
6. Install Hadoop

tar -zxvf /usr/local/soft/hadoop-3.1.1.tar.gz -C /usr/local/
vim /etc/profile
# append the Hadoop environment variables
export HADOOP_HOME=/usr/local/hadoop-3.1.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib

Apply the changes: source /etc/profile
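To confirm the Hadoop binaries are on the PATH:

hadoop version
# the first line should report Hadoop 3.1.1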
# edit start-dfs.sh and stop-dfs.sh; Hadoop 3 refuses to start HDFS daemons as root
# unless these *_USER variables are defined
vim /usr/local/hadoop-3.1.1/sbin/start-dfs.sh
vim /usr/local/hadoop-3.1.1/sbin/stop-dfs.sh
# add the startup users
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=root
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root
# edit start-yarn.sh and stop-yarn.sh the same way
vim /usr/local/hadoop-3.1.1/sbin/start-yarn.sh
vim /usr/local/hadoop-3.1.1/sbin/stop-yarn.sh
# add the startup users
YARN_RESOURCEMANAGER_USER=root
HDFS_DATANODE_SECURE_USER=root
YARN_NODEMANAGER_USER=root
vim /usr/local/hadoop-3.1.1/etc/hadoop/hadoop-env.sh
# append
export JAVA_HOME=/usr/local/jdk1.8.0_191
export HADOOP_HOME=/usr/local/hadoop-3.1.1
# replace the contents of the workers file with the datanode hostnames
vim /usr/local/hadoop-3.1.1/etc/hadoop/workers
datanode1
datanode2
datanode3
vim /usr/local/hadoop-3.1.1/etc/hadoop/core-site.xml
# change to the following configuration
<configuration>
  <!-- set the default HDFS nameservice to mycluster -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster/</value>
  </property>
  <!-- Hadoop temp directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop-3.1.1/hdfs/temp</value>
  </property>
  <!-- ZooKeeper quorum; must match the cluster address from section 1 -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>192.168.67.11:2181,192.168.67.12:2181</value>
  </property>
</configuration>
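Before continuing, it is worth checking that the ZooKeeper ensemble is reachable. A simple probe using the "ruok" four-letter command, assuming nc is installed (the command is enabled by default in ZooKeeper 3.4):

echo ruok | nc 192.168.67.11 2181   # a healthy server replies "imok"
echo ruok | nc 192.168.67.12 2181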
vim /usr/local/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
# change to the following configuration
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop-3.1.1/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop-3.1.1/hdfs/data</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>namenode1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>namenode2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>namenode1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>namenode2:50070</value>
  </property>
  <!-- enable automatic HA failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- journalnode configuration -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://namenode1:8485;namenode2:8485;datanode1:8485;datanode2:8485;datanode3:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- During a failover, the standby must "fence" the old active NameNode. The sshfence
       method would SSH to the active node and use fuser to kill its NameNode process;
       here shell(/bin/true) is used instead, which always reports success so that
       failover is never blocked when the failed node is unreachable. -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
  </property>
  <!-- SSH private key (only used by the sshfence method) -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <!-- SSH connect timeout -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
  <!-- journalnode edits storage directory -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/hadoop-3.1.1/hdfs/journaldata</value>
  </property>
  <property>
    <name>dfs.qjournal.write-txns.timeout.ms</name>
    <value>60000</value>
  </property>
</configuration>
vim /usr/local/hadoop-3.1.1/etc/hadoop/mapred-site.xml
# change to the following configuration
<configuration>
  <!-- run MapReduce on the YARN framework -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
vim /usr/local/hadoop-3.1.1/etc/hadoop/yarn-site.xml
# change to the following configuration
<configuration>
  <!-- Site specific YARN configuration properties -->
  <!-- enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- RM cluster id -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <!-- RM ids -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- hostnames of the two RMs -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>namenode1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>namenode2</value>
  </property>
  <!-- ZooKeeper cluster address -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>192.168.67.11:2181,192.168.67.12:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
# distribute the modified files to the other 4 servers (see the loop below)
/usr/local/hadoop-3.1.1/sbin/start-dfs.sh
/usr/local/hadoop-3.1.1/sbin/stop-dfs.sh
/usr/local/hadoop-3.1.1/sbin/start-yarn.sh
/usr/local/hadoop-3.1.1/sbin/stop-yarn.sh
/usr/local/hadoop-3.1.1/etc/hadoop/hadoop-env.sh
/usr/local/hadoop-3.1.1/etc/hadoop/workers
/usr/local/hadoop-3.1.1/etc/hadoop/core-site.xml
/usr/local/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
/usr/local/hadoop-3.1.1/etc/hadoop/mapred-site.xml
/usr/local/hadoop-3.1.1/etc/hadoop/yarn-site.xml
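A sketch of the distribution as one loop, assuming all edits were made on namenode1:

for host in namenode2 datanode1 datanode2 datanode3; do
    scp /usr/local/hadoop-3.1.1/sbin/{start-dfs.sh,stop-dfs.sh,start-yarn.sh,stop-yarn.sh} \
        $host:/usr/local/hadoop-3.1.1/sbin/
    scp /usr/local/hadoop-3.1.1/etc/hadoop/{hadoop-env.sh,workers,core-site.xml,hdfs-site.xml,mapred-site.xml,yarn-site.xml} \
        $host:/usr/local/hadoop-3.1.1/etc/hadoop/
done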
First-time startup sequence:
1. Make sure the ZooKeeper servers configured above are already running.
2. Start a journalnode on every journalnode machine: hadoop-daemon.sh start journalnode
3. On namenode1, format the ZKFC state in ZooKeeper: hdfs zkfc -formatZK
4. On namenode1, format the primary NameNode: hdfs namenode -format
5. Start the primary NameNode on namenode1: hadoop-daemon.sh start namenode
6. On namenode2, bootstrap the standby from the formatted primary: hdfs namenode -bootstrapStandby
7. Start the whole cluster: start-all.sh
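Once start-all.sh completes, jps on each machine should show the roles from the table in section 1:

jps
# namenode1 / namenode2: NameNode, DFSZKFailoverController, ResourceManager, JournalNode
# datanode1-3:           DataNode, NodeManager, JournalNode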
7. Verification
7.1 Web UIs:
http://192.168.67.101:50070/
http://192.168.67.102:50070/
http://192.168.67.101:8088/
http://192.168.67.102:8088/
7.2 Shut down the server hosting the active namenode and verify that the other namenode's state changes from standby to active; the admin commands below report the state directly.
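hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
# one of each pair should report "active", the other "standby"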
