Hadoop HA (High-Availability) Cluster Setup (2.7.2)


1. Cluster Plan

Hostname  IP              Installed software      Processes
drguo1    192.168.80.149  jdk, hadoop             NameNode, DFSZKFailoverController (zkfc), ResourceManager
drguo2    192.168.80.150  jdk, hadoop             NameNode, DFSZKFailoverController (zkfc), ResourceManager
drguo3    192.168.80.151  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
drguo4    192.168.80.152  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
drguo5    192.168.80.153  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain

2. Prerequisites

Prepare five machines: set static IPs, hostnames, and hostname-to-IP mappings; disable the firewall; install the JDK and configure its environment variables (see http://blog.csdn.net/dr_guo/article/details/50886667 if unsure); create the user and group; and set up passwordless SSH login (see http://blog.csdn.net/dr_guo/article/details/50967442 if you hit errors).
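The passwordless-SSH step can be sketched as follows. The user guo and the drguoN hostnames are assumptions taken from the cluster plan above; see the linked post for troubleshooting. The demo writes the key into a temp directory so it is safe to run as-is; use the default ~/.ssh/id_rsa path for real.

```shell
# Generate an RSA key pair with an empty passphrase.
# (Demo uses a temp dir; for real use, write to ~/.ssh/id_rsa.)
keydir=$(mktemp -d)
ssh-keygen -t rsa -P '' -f "$keydir/id_rsa" -q
ls "$keydir"        # lists id_rsa and id_rsa.pub

# Install the public key on every other node (shown as a dry run;
# drop the echo to actually run ssh-copy-id):
for host in drguo2 drguo3 drguo4 drguo5; do
  echo ssh-copy-id "guo@$host"
done
```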


Note: comment out the 127.0.1.1 line in /etc/hosts. Otherwise jps will show a DataNode process, but the web UI will report 0 live nodes.


After commenting it out, everything worked. Apparently some people didn't comment it out and it still worked; I have no idea why 0.0
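For reference, /etc/hosts on every node might look like this, using the IPs from the plan above (the commented-out 127.0.1.1 line is the one that causes the live-nodes problem):

```
127.0.0.1   localhost
#127.0.1.1  drguo1
192.168.80.149  drguo1
192.168.80.150  drguo2
192.168.80.151  drguo3
192.168.80.152  drguo4
192.168.80.153  drguo5
```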


3. Set Up the ZooKeeper Cluster (drguo3/drguo4/drguo5)

See: ZooKeeper fully distributed cluster setup.
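A minimal zoo.cfg along the lines of that article might look like the sketch below. The dataDir path and timing values are assumptions, not taken from this post; each node's myid file (1/2/3 under dataDir) must match its server.N entry, and port 2181 matches the quorum addresses used in the Hadoop configs later.

```
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/opt/zookeeper-3.4.8/data
clientPort=2181
server.1=drguo3:2888:3888
server.2=drguo4:2888:3888
server.3=drguo5:2888:3888
```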

4. Build the Hadoop HA Cluster

Download the latest Hadoop from the official site (http://apache.opencas.org/hadoop/common/stable/); at the time of writing the latest is 2.7.2. After downloading, move it to /opt/Hadoop:

guo@guo:~/下載$ mv ./hadoop-2.7.2.tar.gz /opt/Hadoop/
mv: cannot create regular file '/opt/Hadoop/hadoop-2.7.2.tar.gz': Permission denied
guo@guo:~/下載$ su root
password:
root@guo:/home/guo/下載# mv ./hadoop-2.7.2.tar.gz /opt/Hadoop/
Extract it:
guo@guo:/opt/Hadoop$ sudo tar -zxf hadoop-2.7.2.tar.gz
[sudo] password for guo:
When extracting the JDK I used tar -zxvf; the v flag just prints the extraction progress, so you can omit it if you don't want to watch it.

Change the owner (user:group) of the opt directory: I simply made guo the owner and group of the whole /opt directory. The details were covered in the ZooKeeper fully distributed cluster setup post.

root@guo:/opt/Hadoop# chown -R guo:guo /opt

Set the environment variables:

guo@guo:/opt/Hadoop$ sudo gedit /etc/profile
Append the following at the end (this lets you run the scripts under bin/sbin without cd-ing into those directories):
#hadoop
export HADOOP_HOME=/opt/Hadoop/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/sbin
export PATH=$PATH:$HADOOP_HOME/bin
Then reload the configuration:
guo@guo:/opt/Hadoop$ source /etc/profile

Edit hadoop-env.sh under /opt/Hadoop/hadoop-2.7.2/etc/hadoop:

guo@guo:/opt/Hadoop$ cd hadoop-2.7.2
guo@guo:/opt/Hadoop/hadoop-2.7.2$ cd etc/hadoop/
guo@guo:/opt/Hadoop/hadoop-2.7.2/etc/hadoop$ sudo gedit ./hadoop-env.sh
In the file:
export JAVA_HOME=${JAVA_HOME}   # change this to your JDK path, as below
export JAVA_HOME=/opt/Java/jdk1.8.0_73
Then reload the file:
guo@guo:/opt/Hadoop/hadoop-2.7.2/etc/hadoop$ source ./hadoop-env.sh
The configuration up to this point is the same as standalone mode, so I just copied it over.

Note: the comments in the XML below are there only for your benefit. Delete them all when you copy-paste!!
Edit core-site.xml:
<configuration>
<!-- Set the HDFS nameservice to ns1 -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://ns1/</value>
</property>
<!-- Hadoop temporary directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/Hadoop/hadoop-2.7.2/tmp</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>drguo3:2181,drguo4:2181,drguo5:2181</value>
</property>
</configuration>
Edit hdfs-site.xml:
<configuration>
	<!-- The HDFS nameservice is ns1; must match core-site.xml -->
	<property>
		<name>dfs.nameservices</name>
		<value>ns1</value>
	</property>
	<!-- ns1 has two NameNodes: nn1 and nn2 -->
	<property>
		<name>dfs.ha.namenodes.ns1</name>
		<value>nn1,nn2</value>
	</property>
	<!-- RPC address of nn1 -->
	<property>
		<name>dfs.namenode.rpc-address.ns1.nn1</name>
		<value>drguo1:9000</value>
	</property>
	<!-- HTTP address of nn1 -->
	<property>
		<name>dfs.namenode.http-address.ns1.nn1</name>
		<value>drguo1:50070</value>
	</property>
	<!-- RPC address of nn2 -->
	<property>
		<name>dfs.namenode.rpc-address.ns1.nn2</name>
		<value>drguo2:9000</value>
	</property>
	<!-- HTTP address of nn2 -->
	<property>
		<name>dfs.namenode.http-address.ns1.nn2</name>
		<value>drguo2:50070</value>
	</property>
	<!-- Where the NameNode metadata (edit log) is stored on the JournalNodes -->
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://drguo3:8485;drguo4:8485;drguo5:8485/ns1</value>
	</property>
	<!-- Where each JournalNode stores data on its local disk -->
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/opt/Hadoop/hadoop-2.7.2/journaldata</value>
	</property>
	<!-- Enable automatic NameNode failover -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<!-- Failover proxy implementation class -->
	<property>
		<name>dfs.client.failover.proxy.provider.ns1</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<!-- Fencing methods; to configure more than one, separate them with newlines (one method per line) -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>
			sshfence
			shell(/bin/true)
		</value>
	</property>
	<!-- The sshfence method requires passwordless SSH -->
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/guo/.ssh/id_rsa</value>
	</property>
	<!-- Connection timeout (in ms) for the sshfence method -->
	<property>
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>30000</value>
	</property>
</configuration>
Rename mapred-site.xml.template to mapred-site.xml, then edit mapred-site.xml:
<configuration>
	<!-- Use YARN as the MapReduce framework -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
</configuration>
Edit yarn-site.xml:
<configuration>
	<!-- Enable ResourceManager HA -->
	<property>
	   <name>yarn.resourcemanager.ha.enabled</name>
	   <value>true</value>
	</property>
	<!-- Cluster id for the RMs -->
	<property>
	   <name>yarn.resourcemanager.cluster-id</name>
	   <value>yrc</value>
	</property>
	<!-- Logical ids of the RMs -->
	<property>
	   <name>yarn.resourcemanager.ha.rm-ids</name>
	   <value>rm1,rm2</value>
	</property>
	<!-- Hostname of each RM -->
	<property>
	   <name>yarn.resourcemanager.hostname.rm1</name>
	   <value>drguo1</value>
	</property>
	<property>
	   <name>yarn.resourcemanager.hostname.rm2</name>
	   <value>drguo2</value>
	</property>
	<!-- ZooKeeper cluster addresses -->
	<property>
	   <name>yarn.resourcemanager.zk-address</name>
	   <value>drguo3:2181,drguo4:2181,drguo5:2181</value>
	</property>
	<property>
	   <name>yarn.nodemanager.aux-services</name>
	   <value>mapreduce_shuffle</value>
	</property>
</configuration>
Edit slaves:
drguo3
drguo4
drguo5
Copy the whole Hadoop directory to drguo2/3/4/5. Before copying, delete the doc directory under share (the documentation isn't needed); the copy will go faster.
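The copy can be scripted; a minimal sketch, with hostnames and paths taken from the plan above. The echo makes it a dry run that just prints the commands; remove the echoes to actually delete the docs and copy.

```shell
# Dry run: print the commands that would distribute the Hadoop tree.
echo rm -rf /opt/Hadoop/hadoop-2.7.2/share/doc   # docs aren't needed on the other nodes
for host in drguo2 drguo3 drguo4 drguo5; do
  echo scp -r /opt/Hadoop/hadoop-2.7.2 "$host":/opt/Hadoop/
done
```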
5. Start the ZooKeeper Cluster (run on drguo3, drguo4, and drguo5)
guo@drguo3:~$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
guo@drguo3:~$ jps
2005 Jps
1994 QuorumPeerMain
guo@drguo3:~$ ssh drguo4
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

Last login: Fri Mar 25 14:04:43 2016 from 192.168.80.151
guo@drguo4:~$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
guo@drguo4:~$ jps
1977 Jps
1966 QuorumPeerMain
guo@drguo4:~$ exit
logout
Connection to drguo4 closed.
guo@drguo3:~$ ssh drguo5
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

Last login: Fri Mar 25 14:04:56 2016 from 192.168.80.151
guo@drguo5:~$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
guo@drguo5:~$ jps
2041 Jps
2030 QuorumPeerMain
guo@drguo5:~$ exit
logout
Connection to drguo5 closed.
guo@drguo3:~$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: leader
6. Start the JournalNodes (on drguo3, drguo4, and drguo5). Note: only the very first startup needs this; afterwards, starting HDFS will start the JournalNodes as well.
guo@drguo3:~$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/Hadoop/hadoop-2.7.2/logs/hadoop-guo-journalnode-drguo3.out
guo@drguo3:~$ jps
2052 Jps
2020 JournalNode
1963 QuorumPeerMain
guo@drguo3:~$ ssh drguo4
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

Last login: Fri Mar 25 00:09:08 2016 from 192.168.80.149
guo@drguo4:~$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/Hadoop/hadoop-2.7.2/logs/hadoop-guo-journalnode-drguo4.out
guo@drguo4:~$ jps
2103 Jps
2071 JournalNode
1928 QuorumPeerMain
guo@drguo4:~$ exit
logout
Connection to drguo4 closed.
guo@drguo3:~$ ssh drguo5
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

Last login: Thu Mar 24 23:52:17 2016 from 192.168.80.152
guo@drguo5:~$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/Hadoop/hadoop-2.7.2/logs/hadoop-guo-journalnode-drguo5.out
guo@drguo5:~$ jps
2276 JournalNode
2308 Jps
1959 QuorumPeerMain
guo@drguo5:~$ exit
logout
Connection to drguo5 closed.
When starting on drguo4/5 I hit a problem: no JournalNode process appeared. The logs showed the Chinese comments in the config files were the cause; after deleting them all on drguo4/5 the problem was solved. The pinyin input method on drguo4/5 also stopped working, which was really annoying.

The VM images were all cloned from the same one; how did they end up different?
7. Format HDFS (run on drguo1)

guo@drguo1:/opt$ hdfs namenode -format
Trouble again, and again caused by the Chinese comments; I deleted them all on drguo1/2/3 as well, which fixed it.


Note: after formatting, copy the tmp directory to drguo2 (otherwise drguo2's NameNode won't start):
guo@drguo1:/opt/Hadoop/hadoop-2.7.2$ scp -r tmp/ drguo2:/opt/Hadoop/hadoop-2.7.2/
8. Format ZKFC (run on drguo1)
guo@drguo1:/opt$ hdfs zkfc -formatZK
9. Start HDFS (run on drguo1)
guo@drguo1:/opt$ start-dfs.sh 
10. Start YARN (run on drguo1)
guo@drguo1:/opt$ start-yarn.sh 
PS:
1. The ResourceManager on drguo2 must be started manually:
yarn-daemon.sh start resourcemanager
2. The NameNode and DataNode can also be started individually:
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
3. To force a NameNode from standby to active:
hdfs haadmin -transitionToActive nn1 --forcemanual
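Once everything is up, the HA state can be checked with the standard admin commands (run on drguo1; which node is active depends on the election, so the output will vary):

```
hdfs haadmin -getServiceState nn1    # prints "active" or "standby"
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```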
Done!!


Exactly as planned, right? 0.0

