Zookeeper + Hadoop Fully Distributed HA Deployment


1.   Environment Preparation

1.1. Server Planning

Name        IP
hadoop1     192.168.10.51/16
hadoop2     192.168.10.52/16
hadoop3     192.168.10.53/16

1.2. Node Deployment Planning

Hadoop1            Hadoop2            Hadoop3            Remarks
NameNode           NameNode           -                  -
ResourceManager    ResourceManager    -                  Set in yarn-site.xml; whichever hosts are configured there start ResourceManager
-                  NodeManager        NodeManager        Determined by the slaves file (see note below)
ZKFC               ZKFC               -                  -
-                  DataNode           DataNode           Determined by the slaves file (see note below)
JournalNode        JournalNode        JournalNode        -
ZooKeeper          ZooKeeper          ZooKeeper          -

Note: DataNode and NodeManager placement is determined by the slaves file (it contains localhost by default; delete that entry). Whichever hosts should run DataNode are listed in slaves. When HDFS is started, the configuration is read and the slaves file is scanned: a DataNode is started on every host listed there. When YARN is started, the slaves file is scanned in the same way: a NodeManager is started on every host listed.

1.3. Directory Planning

No.    Directory                        Path
1      All software                     /home/hadoop/app/
2      All data and log directories     /home/hadoop/data/

 

 

2.   Cluster Base Environment Preparation

2.1. Change the hostnames (cluster)

[root@localhost ~]# for i in `seq 1 3`;do sshpass -p zq@123 ssh -o StrictHostKeyChecking=no root@192.168.10.5$i "echo HOSTNAME=hadoop$i > /etc/sysconfig/network";done

[root@localhost .ssh]# for i in `seq 2 3`;do sshpass -p zq@123 ssh -o StrictHostKeyChecking=no root@192.168.10.5$i reboot;done
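Note: on CentOS 7 (which the systemctl commands later in this guide imply), the hostname in /etc/sysconfig/network is no longer honored; a sketch of the equivalent using hostnamectl, reusing the same sshpass password as above:

[root@localhost ~]# for i in `seq 1 3`;do sshpass -p zq@123 ssh -o StrictHostKeyChecking=no root@192.168.10.5$i hostnamectl set-hostname hadoop$i;done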

2.2. Update /etc/hosts (cluster)

 

[root@localhost ~]# for i in `seq 1 3 `;do echo 192.168.10.5$i  hadoop$i >> /etc/hosts;done

[root@localhost ~]# for i in `seq 2 3`;do scp /etc/hosts 192.168.10.5$i:/etc/hosts;done

2.3. Create the hadoop user (single node)

[root@localhost ~]# useradd -m hadoop -s /bin/bash
[root@localhost ~]# passwd hadoop
Changing password for user hadoop.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
 

2.4. Create the hadoop user (cluster)

[root@hadoop1 ~]# for i in `seq 2 3`;do ssh hadoop$i useradd -m hadoop -s /bin/bash;done

2.5. Set the hadoop user's password (cluster)

[root@hadoop1 ~]# for i in `seq 2 3`;do ssh hadoop$i  'echo hadoop|passwd --stdin hadoop';done

2.6. Grant hadoop administrator privileges

[root@localhost ~]# visudo  ### add the following line around line 98

 

## Allow root to run any commands anywhere

root    ALL=(ALL)       ALL

hadoop  ALL=(ALL)       ALL

 

2.7. Configure passwordless SSH for the root user

2.7.1.    On hadoop1 through hadoop3

[root@localhost ~]# ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

7f:45:11:68:e0:5d:27:0b:37:b1:db:b7:2d:e0:7e:ef root@localhost.localdomain

The key's randomart image is:

+--[ RSA 2048]----+

|          ....Oo.|

|         . .o+ B |

|          ... +  |

|             . o |

|        S   . o o|

|         . . o  +|

|          . o ...|

|           o  .. |

|            .. oE|

+-----------------+

2.7.2.    On hadoop1

[root@hadoop1 .ssh]# cat id_rsa.pub >> authorized_keys

 

2.7.3.    On hadoop2 and hadoop3

[root@hadoop2 .ssh]# ssh-copy-id -i hadoop1

[root@hadoop3 .ssh]# ssh-copy-id -i hadoop1

 

2.7.4.    On hadoop1

[root@hadoop1 .ssh]# chmod 600 authorized_keys

[root@hadoop1 .ssh]# for i in `seq 2 3`;do scp authorized_keys 192.168.10.5$i:/root/.ssh/authorized_keys ;done
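Passwordless root login from hadoop1 can now be confirmed with a quick loop (an optional convenience check):

[root@hadoop1 .ssh]# for i in `seq 1 3`;do ssh hadoop$i hostname;done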

 

2.8. Configure passwordless SSH for the hadoop user

2.8.1.    On hadoop1 through hadoop3

[hadoop@hadoop1 .ssh]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

b2:72:3a:58:a5:22:a6:f7:cd:44:2f:9f:1e:d3:07:d2 hadoop@hadoop1

The key's randomart image is:

+--[ RSA 2048]----+

|                 |

|                 |

|                 |

|      .  .       |

|     oo S E      |

|... o. + o .     |

|o. +. = + . .    |

|. o .B o + .     |

| . .o.o.+        |

+-----------------+

2.8.2.    On hadoop1

[hadoop@hadoop1 .ssh]$ cat id_rsa.pub >> authorized_keys

 

2.8.3.    On hadoop2 and hadoop3

[hadoop@hadoop2 .ssh]$ ssh-copy-id -i hadoop1

[hadoop@hadoop3 .ssh]$ ssh-copy-id -i hadoop1

 

2.8.4.    On hadoop1

[hadoop@hadoop1 .ssh]$ chmod 600 authorized_keys

[hadoop@hadoop1 .ssh]$ for i in `seq 2 3`;do scp authorized_keys hadoop$i:/home/hadoop/.ssh/authorized_keys;done

 

Note the file permissions:

chmod 600 .ssh/authorized_keys

chmod 700 .ssh
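The hadoop-user login can be verified the same way from hadoop1 (an optional quick check):

[hadoop@hadoop1 ~]$ for i in `seq 1 3`;do ssh hadoop$i hostname;done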

2.9. Java environment configuration

2.9.1.    On hadoop1

[root@hadoop1 soft]# ./javainstall.sh

[root@hadoop1 soft]# for i in `seq 2 3`;do scp /soft/jdk-* hadoop$i:/soft/ ;done

[root@hadoop1 soft]# for i in `seq 2 3`;do scp /soft/java* hadoop$i:/soft/ ;done

[root@hadoop1 soft]# for i in `seq 2 3`;do ssh hadoop$i sh /soft/javainstall.sh ;done

[root@hadoop1 soft]# for i in `seq 2 3`;do ssh hadoop$i 'source /etc/profile' ;done

 

[root@hadoop1 local]# for i in `seq 2 3`;do ssh hadoop$i 'export BASH_ENV=/etc/profile' ;done

[root@hadoop1 local]# for i in `seq 2 3`;do ssh hadoop$i java -version;done
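The contents of javainstall.sh are not listed here; a minimal sketch of such a script, assuming the JDK tarball in /soft is named jdk-8u181-linux-x64.tar.gz and the install prefix is /usr/local/java (matching the JAVA_HOME used below):

#!/bin/bash
# Hypothetical javainstall.sh: unpack the JDK and register it in /etc/profile.
set -e
mkdir -p /usr/local/java
tar -zxf /soft/jdk-8u181-linux-x64.tar.gz -C /usr/local/java
# Append the Java variables only once.
grep -q 'JAVA_HOME=/usr/local/java' /etc/profile || cat >> /etc/profile <<'EOF'
export JAVA_HOME=/usr/local/java/jdk1.8.0_181
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
EOF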

3.   ZooKeeper Installation

3.1. Create the installation directories

[hadoop@hadoop1 ~]$ for i in `seq 1 3`;do ssh hadoop$i 'mkdir -p /home/hadoop/app';done

[hadoop@hadoop1 ~]$ for i in `seq 1 3`;do ssh hadoop$i 'mkdir -p /home/hadoop/data';done

 

 

3.2. Configuration on hadoop1

 

 

[hadoop@hadoop1 ~]$ tar -zxvf /soft/zookeeper-3.4.10.tar.gz -C /home/hadoop/app/

[hadoop@hadoop1 ~]$ mv /home/hadoop/app/zookeeper-3.4.10 /home/hadoop/app/zookeeper

[hadoop@hadoop1 ~]$ cd /home/hadoop/app/zookeeper/conf/

[hadoop@hadoop1 conf]$ mv zoo_sample.cfg zoo.cfg


[hadoop@hadoop1 conf]$ cat zoo.cfg

# The number of milliseconds of each tick

tickTime=2000

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just

# example sakes.

dataDir=/home/hadoop/app/zookeeper/zkdata

dataLogDir=/home/hadoop/app/zookeeper/zkdatalog

# the port at which the clients will connect

clientPort=2181

# the maximum number of client connections.

# increase this if you need to handle more clients

#maxClientCnxns=60

#

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=3

# Purge task interval in hours

# Set to "0" to disable auto purge feature

#autopurge.purgeInterval=1

#server.<server id>=<hostname>:<port for leader-follower sync and communication>:<leader election port>

server.1=hadoop1:2888:3888

server.2=hadoop2:2888:3888

server.3=hadoop3:2888:3888

 

3.3. Copy the ZooKeeper installation directory to the other nodes

[hadoop@hadoop1 ~]$ for i in `seq 2 3`;do scp -r /home/hadoop/app/zookeeper hadoop$i:/home/hadoop/app/;done

3.4. Create the data directories on all nodes

[hadoop@hadoop1 ~]$ for i in `seq 1 3`;do ssh hadoop$i 'mkdir -p /home/hadoop/app/zookeeper/zkdata';done

[hadoop@hadoop1 ~]$ for i in `seq 1 3`;do ssh hadoop$i 'mkdir -p /home/hadoop/app/zookeeper/zkdatalog';done

[hadoop@hadoop1 ~]$ for i in `seq 1 3`;do ssh hadoop$i 'touch /home/hadoop/app/zookeeper/zkdata/myid';done

3.5. Set myid on all nodes: on hadoop1, hadoop2, and hadoop3 respectively, go into the zkdata directory and create the file myid containing 1, 2, and 3.

[hadoop@hadoop1 ~]$ cd /home/hadoop/app/zookeeper/zkdata

[hadoop@hadoop1 zkdata]$ echo 1 > myid
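The myid files on all three nodes can also be written in a single loop from hadoop1 (a convenience sketch):

[hadoop@hadoop1 ~]$ for i in `seq 1 3`;do ssh hadoop$i "echo $i > /home/hadoop/app/zookeeper/zkdata/myid";done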

 

3.6. Configure the ZooKeeper environment variables on all nodes

[hadoop@hadoop1 ~]$ cat /etc/profile

export JAVA_HOME=/usr/local/java/jdk1.8.0_181

export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper

export  JRE_HOME=${JAVA_HOME}/jre

export  CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

export PATH=${JAVA_HOME}/bin:$ZOOKEEPER_HOME/bin:$PATH
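These export lines must be present in /etc/profile on every node (editing it requires root). From hadoop1 the updated file can be pushed out the same way it is done for the Hadoop variables later on:

[root@hadoop1 ~]# for i in `seq 2 3`;do scp /etc/profile hadoop$i:/etc/profile;done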

3.7. Start ZooKeeper on all nodes

[hadoop@hadoop1 ~]$ cd /home/hadoop/app/zookeeper/bin/

[hadoop@hadoop1 bin]$ pwd

/home/hadoop/app/zookeeper/bin

[hadoop@hadoop1 bin]$ ll

total 172

-rwxr-xr-x 1 hadoop hadoop    232 Mar 23  2017 README.txt

-rwxr-xr-x 1 hadoop hadoop   1937 Mar 23  2017 zkCleanup.sh

-rwxr-xr-x 1 hadoop hadoop   1056 Mar 23  2017 zkCli.cmd

-rwxr-xr-x 1 hadoop hadoop   1534 Mar 23  2017 zkCli.sh

-rwxr-xr-x 1 hadoop hadoop   1628 Mar 23  2017 zkEnv.cmd

-rwxr-xr-x 1 hadoop hadoop   2696 Mar 23  2017 zkEnv.sh

-rwxr-xr-x 1 hadoop hadoop   1089 Mar 23  2017 zkServer.cmd

-rwxr-xr-x 1 hadoop hadoop   6773 Mar 23  2017 zkServer.sh

-rw-rw-r-- 1 hadoop hadoop 138648 Mar 26 02:57 zookeeper.out

[hadoop@hadoop1 bin]$ ./zkServer.sh start

[hadoop@hadoop1 bin]$ ./zkServer.sh status     ### check the status

 

3.8. Resolving the status error

[hadoop@hadoop1 bin]$ ./zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg

Error contacting service. It is probably not running.

For this error, check the zookeeper.out log (the .out file is written to whichever directory the start command was run from).

It is usually caused by the firewall not having been disabled.

Log in as root, check the firewall status, and disable the firewall:

[root@hadoop1 ~]# systemctl status firewalld.service

 firewalld.service - firewalld - dynamic firewall daemon

   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)

   Active: active (running) since Mon 2019-03-25 02:13:01 EDT; 24h ago

 Main PID: 727 (firewalld)

   CGroup: /system.slice/firewalld.service

           └─727 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

 

Mar 25 02:12:59 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon...

Mar 25 02:13:01 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.

[root@hadoop1 ~]#

[root@hadoop1 ~]# systemctl stop firewalld.service

[root@hadoop1 ~]# systemctl disable firewalld.service

Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

3.9. Verify the status

[hadoop@hadoop1 bin]$ ./zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg

Mode: follower

 

[hadoop@hadoop2 ~]$ zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg

Mode: leader

 

[hadoop@hadoop3 ~]$ zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg

Mode: follower

 

4.   Hadoop Installation

4.1. Extract the installation package on hadoop1

[hadoop@hadoop1 soft]$ tar -zxvf hadoop-2.6.4.tar.gz -C /home/hadoop/app/

[hadoop@hadoop1 soft]$ mv /home/hadoop/app/hadoop-2.6.4 /home/hadoop/app/hadoop

 

4.2. Modify the configuration files on hadoop1

Go to the directory /home/hadoop/app/hadoop/etc/hadoop:

[hadoop@hadoop1 hadoop]$ pwd

/home/hadoop/app/hadoop/etc/hadoop

[hadoop@hadoop1 hadoop]$ ll

total 152

-rw-r--r-- 1 hadoop hadoop  4436 Mar  7  2016 capacity-scheduler.xml

-rw-r--r-- 1 hadoop hadoop  1335 Mar  7  2016 configuration.xsl

-rw-r--r-- 1 hadoop hadoop   318 Mar  7  2016 container-executor.cfg

-rw-r--r-- 1 hadoop hadoop   774 Mar  7  2016 core-site.xml

-rw-r--r-- 1 hadoop hadoop  3670 Mar  7  2016 hadoop-env.cmd

-rw-r--r-- 1 hadoop hadoop  4224 Mar  7  2016 hadoop-env.sh

-rw-r--r-- 1 hadoop hadoop  2598 Mar  7  2016 hadoop-metrics2.properties

-rw-r--r-- 1 hadoop hadoop  2490 Mar  7  2016 hadoop-metrics.properties

-rw-r--r-- 1 hadoop hadoop  9683 Mar  7  2016 hadoop-policy.xml

-rw-r--r-- 1 hadoop hadoop   775 Mar  7  2016 hdfs-site.xml

-rw-r--r-- 1 hadoop hadoop  1449 Mar  7  2016 httpfs-env.sh

-rw-r--r-- 1 hadoop hadoop  1657 Mar  7  2016 httpfs-log4j.properties

-rw-r--r-- 1 hadoop hadoop    21 Mar  7  2016 httpfs-signature.secret

-rw-r--r-- 1 hadoop hadoop   620 Mar  7  2016 httpfs-site.xml

-rw-r--r-- 1 hadoop hadoop  3523 Mar  7  2016 kms-acls.xml

-rw-r--r-- 1 hadoop hadoop  1325 Mar  7  2016 kms-env.sh

-rw-r--r-- 1 hadoop hadoop  1631 Mar  7  2016 kms-log4j.properties

-rw-r--r-- 1 hadoop hadoop  5511 Mar  7  2016 kms-site.xml

-rw-r--r-- 1 hadoop hadoop 11291 Mar  7  2016 log4j.properties

-rw-r--r-- 1 hadoop hadoop   938 Mar  7  2016 mapred-env.cmd

-rw-r--r-- 1 hadoop hadoop  1383 Mar  7  2016 mapred-env.sh

-rw-r--r-- 1 hadoop hadoop  4113 Mar  7  2016 mapred-queues.xml.template

-rw-r--r-- 1 hadoop hadoop   758 Mar  7  2016 mapred-site.xml.template

-rw-r--r-- 1 hadoop hadoop    10 Mar  7  2016 slaves

-rw-r--r-- 1 hadoop hadoop  2316 Mar  7  2016 ssl-client.xml.example

-rw-r--r-- 1 hadoop hadoop  2268 Mar  7  2016 ssl-server.xml.example

-rw-r--r-- 1 hadoop hadoop  2237 Mar  7  2016 yarn-env.cmd

-rw-r--r-- 1 hadoop hadoop  4567 Mar  7  2016 yarn-env.sh

-rw-r--r-- 1 hadoop hadoop   690 Mar  7  2016 yarn-site.xml

4.2.1.    Modify the HDFS configuration files

4.2.1.1.  Configure hadoop-env.sh

The main setting here is JAVA_HOME. It can sometimes be picked up from the environment, but it is safer to write your JDK path explicitly into hadoop-env.sh, mapred-env.sh, and yarn-env.sh (the line may be commented out; remove the leading # to enable it).
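For example, each of the three env files should end up containing a line like the following (using the JDK path from section 2.9):

export JAVA_HOME=/usr/local/java/jdk1.8.0_181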

4.2.1.2.  Configure core-site.xml

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

  Licensed under the Apache License, Version 2.0 (the "License");

  you may not use this file except in compliance with the License.

  You may obtain a copy of the License at

 

    http://www.apache.org/licenses/LICENSE-2.0

 

  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License. See accompanying LICENSE file.

-->

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>
        <!-- The default HDFS path: the two NameNode addresses are grouped into the logical cluster "cluster1" -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://cluster1</value>
        </property>
        <!-- Hadoop temporary directory; several directories may be listed separated by commas; the data directory has to be created by hand -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/data/tmp</value>
        </property>
        <!-- ZooKeeper quorum used by ZKFC for automatic failover; ZooKeeper manages HDFS HA -->
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
        </property>
</configuration>


4.2.1.3.  Configure hdfs-site.xml

[hadoop@hadoop1 hadoop]$ cat hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

  Licensed under the Apache License, Version 2.0 (the "License");

  you may not use this file except in compliance with the License.

  You may obtain a copy of the License at

 

    http://www.apache.org/licenses/LICENSE-2.0

 

  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License. See accompanying LICENSE file.

-->

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>
        <!-- Block replication factor: 3 -->
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <!-- Disable HDFS permission checking -->
        <property>
                <name>dfs.permissions.enabled</name>
                <value>false</value>
        </property>
        <!-- Nameservice ID; must correspond to the value of fs.defaultFS. With NameNode HA there are two NameNodes, and cluster1 is the single logical entry point exposed to clients -->
        <property>
                <name>dfs.nameservices</name>
                <value>cluster1</value>
        </property>
        <!-- The NameNodes belonging to nameservice cluster1; these are logical names and only need to be unique -->
        <property>
                <name>dfs.ha.namenodes.cluster1</name>
                <value>hadoop1,hadoop2</value>
        </property>
        <!-- RPC address of the NameNode on hadoop1 -->
        <property>
                <name>dfs.namenode.rpc-address.cluster1.hadoop1</name>
                <value>hadoop1:9000</value>
        </property>
        <!-- HTTP address of the NameNode on hadoop1 (external access) -->
        <property>
                <name>dfs.namenode.http-address.cluster1.hadoop1</name>
                <value>hadoop1:50070</value>
        </property>
        <!-- RPC address of the NameNode on hadoop2 -->
        <property>
                <name>dfs.namenode.rpc-address.cluster1.hadoop2</name>
                <value>hadoop2:9000</value>
        </property>
        <!-- HTTP address of the NameNode on hadoop2 (external access) -->
        <property>
                <name>dfs.namenode.http-address.cluster1.hadoop2</name>
                <value>hadoop2:50070</value>
        </property>
        <!-- Where the NameNode edit log (metadata) is stored on the JournalNodes (usually co-located with ZooKeeper) -->
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/cluster1</value>
        </property>
        <!-- Local disk path where each JournalNode stores the shared edits -->
        <property>
                <name>dfs.journalnode.edits.dir</name>
                <value>/home/hadoop/data/journaldata</value>
        </property>
        <!-- Java class that clients use, through the cluster1 proxy, to determine which NameNode is Active and to perform failover when cluster1's Active node fails -->
        <property>
                <name>dfs.client.failover.proxy.provider.cluster1</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <!-- Fencing method used during automatic failover. Several values are possible (see the official docs): shell(/bin/true) is a do-nothing script that returns 0, while sshfence logs in remotely and kills the old Active NameNode -->
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>shell(/bin/true)</value>
        </property>
        <!-- Passwordless SSH key; only needed when the sshfence fencing method is used -->
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/home/hadoop/.ssh/id_rsa</value>
        </property>
        <!-- sshfence connection timeout (split-brain protection); may be left out when the shell fencing method is used -->
        <property>
                <name>dfs.ha.fencing.ssh.connect-timeout</name>
                <value>10000</value>
        </property>
        <!-- Enable automatic failover; can be omitted or commented out if automatic failover is not wanted -->
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
</configuration>

4.2.1.4.  Configure slaves

DataNode and NodeManager placement is determined by the slaves file (it contains localhost by default; simply delete that entry).

Whichever hosts should run DataNode are listed in the slaves file. When HDFS is started, the configuration is read and the slaves file is scanned: a DataNode is started on every host listed there. When YARN is started, the slaves file is scanned the same way: a NodeManager is started on every host listed.

[hadoop@hadoop1 hadoop]$ cat slaves

hadoop2

hadoop3

 

4.2.2.    Modify the YARN configuration files

4.2.2.1.  Configure mapred-site.xml

[hadoop@hadoop1 hadoop]$  mv mapred-site.xml.template mapred-site.xml

[hadoop@hadoop1 hadoop]$ cat mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

  Licensed under the Apache License, Version 2.0 (the "License");

  you may not use this file except in compliance with the License.

  You may obtain a copy of the License at

 

    http://www.apache.org/licenses/LICENSE-2.0

 

  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License. See accompanying LICENSE file.

-->

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

<!-- Run MapReduce on YARN -->

    <property>

     <name>mapreduce.framework.name</name>

     <value>yarn</value>

    </property>

</configuration>

 

4.2.2.2.  Configure yarn-site.xml

[hadoop@hadoop1 hadoop]$ cat yarn-site.xml

<?xml version="1.0"?>

<!--

  Licensed under the Apache License, Version 2.0 (the "License");

  you may not use this file except in compliance with the License.

  You may obtain a copy of the License at

 

    http://www.apache.org/licenses/LICENSE-2.0

 

  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License. See accompanying LICENSE file.

-->

<configuration>

 

<!-- Site specific YARN configuration properties -->

<!-- How reducers fetch data: the shuffle auxiliary service -->

 <property>

        <name>yarn.nodemanager.aux-services</name>

        <value>mapreduce_shuffle</value>

    </property>

    <!-- Enable ResourceManager HA -->

    <property>

        <name>yarn.resourcemanager.ha.enabled</name>

        <value>true</value>

    </property>

    <!-- Declare the cluster id and the addresses of the two ResourceManagers -->

    <property>

        <name>yarn.resourcemanager.cluster-id</name>

        <value>rmCluster</value>

    </property>

    <property>

        <name>yarn.resourcemanager.ha.rm-ids</name>

        <value>rm1,rm2</value>

    </property>

    <property>

        <name>yarn.resourcemanager.hostname.rm1</name>

        <value>hadoop1</value>

    </property>

    <property>

        <name>yarn.resourcemanager.hostname.rm2</name>

        <value>hadoop2</value>

    </property>

    <!-- Address of the ZooKeeper ensemble -->

    <property>

        <name>yarn.resourcemanager.zk-address</name>

        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>

    </property>

    <!-- Enable automatic recovery -->

    <property>

        <name>yarn.resourcemanager.recovery.enabled</name>

        <value>true</value>

    </property>

    <!-- Store ResourceManager state in the ZooKeeper ensemble -->

    <property>

        <name>yarn.resourcemanager.store.class</name>   

        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>

    </property>

</configuration>

 

4.2.3.    Distribute the Hadoop directory to all nodes

[hadoop@hadoop1 hadoop]$ for i in `seq 2 3`;do scp -r /home/hadoop/app/hadoop hadoop$i:/home/hadoop/app/;done

 

4.2.4.    Update the Hadoop environment variables

Switch to the root user and run on hadoop1:

[root@hadoop1 ~]# cat /etc/profile

# /etc/profile

 

# System wide environment and startup programs, for login setup

# Functions and aliases go in /etc/bashrc

 

# It's NOT a good idea to change this file unless you know what you

# are doing. It's much better to create a custom.sh shell script in

# /etc/profile.d/ to make custom changes to your environment, as this

# will prevent the need for merging in future updates.

 

pathmunge () {

    case ":${PATH}:" in

        *:"$1":*)

            ;;

        *)

            if [ "$2" = "after" ] ; then

                PATH=$PATH:$1

            else

                PATH=$1:$PATH

            fi

    esac

}

 

 

if [ -x /usr/bin/id ]; then

    if [ -z "$EUID" ]; then

        # ksh workaround

        EUID=`id -u`

        UID=`id -ru`

    fi

    USER="`id -un`"

    LOGNAME=$USER

    MAIL="/var/spool/mail/$USER"

fi

 

# Path manipulation

if [ "$EUID" = "0" ]; then

    pathmunge /usr/sbin

    pathmunge /usr/local/sbin

else

    pathmunge /usr/local/sbin after

    pathmunge /usr/sbin after

fi

 

HOSTNAME=`/usr/bin/hostname 2>/dev/null`

HISTSIZE=1000

if [ "$HISTCONTROL" = "ignorespace" ] ; then

    export HISTCONTROL=ignoreboth

else

    export HISTCONTROL=ignoredups

fi

 

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

 

# By default, we want umask to get set. This sets it for login shell

# Current threshold for system reserved uid/gids is 200

# You could check uidgid reservation validity in

# /usr/share/doc/setup-*/uidgid file

if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then

    umask 002

else

    umask 022

fi

 

for i in /etc/profile.d/*.sh ; do

    if [ -r "$i" ]; then

        if [ "${-#*i}" != "$-" ]; then

            . "$i"

        else

            . "$i" >/dev/null

        fi

    fi

done

 

unset i

unset -f pathmunge

export JAVA_HOME=/usr/local/java/jdk1.8.0_181

export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper

export HADOOP_HOME=/home/hadoop/app/hadoop

export JRE_HOME=${JAVA_HOME}/jre

export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

export PATH=${JAVA_HOME}/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

 

[root@hadoop1 ~]# for i in `seq 2 3`;do scp /etc/profile hadoop$i:/etc/profile;done

On hadoop1 through hadoop3, run: source /etc/profile

5.   Starting the Cluster

Prerequisites:

Set the JAVA_HOME environment variable in the following files:

mapred-env.sh

yarn-env.sh

hadoop-env.sh

export JAVA_HOME=/usr/local/java/jdk1.8.0_181

 

5.1. Start ZooKeeper

[hadoop@hadoop1 ~]$ zkServer.sh  start

[hadoop@hadoop2 ~]$ zkServer.sh  start

[hadoop@hadoop3 ~]$ zkServer.sh  start

Check the status:

[hadoop@hadoop1 ~]$ zkServer.sh  status

ZooKeeper JMX enabled by default

Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg

Mode: follower

[hadoop@hadoop2 ~]$ zkServer.sh  status

ZooKeeper JMX enabled by default

Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg

Mode: leader

[hadoop@hadoop3 ~]$ zkServer.sh  status

ZooKeeper JMX enabled by default

Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg

Mode: follower

[hadoop@hadoop1 ~]$ jps

QuorumPeerMain (present)

[hadoop@hadoop2 ~]$ jps

QuorumPeerMain (present)

[hadoop@hadoop3 ~]$ jps

QuorumPeerMain (present)

 

5.2. Start the JournalNode service

The JournalNodes are started first so that the directory holding the NameNode's shared edit log (metadata) can be created.

[hadoop@hadoop1 ~]$ hadoop-daemon.sh start journalnode

[hadoop@hadoop2 ~]$ hadoop-daemon.sh start journalnode

[hadoop@hadoop3 ~]$ hadoop-daemon.sh start journalnode

Verify the result:

jps shows a JournalNode process on all three nodes.

The corresponding directory has been created successfully under the data directory.

 

5.3. Format the primary NameNode

Run the following on the primary NameNode (hadoop1):

[hadoop@hadoop1 ~]$ hdfs namenode -format

Verify the result: the NameNode metadata directories now exist under the data directory.

 

5.4. Start the primary NameNode

[hadoop@hadoop1 ~]$ hadoop-daemon.sh start namenode

jps shows that the NameNode process is running.

5.5. Sync the primary NameNode (hadoop1) metadata to the standby NameNode (hadoop2)

Run the sync command on the standby NameNode (hadoop2):

[hadoop@hadoop2 ~]$ hdfs namenode -bootstrapStandby

Verify the result: the same metadata directories and files now exist under the data directory on hadoop2.

 

5.6. Format ZKFC on the primary NameNode only (this creates the znode in ZooKeeper)

[hadoop@hadoop1 ~]$ hdfs zkfc -formatZK

Verify with zkCli.sh that the znode now exists.
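One way to check, with illustrative output (hadoop-ha is the parent znode created by the formatZK step):

[hadoop@hadoop1 ~]$ zkCli.sh -server hadoop1:2181
[zk: hadoop1:2181(CONNECTED) 0] ls /
[hadoop-ha, zookeeper]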

 

5.7. Stop the primary NameNode and all JournalNode processes

[hadoop@hadoop1 ~]$ hadoop-daemon.sh stop namenode

[hadoop@hadoop1 ~]$ hadoop-daemon.sh stop journalnode

[hadoop@hadoop2 ~]$ hadoop-daemon.sh stop journalnode

[hadoop@hadoop3 ~]$ hadoop-daemon.sh stop journalnode

jps now shows only the ZooKeeper process (QuorumPeerMain).

5.8. Start HDFS

[hadoop@hadoop1 ~]$ start-dfs.sh

Check the processes with jps: per the deployment plan, NameNode, DFSZKFailoverController (ZKFC), DataNode, and JournalNode should now be running on the appropriate nodes.

5.9. Troubleshooting

After reformatting, jps shows everything else as normal, but there is no DataNode process.

Check the error in the DataNode logs on hadoop2 and hadoop3.

Analysis:

The DataNode's clusterID does not match the NameNode's clusterID.

Check the clusterID on the NameNode: it is stored in the current/VERSION file under the NameNode's metadata directory.

cat VERSION

Check the clusterID of every DataNode in the same way, in the current/VERSION file under its data directory.

Copy the NameNode's clusterID over the DataNode's clusterID in each DataNode's VERSION file.

The DataNodes then start normally.

6.   Starting the YARN Cluster

6.1. Start the YARN cluster on the primary NameNode node
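With the standard Hadoop 2.x scripts this is done as follows; start-yarn.sh starts the ResourceManager on the local node and a NodeManager on every host listed in slaves:

[hadoop@hadoop1 ~]$ start-yarn.sh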

 

6.2. Manually start the ResourceManager on the standby node
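Because start-yarn.sh only starts the ResourceManager on the node where it is run, the second ResourceManager (rm2, i.e. hadoop2 in yarn-site.xml) has to be started by hand:

[hadoop@hadoop2 ~]$ yarn-daemon.sh start resourcemanager
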

6.3. Access YARN
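The YARN web UI is served by the active ResourceManager on its default port 8088, e.g. http://hadoop1:8088 (a request to the standby ResourceManager is normally redirected to the active one).
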

6.4. Cluster role distribution
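The final role distribution can be compared against the plan in section 1.2 with a quick jps sweep from hadoop1 (a convenience sketch; if jps is not found over ssh, use its full path under the JDK's bin directory):

[hadoop@hadoop1 ~]$ for i in `seq 1 3`;do echo ===== hadoop$i =====; ssh hadoop$i jps;done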

