HBase HA Distributed Cluster Deployment (for 3- and 5-node clusters)


This post covers:

- Installing HBase in distributed mode (3 or 5 nodes)
  - Starting HBase in distributed mode (3 or 5 nodes)
- Installing the HBase HA distributed cluster
- Starting the HBase HA distributed cluster
  - HBase HA failover


HBase HA Distributed Cluster Setup: Cluster Architecture


HBase HA Distributed Cluster Setup: Installation Steps


Installing HBase in distributed mode (3/5 nodes)

1. Bring djt11, djt12, djt13, djt14 and djt15 back to a clean state with no running processes.

[hadoop@djt11 hadoop]$ pwd

[hadoop@djt11 hadoop]$ jps

 

 

[hadoop@djt12 hadoop]$ jps

 

[hadoop@djt13 hadoop]$ jps

 

[hadoop@djt14 hadoop]$ jps

 

[hadoop@djt15 hadoop]$ jps

 

 

 

2. Change to the app installation directory and upload the HBase tarball (the rz command below receives the file from the local terminal over ZMODEM).

 

 

[hadoop@djt11 hadoop]$ pwd

[hadoop@djt11 hadoop]$ cd ..

[hadoop@djt11 app]$ pwd

[hadoop@djt11 app]$ ls

[hadoop@djt11 app]$ rz

[hadoop@djt11 app]$ ls

 

[hadoop@djt11 app]$ tar -zxvf hbase-0.98.19-hadoop2-bin.tar.gz

 

[hadoop@djt11 app]$ pwd

[hadoop@djt11 app]$ ls

[hadoop@djt11 app]$ mv hbase-0.98.19-hadoop2 hbase

[hadoop@djt11 app]$ rm -rf hbase-0.98.19-hadoop2-bin.tar.gz

[hadoop@djt11 app]$ ls

[hadoop@djt11 app]$ pwd

[hadoop@djt11 app]$

 

[hadoop@djt11 app]$ ls

[hadoop@djt11 app]$ cd hbase/

[hadoop@djt11 hbase]$ ls

[hadoop@djt11 hbase]$ cd conf/

[hadoop@djt11 conf]$ pwd

[hadoop@djt11 conf]$ ls

[hadoop@djt11 conf]$ vi regionservers

 

djt13

djt14

djt15

 

[hadoop@djt11 conf]$ vi backup-masters

 

djt12
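The two files above can also be written non-interactively instead of through vi. A minimal sketch, using the hostnames from this walkthrough and writing into a demo directory so it is safe to run anywhere (point conf_dir at $HBASE_HOME/conf on a real node):

```shell
conf_dir="./hbase-conf-demo"   # use $HBASE_HOME/conf on a real node
mkdir -p "$conf_dir"

# regionservers: one region server hostname per line
printf '%s\n' djt13 djt14 djt15 > "$conf_dir/regionservers"
# backup-masters: hosts that run standby HMaster processes
printf '%s\n' djt12 > "$conf_dir/backup-masters"

cat "$conf_dir/regionservers"
```

start-hbase.sh reads both files on the node where it is run, which is why they only strictly need to be correct on the master.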

 

[hadoop@djt11 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/core-site.xml ./
[hadoop@djt11 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/hdfs-site.xml ./

 

[hadoop@djt11 conf]$ vi hbase-site.xml

 

<configuration>

        <property>

                <name>hbase.zookeeper.quorum</name>

                <value>djt11,djt12,djt13,djt14,djt15</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.dataDir</name>

                <value>/home/hadoop/data/zookeeper/zkdata</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.clientPort</name>

                <value>2181</value>

        </property>

        <property>

                <name>hbase.rootdir</name>

                <value>hdfs://cluster1/hbase</value>

        </property>

        <property>

                <name>hbase.cluster.distributed</name>

                <value>true</value>

        </property>

        <property>

<name>hbase.tmp.dir</name>

<value>/home/hadoop/data/tmp/hbase</value>

         </property>

 

        <property>

                <name>hbase.master</name>

                <value>hdfs://cluster1:60000</value>

        </property>

</configuration>
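The same settings can be generated with a here-document, which avoids typos such as a fullwidth period sneaking into a property name. A sketch that writes to a demo directory (point conf_dir at $HBASE_HOME/conf on a real node; only the core properties are included here):

```shell
conf_dir="./hbase-conf-demo"
mkdir -p "$conf_dir"

# Quoted EOF keeps the content literal (no variable expansion).
cat > "$conf_dir/hbase-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>djt11,djt12,djt13,djt14,djt15</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/data/zookeeper/zkdata</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://cluster1/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF

grep -c '<property>' "$conf_dir/hbase-site.xml"
```

If libxml2 is installed, `xmllint --noout "$conf_dir/hbase-site.xml"` is a quick well-formedness check before restarting HBase.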

 

 

vi hbase-env.sh

 

#export JAVA_HOME=/usr/java/jdk1.6.0/

 

Change it to:

export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79

export HBASE_MANAGES_ZK=true

 

A note on the two ZooKeeper process names:

With HBASE_MANAGES_ZK=true, HBase runs ZooKeeper as part of its own startup, and the ZooKeeper JVM appears in jps as HQuorumPeer.

With HBASE_MANAGES_ZK=false, you must start ZooKeeper by hand before starting HBase, and the process appears in jps as QuorumPeerMain.
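The distinction above can be checked on a live node by looking at the running JVMs. A small sketch (it degrades gracefully when jps is unavailable, so it can be run anywhere):

```shell
# HQuorumPeer    -> ZooKeeper started by HBase (HBASE_MANAGES_ZK=true)
# QuorumPeerMain -> standalone ZooKeeper started by hand (HBASE_MANAGES_ZK=false)
procs=$(jps 2>/dev/null || true)
if echo "$procs" | grep -q HQuorumPeer; then
  zk_mode="hbase-managed"
elif echo "$procs" | grep -q QuorumPeerMain; then
  zk_mode="external"
else
  zk_mode="none-running"
fi
echo "ZooKeeper mode: $zk_mode"
```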

 

[hadoop@djt11 conf]$ pwd

[hadoop@djt11 conf]$ su root

[root@djt11 conf]# pwd

[root@djt11 conf]# vi /etc/profile

 

 

JAVA_HOME=/home/hadoop/app/jdk1.7.0_79

ZOOKEEPER_HOME=/home/hadoop/app/zookeeper

HADOOP_HOME=/home/hadoop/app/hadoop

HIVE_HOME=/home/hadoop/app/hive

CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:/home/hadoop/tools:$PATH

export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME HADOOP_HOME HIVE_HOME

 

 

JAVA_HOME=/home/hadoop/app/jdk1.7.0_79

ZOOKEEPER_HOME=/home/hadoop/app/zookeeper

HADOOP_HOME=/home/hadoop/app/hadoop

HIVE_HOME=/home/hadoop/app/hive

HBASE_HOME=/home/hadoop/app/hbase

CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin:/home/hadoop/tools:$PATH

export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME HADOOP_HOME HIVE_HOME HBASE_HOME

 

[root@djt11 conf]# source /etc/profile

[root@djt11 conf]# su hadoop
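After sourcing /etc/profile it is worth confirming that the new variables are actually visible in the shell. A sketch of the check (the exports below stand in for what /etc/profile sets on a real node; the paths are the ones used in this walkthrough):

```shell
# On a real node these come from /etc/profile; set here so the sketch is
# self-contained.
export HBASE_HOME=/home/hadoop/app/hbase
export PATH="$HBASE_HOME/bin:$PATH"

echo "HBASE_HOME=$HBASE_HOME"
# POSIX-safe substring test: is $HBASE_HOME/bin one of the PATH entries?
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) path_ok=yes ;;
  *) path_ok=no ;;
esac
echo "hbase on PATH: $path_ok"
```

Note that `source /etc/profile` as root does not affect the hadoop user's existing shells; each user should re-source it (or log in again).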

 

This script was already written and verified when we set up the 5-node Hadoop cluster earlier. Here we only review it as a refresher; no changes are needed.

 

 

 

Distribute the hbase directory on djt11 to the slave nodes, i.e. djt12, djt13, djt14 and djt15.

[hadoop@djt11 tools]$ pwd

[hadoop@djt11 tools]$ cd /home/hadoop/app/

[hadoop@djt11 app]$ ls

[hadoop@djt11 app]$ deploy.sh hbase /home/hadoop/app/ slave

 

Check the nodes after distribution.

The files are in place, so the distribution succeeded.

Next, configure djt12, djt13, djt14 and djt15 the same way as djt11.
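deploy.sh is a custom helper written during the earlier Hadoop setup. If you do not have it, a plain loop over rsync does the same job. A sketch; the rsync line is commented out so it is safe to run outside the cluster:

```shell
src=/home/hadoop/app/hbase
for host in djt12 djt13 djt14 djt15; do
  echo "would sync $src to hadoop@$host:/home/hadoop/app/"
  # rsync -az "$src" "hadoop@$host:/home/hadoop/app/"   # uncomment on the cluster
done
```

This relies on the passwordless SSH already configured between the nodes for Hadoop.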

 

 

Configuring djt12:

[hadoop@djt12 hbase]$ pwd

[hadoop@djt12 hbase]$ ls

[hadoop@djt12 hbase]$ cd conf/

[hadoop@djt12 conf]$ pwd

[hadoop@djt12 conf]$ ls

 

[hadoop@djt12 conf]$ vi regionservers

 

Already configured (distributed from djt11).

 

[hadoop@djt12 conf]$ vi backup-masters

 

Also already configured.

 

[hadoop@djt12 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/core-site.xml ./
[hadoop@djt12 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/hdfs-site.xml ./ 

 

[hadoop@djt12 conf]$ vi hbase-site.xml

 

<configuration>

        <property>

                <name>hbase.zookeeper.quorum</name>

                <value>djt11,djt12,djt13,djt14,djt15</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.dataDir</name>

                <value>/home/hadoop/data/zookeeper/zkdata</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.clientPort</name>

                <value>2181</value>

        </property>

        <property>

                <name>hbase.rootdir</name>

                <value>hdfs://cluster1/hbase</value>

        </property>

        <property>

                <name>hbase.cluster.distributed</name>

                <value>true</value>

        </property>

        <property>

                <name>hbase.master</name>

                <value>hdfs://cluster1:60000</value>

        </property>

</configuration>

 

 

[hadoop@djt12 conf]$ vi hbase-env.sh

 

[hadoop@djt12 conf]$ pwd

[hadoop@djt12 conf]$ su root

[root@djt12 conf]# pwd

[root@djt12 conf]# vi /etc/profile

 

[root@djt12 conf]# cd ..

[root@djt12 hbase]# pwd

[root@djt12 hbase]# su hadoop

[hadoop@djt12 hbase]$ pwd

[hadoop@djt12 hbase]$ ls

[hadoop@djt12 hbase]$

 

 

Configuring djt13:

[hadoop@djt13 app]$ pwd

[hadoop@djt13 app]$ ls

[hadoop@djt13 app]$ cd hbase/

[hadoop@djt13 hbase]$ ls

[hadoop@djt13 hbase]$ cd conf/

[hadoop@djt13 conf]$ ls

[hadoop@djt13 conf]$ vi regionservers

 

[hadoop@djt13 conf]$ vi backup-masters

 

[hadoop@djt13 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/core-site.xml ./
[hadoop@djt13 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/hdfs-site.xml ./

 

[hadoop@djt13 conf]$ vi hbase-site.xml

 

<configuration>

        <property>

                <name>hbase.zookeeper.quorum</name>

                <value>djt11,djt12,djt13,djt14,djt15</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.dataDir</name>

                <value>/home/hadoop/data/zookeeper/zkdata</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.clientPort</name>

                <value>2181</value>

        </property>

        <property>

                <name>hbase.rootdir</name>

                <value>hdfs://cluster1/hbase</value>

        </property>

        <property>

                <name>hbase.cluster.distributed</name>

                <value>true</value>

        </property>

        <property>

                <name>hbase.master</name>

                <value>hdfs://cluster1:60000</value>

        </property>

</configuration>

(Note: the screenshot here is wrong!) Because this is an HA setup, the value must be cluster1 rather than the single host djt11; cluster1 covers both djt11 and djt12.

 

[hadoop@djt13 conf]$ pwd

[hadoop@djt13 conf]$ su root

[root@djt13 conf]# pwd

[root@djt13 conf]# vi /etc/profile

 

JAVA_HOME=/home/hadoop/app/jdk1.7.0_79

ZOOKEEPER_HOME=/home/hadoop/app/zookeeper

HADOOP_HOME=/home/hadoop/app/hadoop

HIVE_HOME=/home/hadoop/app/hive

HBASE_HOME=/home/hadoop/app/hbase

CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin:/home/hadoop/tools:$PATH

export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME HADOOP_HOME HIVE_HOME HBASE_HOME

 


[root@djt13 conf]# source /etc/profile

[root@djt13 conf]# cd ..

[root@djt13 hbase]# pwd

[root@djt13 hbase]# su hadoop

[hadoop@djt13 hbase]$ pwd

[hadoop@djt13 hbase]$ ls

[hadoop@djt13 hbase]$

 

Configuring djt14:

[hadoop@djt14 app]$ pwd

[hadoop@djt14 app]$ ls

[hadoop@djt14 app]$ cd hbase/

[hadoop@djt14 hbase]$ ls

[hadoop@djt14 hbase]$ cd conf/

[hadoop@djt14 conf]$ ls

[hadoop@djt14 conf]$ vi regionservers

 

[hadoop@djt14 conf]$ vi backup-masters

 

[hadoop@djt14 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/core-site.xml ./
[hadoop@djt14 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/hdfs-site.xml ./ 

 

[hadoop@djt14 conf]$ vi hbase-site.xml

 

<configuration>

        <property>

                <name>hbase.zookeeper.quorum</name>

                <value>djt11,djt12,djt13,djt14,djt15</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.dataDir</name>

                <value>/home/hadoop/data/zookeeper/zkdata</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.clientPort</name>

                <value>2181</value>

        </property>

        <property>

                <name>hbase.rootdir</name>

                <value>hdfs://cluster1/hbase</value>

        </property>

        <property>

                <name>hbase.cluster.distributed</name>

                <value>true</value>

        </property>

        <property>

                <name>hbase.master</name>

                <value>hdfs://cluster1:60000</value>

        </property>

</configuration>

 

 

[hadoop@djt14 conf]$ vi hbase-env.sh

 

[hadoop@djt14 conf]$ pwd

[hadoop@djt14 conf]$ su root

[root@djt14 conf]# pwd

[root@djt14 conf]# vi /etc/profile

 


[root@djt14 conf]# source /etc/profile

[root@djt14 conf]# cd ..

[root@djt14 hbase]# pwd

[root@djt14 hbase]# su hadoop

[hadoop@djt14 hbase]$ pwd

[hadoop@djt14 hbase]$ ls

[hadoop@djt14 hbase]$

 

 

 

Configuring djt15:

 

[hadoop@djt15 app]$ pwd

[hadoop@djt15 app]$ ls

[hadoop@djt15 app]$ cd hbase/

[hadoop@djt15 hbase]$ ls

[hadoop@djt15 hbase]$ cd conf/

[hadoop@djt15 conf]$ ls

[hadoop@djt15 conf]$ vi regionservers

 

[hadoop@djt15 conf]$ vi backup-masters

 

[hadoop@djt15 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/core-site.xml ./
[hadoop@djt15 conf]$ cp /home/hadoop/app/hadoop/etc/hadoop/hdfs-site.xml ./ 

 

[hadoop@djt15 conf]$ vi hbase-site.xml

 

<configuration>

        <property>

                <name>hbase.zookeeper.quorum</name>

                <value>djt11,djt12,djt13,djt14,djt15</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.dataDir</name>

                <value>/home/hadoop/data/zookeeper/zkdata</value>

        </property>

        <property>

                <name>hbase.zookeeper.property.clientPort</name>

                <value>2181</value>

        </property>

        <property>

                <name>hbase.rootdir</name>

                <value>hdfs://cluster1/hbase</value>

        </property>

        <property>

                <name>hbase.cluster.distributed</name>

                <value>true</value>

        </property>

        <property>

                <name>hbase.master</name>

                <value>hdfs://cluster1:60000</value>

        </property>

</configuration>

 

[hadoop@djt15 conf]$ vi hbase-env.sh

 

[hadoop@djt15 conf]$ pwd

[hadoop@djt15 conf]$ su root

[root@djt15 conf]# pwd

[root@djt15 conf]# vi /etc/profile

 


[root@djt15 conf]# source /etc/profile

[root@djt15 conf]# cd ..

[root@djt15 hbase]# pwd

[root@djt15 hbase]# su hadoop

[hadoop@djt15 hbase]$ pwd

[hadoop@djt15 hbase]$ ls

[hadoop@djt15 hbase]$

 

 

Starting HBase in distributed mode (3/5 nodes)

 

 

 

Here we only need to run sbin/start-dfs.sh.

There is no need for sbin/start-all.sh (which runs both sbin/start-dfs.sh and sbin/start-yarn.sh), since HBase does not use YARN.

ZooKeeper must also be running because HBase is built on top of ZooKeeper; the data itself is stored on HDFS.
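The startup order above can be written down as a short script. A sketch; the actual launch lines are commented out so it can be read and run anywhere, with the paths from this walkthrough:

```shell
# HDFS must come up first because hbase.rootdir points at hdfs://cluster1/hbase.
# start-hbase.sh then also launches ZooKeeper, since HBASE_MANAGES_ZK=true.
HADOOP_HOME=/home/hadoop/app/hadoop
HBASE_HOME=/home/hadoop/app/hbase

startup_order="$HADOOP_HOME/sbin/start-dfs.sh $HBASE_HOME/bin/start-hbase.sh"
for cmd in $startup_order; do
  echo "step: $cmd"
  # "$cmd"   # uncomment on djt11 to actually start the services
done
```

Shutdown goes in the reverse order: stop-hbase.sh first, then stop-dfs.sh.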

 

 

[hadoop@djt11 app]$ jps

[hadoop@djt11 app]$ ls

[hadoop@djt11 app]$ cd hadoop/

[hadoop@djt11 hadoop]$ ls

[hadoop@djt11 hadoop]$ sbin/start-dfs.sh

 

[hadoop@djt11 hadoop]$ jps

 

[hadoop@djt12 hadoop]$ jps

 

[hadoop@djt13 app]$ cd hadoop/

[hadoop@djt13 hadoop]$ jps

 

[hadoop@djt14 app]$ cd hadoop/

[hadoop@djt14 hadoop]$ pwd

[hadoop@djt14 hadoop]$ jps

 

 

[hadoop@djt15 app]$ cd hadoop/

[hadoop@djt15 hadoop]$ pwd

[hadoop@djt15 hadoop]$ jps

 

[hadoop@djt11 hbase]$ bin/start-hbase.sh

[hadoop@djt11 hbase]$ jps

 

[hadoop@djt12 hbase]$ cd ..

[hadoop@djt12 app]$ ls

[hadoop@djt12 app]$ cd hbase/

[hadoop@djt12 hbase]$ pwd

[hadoop@djt12 hbase]$ jps

 

[hadoop@djt13 hbase]$ cd ..

[hadoop@djt13 app]$ ls

[hadoop@djt13 app]$ cd hbase/

[hadoop@djt13 hbase]$ pwd

[hadoop@djt13 hbase]$ jps

 

[hadoop@djt14 hbase]$ cd ..

[hadoop@djt14 app]$ ls

[hadoop@djt14 app]$ cd hbase/

[hadoop@djt14 hbase]$ pwd

[hadoop@djt14 hbase]$ jps

 

[hadoop@djt15 hbase]$ cd ..

[hadoop@djt15 app]$ ls

[hadoop@djt15 app]$ cd hbase/

[hadoop@djt15 hbase]$ pwd

[hadoop@djt15 hbase]$ jps

 


Now, if the master on djt11 is killed, its web UI can no longer be reached.


Then we start the master on djt11 again.


djt11 then goes from unreachable to being a backup master, while djt12 remains the active master.

Success!
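The failover test above can be reproduced from the shell. A sketch to run on the active master (djt11 here); the destructive lines are commented out so it is safe elsewhere:

```shell
# jps prints lines like "12345 HMaster"; take the pid column of the master JVM.
pid=$(jps 2>/dev/null | awk '/HMaster$/ {print $1; exit}') || pid=""
if [ -n "$pid" ]; then
  echo "active HMaster pid: $pid"
  # kill -9 "$pid"                                           # simulate a crash
  # /home/hadoop/app/hbase/bin/hbase-daemon.sh start master  # bring it back later
else
  echo "no HMaster running on this node"
fi
```

After the kill, the backup master listed in backup-masters (djt12) wins the ZooKeeper master election and takes over; restarting the old master registers it as the new backup.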

