Zookeeper Cluster Setup (Multi-Node, Single-Machine Pseudo-Cluster, Docker Cluster)


Introduction to Zookeeper

Overview

ZooKeeper is a distributed, open-source coordination service for distributed applications. It exposes a simple set of primitives that distributed applications can build upon to implement higher-level services for synchronization, configuration maintenance, groups, and naming. It is designed to be easy to program against, and it uses a data model styled after the familiar directory tree structure of file systems. It runs in Java and has bindings for both Java and C.

Cluster Model

In a Zookeeper cluster, a server can take one of three roles and be in one of four states:

Roles:  leader, follower, observer
States: leading, following, observing, looking

Zookeeper server states

LOOKING: the server is currently searching for a leader.
LEADING: the server has been elected leader.
FOLLOWING: a leader has been elected, and the server is synchronizing with it.
OBSERVING: an observer behaves like a follower in most respects, but it does not take part in elections or voting; it only accepts (observes) their results.

The Zookeeper cluster model: (figure omitted)

Zookeeper stores its data in memory; its storage model resembles the Linux file system, a tree stored under the root node /.

Zookeeper nodes

The Zookeeper model consists of one leader and multiple followers, which means the leader is a single node, and a single node is exposed to single-machine failure. From that angle Zookeeper might look unreliable, yet in practice it is extremely reliable: in a sense it repairs itself quickly. When the leader fails, the followers communicate among themselves and rapidly elect a new leader. For the election to succeed, the cluster must satisfy:

available nodes > total nodes / 2

The election must also avoid split-brain. Split-brain typically occurs when nodes cannot reach one another: the cluster splits into smaller sub-clusters, each of which elects its own master, leaving the original cluster with multiple masters. The majority requirement above prevents this, since at most one partition can contain more than half of the nodes.
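To make the majority rule concrete (a small shell sketch, purely illustrative): an ensemble of n nodes needs floor(n/2) + 1 reachable nodes to elect or keep a leader, so 3 nodes tolerate 1 failure and 5 nodes tolerate 2, while 4 nodes still tolerate only 1.

# Quorum size for a few ensemble sizes: floor(n/2) + 1
for n in 3 4 5; do
  echo "ensemble=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( n - (n / 2 + 1) ))"
done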

Multi-Node Cluster

Downloading Zookeeper

Download address: http://archive.apache.org/dist/zookeeper/

From Zookeeper 3.5.5 onward, download the package whose name contains "bin"; that is the compiled, executable package. The package without "bin" in its name is the source package and cannot be used directly.

Environment Setup

External environment needed to build the Zookeeper cluster:

VMware Workstation 15
JDK1.8
zookeeper-3.5.6

Before building the cluster, check that the hosts can reach one another (a ping sketch follows the table):

Hostname   OS         IP
node1      CentOS 7   192.168.189.128
node2      CentOS 7   192.168.189.129
node3      CentOS 7   192.168.189.130
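A quick way to run this check from node1 (a minimal sketch using the IPs from the table above):

# Ping each node once and report reachability
for ip in 192.168.189.128 192.168.189.129 192.168.189.130; do
  ping -c 1 -W 1 "$ip" > /dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done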

Note: unless stated otherwise, all subsequent operations are performed on node1 (192.168.189.128).

Since the same commands need to be run on several Linux machines, here is a handy terminal tool:

Secure CRT

Open View -> Command Window; a blank command window appears at the bottom of the session. Type the command to run, right-click, and choose Send Commands to All Sessions to execute it in every VM session.
Use Secure CRT to create the directory on all three VMs:

mkdir /usr/local/zookeeper

Upload the Zookeeper package to /usr/local/zookeeper/ on 192.168.189.128 and unpack it:

tar -zxvf apache-zookeeper-3.5.6-bin.tar.gz 

Configure the Zookeeper environment variables (these go into /etc/profile, which is distributed to the other nodes later):

export ZOOKEEPER_HOME=/usr/local/zookeeper/apache-zookeeper-3.5.6-bin
export PATH=$PATH:$ZOOKEEPER_HOME/bin
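One way to persist these variables (a minimal sketch that simply appends the two lines above to /etc/profile and reloads it):

cat >> /etc/profile <<'EOF'
export ZOOKEEPER_HOME=/usr/local/zookeeper/apache-zookeeper-3.5.6-bin
export PATH=$PATH:$ZOOKEEPER_HOME/bin
EOF
source /etc/profile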

Enter the Zookeeper configuration directory /usr/local/zookeeper/apache-zookeeper-3.5.6-bin/conf and edit the configuration file:

cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

The configuration file:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/var/zookeeper/data
# the directory where the transaction log is stored
dataLogDir=/var/zookeeper/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Configuration notes

tickTime: the basic time unit, in milliseconds, that Zookeeper uses for heartbeats.
initLimit: how long a follower may take to connect and sync to the leader during initialization, measured in ticks; if the initial connection takes longer than initLimit * tickTime, it is considered failed (see the worked example after this list).
syncLimit: the allowed request/acknowledgement round-trip time between a follower and the leader, in ticks; a follower that cannot communicate with the leader within syncLimit * tickTime is dropped.
dataDir: the directory where Zookeeper keeps its data, including snapshot logs and transaction logs.
clientPort: the port on which the Zookeeper server communicates with clients and receives client requests.
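With the values from the file above, the effective timeouts work out as follows (a quick worked example):

# tickTime=2000 ms, initLimit=10 ticks, syncLimit=5 ticks
echo "initial sync timeout:  $(( 2000 * 10 )) ms"   # 20000 ms = 20 s
echo "follower sync timeout: $(( 2000 * 5 )) ms"    # 10000 ms = 10 s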

Append server.X=A:B:C entries at the end of zoo.cfg:

server.1=192.168.189.128:2888:3888
server.2=192.168.189.129:2888:3888
server.3=192.168.189.130:2888:3888

X: the server id of this Zookeeper server; the value must match the one in its myid file
A: the IP address of this Zookeeper server
B: the port for communication between the leader and followers
C: the port used for leader-election communication at startup

Using Secure CRT, create the dataDir and dataLogDir paths configured above on all three VMs:

dataDir:/var/zookeeper/data/
dataLogDir:/var/zookeeper/log/

Run the command:

mkdir -p /var/zookeeper/{data,log}

A few notes on Zookeeper server logs:

Zookeeper produces transaction logs, snapshot logs, and log4j logs. By default it stores both snapshots and transaction logs under the dataDir path configured in zoo.cfg. In practice you can give the transaction logs a dedicated location by setting dataLogDir, and it is recommended to keep the transaction logs (dataLogDir) separate from the snapshots (dataDir). When a Zookeeper cluster handles frequent reads and writes it generates a large volume of transaction log data; storing the two kinds of logs separately improves performance, and it also allows placing them on different storage devices, reducing disk pressure.

Zookeeper server ids are assigned manually: each machine's server id is planned before the cluster is deployed, so that the servers can elect a leader after startup. Create a myid file under the dataDir path configured in zoo.cfg, and write the X from the corresponding server.X entry into it:

echo 1 > /var/zookeeper/data/myid
ssh root@192.168.189.129 "echo 2 > /var/zookeeper/data/myid"
ssh root@192.168.189.130 "echo 3 > /var/zookeeper/data/myid"
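To double-check that every node got the right id (a small verification sketch):

# Print each node's myid over ssh
for ip in 192.168.189.128 192.168.189.129 192.168.189.130; do
  echo -n "$ip myid = "; ssh root@$ip cat /var/zookeeper/data/myid
done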

Once configuration is complete, copy /usr/local/zookeeper/apache-zookeeper-3.5.6-bin/ from node1 to node2 and node3 (run these from /usr/local/zookeeper so that `pwd` resolves to the same path on the targets):

scp -r /usr/local/zookeeper/apache-zookeeper-3.5.6-bin/ root@192.168.189.129:`pwd`
scp -r /usr/local/zookeeper/apache-zookeeper-3.5.6-bin/ root@192.168.189.130:`pwd`

Copy /etc/profile from node1 to node2 and node3 in the same way, then run the following on all sessions via Secure CRT:

source /etc/profile

Starting the Service

Run via Secure CRT:

zkServer.sh start-foreground | start

start-foreground: start in the foreground (blocking)
start: start in the background (as a daemon)

Check the Zookeeper server status; once the cluster is stable, one server reports Mode: leader and the rest report Mode: follower:

zkServer.sh status

After a normal start, connect with the Zookeeper client:

zkCli.sh -server ip:port | zkCli.sh

zkCli.sh -server ip:port: connect to the specified Zookeeper server
zkCli.sh: equivalent to the default zkCli.sh -server localhost:2181
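A quick smoke test of the cluster (a minimal sketch; the znode name /test and its value are arbitrary choices for illustration). Creating a znode through one server and reading it through another confirms that replication works:

# On node1: create a znode
zkCli.sh -server 192.168.189.128:2181
create /test "hello"
quit
# On node2: the same data should be visible
zkCli.sh -server 192.168.189.129:2181
get /test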

Startup Problems

While the servers are being started one by one, each server looks for the other configured hosts, and exceptions are thrown during this window. Wait a moment: once a leader has been elected and the cluster has stabilized, it can be accessed normally.
If Zookeeper keeps reporting the following error after startup:

java.net.NoRouteToHostException: No route to host

Firewall

Check the firewall status: systemctl status firewalld.service
Stop the firewall: systemctl stop firewalld.service
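Instead of disabling the firewall entirely, you can open just the Zookeeper ports (a sketch using firewalld; adjust for your zone configuration):

firewall-cmd --permanent --add-port=2181/tcp   # client port
firewall-cmd --permanent --add-port=2888/tcp   # leader/follower communication
firewall-cmd --permanent --add-port=3888/tcp   # leader election
firewall-cmd --reload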

SELinux

Check its status: /usr/sbin/sestatus -v
Disable temporarily: setenforce 0
Disable permanently: edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled

Intermediate routers

Check that the networks can reach each other.

Configuration files

Check that the server entries in zoo.cfg are correct and that the myid files are in place.

Single-Machine Pseudo-Cluster

In production Zookeeper clusters, each node is deployed on its own machine so that requests are handled efficiently. A pseudo-cluster is useful for learning when multiple virtual machines are not available. It follows the same plan as a real cluster deployment, except that each node gets its own data paths, client port, and quorum/election ports on a single machine, which is fairly simple; a quick simulation follows.
Create the configuration files zoo1.cfg, zoo2.cfg, and zoo3.cfg:

cp zoo_sample.cfg /usr/local/etc/zookeeper/zoo1.cfg
cp zoo_sample.cfg /usr/local/etc/zookeeper/zoo2.cfg
cp zoo_sample.cfg /usr/local/etc/zookeeper/zoo3.cfg

Configuration file contents:

zoo1.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/zookeeper/zoo1/data
dataLogDir=/usr/local/zookeeper/zoo1/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889

zoo2.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/zookeeper/zoo2/data
dataLogDir=/usr/local/zookeeper/zoo2/log
# the port at which the clients will connect
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1  
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889

zoo3.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/zookeeper/zoo3/data
dataLogDir=/usr/local/zookeeper/zoo3/log
# the port at which the clients will connect
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1 
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889

Create the dataDir and dataLogDir directories:

mkdir -p /usr/local/zookeeper/zoo1/{data,log}
mkdir -p /usr/local/zookeeper/zoo2/{data,log}
mkdir -p /usr/local/zookeeper/zoo3/{data,log}

Create the myid files:

echo 1 > /usr/local/zookeeper/zoo1/data/myid
echo 2 > /usr/local/zookeeper/zoo2/data/myid
echo 3 > /usr/local/zookeeper/zoo3/data/myid

Start the zkServer services, one per configuration file:

zkServer.sh start /usr/local/etc/zookeeper/zoo1.cfg
zkServer.sh start /usr/local/etc/zookeeper/zoo2.cfg
zkServer.sh start /usr/local/etc/zookeeper/zoo3.cfg
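To verify the pseudo-cluster (a small sketch; zkServer.sh accepts a configuration file as an extra argument, so each instance can be queried separately):

# One instance should report Mode: leader, the others Mode: follower
for cfg in zoo1 zoo2 zoo3; do
  zkServer.sh status /usr/local/etc/zookeeper/$cfg.cfg
done
# A client can reach any instance through its own clientPort
zkCli.sh -server localhost:2182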

Docker Cluster

Deploying with container technology is considerably simpler, and it also makes it possible to run multiple Zookeeper instances on a single machine. Here we use docker-compose, Docker's official orchestration tool; see the official site: https://hub.docker.com/

Docker environment

CentOS 7
docker-ce

Create a directory:

mkdir /usr/local/docker

Pull the official Zookeeper image:

docker pull zookeeper

Create a docker-compose.yml file in the /usr/local/docker directory, again taking three nodes as the example:

version: '3.1'

services:
  zoo1:
    image: zookeeper
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo2:
    image: zookeeper
    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo3:
    image: zookeeper
    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181

Start Zookeeper with Docker from the directory containing docker-compose.yml:

docker-compose up
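This runs in the foreground and streams the logs of all three containers; to run the cluster in the background instead:

docker-compose up -d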

Check the running state of the Docker containers:

$ docker-compose ps
   Name                  Command               State                     Ports                   
-------------------------------------------------------------------------------------------------
zookeeper_1   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
zookeeper_2   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2182->2181/tcp, 2888/tcp, 3888/tcp
zookeeper_3   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2183->2181/tcp, 2888/tcp, 3888/tcp
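Each node's role can also be checked from inside its container (a sketch assuming zkServer.sh is on the image's PATH, as it is in the official image; one node should report Mode: leader):

docker-compose exec zoo1 zkServer.sh status
docker-compose exec zoo2 zkServer.sh status
docker-compose exec zoo3 zkServer.sh status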

These are several approaches to building a Zookeeper cluster; corrections are welcome if anything is wrong.

Practical Applications

For using Zookeeper as a distributed configuration center and for distributed locks, see: http://starslight.gitee.io/zookeeper

