Installing a Zookeeper Cluster with Docker


Deploy the JDK environment first

  • This time we deploy it manually: start by uploading the JDK tarball

    [root@iz8vb6evwfagx3tyjx4fl8z soft]# ll
    total 189496
    -rw-r--r-- 1 root root 194042837 Apr  8 14:11 jdk-8u202-linux-x64.tar.gz
    
  • Extract it to the target directory

    mkdir -p /opt/test/java
    tar -zxvf jdk-8u202-linux-x64.tar.gz -C /opt/test/java 
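
  • Optionally verify that the archive unpacked where expected — the /etc/profile entries below assume this path:

    ls /opt/test/java   # should list jdk1.8.0_202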
    
  • Edit /etc/profile with vim to add the JDK environment variables

    JAVA_HOME=/opt/test/java/jdk1.8.0_202
    CLASSPATH=$JAVA_HOME/lib/
    PATH=$PATH:$JAVA_HOME/bin
    export PATH JAVA_HOME CLASSPATH
    
  • Run source /etc/profile to make the configuration take effect

  • Check the JDK version

    [root@iz8vb6evwfagx3tyjx4fl8z soft]# java -version
    java version "1.8.0_202"
    Java(TM) SE Runtime Environment (build 1.8.0_202-b08)
    Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode)
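
  • You can also confirm the exported variables directly (a quick sanity check):

    echo $JAVA_HOME      # should print /opt/test/java/jdk1.8.0_202
    command -v java      # resolves under $JAVA_HOME/bin unless another java comes first on PATH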
    

Building the Zookeeper Cluster

  • First create a folder for each node

    cd /opt/test/
    mkdir -p cluster/node01 cluster/node02 cluster/node03
    
  • Set the host machine's IP

    machine_ip=121.89.209.190
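
  • If you'd rather not hardcode the address, one common way to look it up is sketched below (hostname -I prints all addresses assigned to the host; the first is usually the primary one):

    machine_ip=$(hostname -I | awk '{print $1}')
    echo $machine_ip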
    
  • Run node 1

    docker run -d -p 2181:2181 -p 2887:2888 -p 3887:3888 --name zookeeper_node01 --restart always \
    -v $PWD/cluster/node01/volume/data:/data \
    -v $PWD/cluster/node01/volume/datalog:/datalog \
    -e "TZ=Asia/Shanghai" \
    -e "ZOO_MY_ID=1" \
    -e "ZOO_SERVERS=server.1=0.0.0.0:2888:3888 server.2=$machine_ip:2888:3888 server.3=$machine_ip:2889:3889" \
    zookeeper:3.4.13
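
  • You can confirm the mappings for the node you just started (host ports 2887/3887 forward to 2888/3888 inside the container):

    docker port zookeeper_node01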
    
  • Run node 2

    docker run -d -p 2182:2181 -p 2888:2888 -p 3888:3888 --name zookeeper_node02 --restart always \
    -v $PWD/cluster/node02/volume/data:/data \
    -v $PWD/cluster/node02/volume/datalog:/datalog \
    -e "TZ=Asia/Shanghai" \
    -e "ZOO_MY_ID=2" \
    -e "ZOO_SERVERS=server.1=$machine_ip:2887:3887 server.2=0.0.0.0:2888:3888 server.3=$machine_ip:2889:3889" \
    zookeeper:3.4.13
    
  • Run node 3

    docker run -d -p 2183:2181 -p 2889:2888 -p 3889:3888 --name zookeeper_node03 --restart always \
    -v $PWD/cluster/node03/volume/data:/data \
    -v $PWD/cluster/node03/volume/datalog:/datalog \
    -e "TZ=Asia/Shanghai" \
    -e "ZOO_MY_ID=3" \
    -e "ZOO_SERVERS=server.1=$machine_ip:2887:3887 server.2=$machine_ip:2888:3888 server.3=0.0.0.0:2888:3888" \
    zookeeper:3.4.13

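  • At this point all three containers should be up:

    docker ps --filter "name=zookeeper"
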
  • Check the Docker container logs

    docker logs -f <container ID>
    
    • You will then see connection errors in the logs

      java.net.ConnectException: Connection refused (Connection refused)
      	at java.net.PlainSocketImpl.socketConnect(Native Method)
      	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
      	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
      	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
      	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
      	at java.net.Socket.connect(Socket.java:589)
      	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
      	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:534)
      	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:454)
      	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:435)
      	at java.lang.Thread.run(Thread.java:748)
      2020-04-08 16:00:44,614 [myid:1] - WARN  [RecvWorker:1:QuorumCnxManager$RecvWorker@1025] - Connection broken for id 1, my id = 1, error = 
      java.io.EOFException
      
    • The reason: under Docker's default bridge network mode, the nodes have to reach each other through the host IP plus mapped ports, and node01 cannot reach node02 and node03 that way

  • Find each container's IP

    docker inspect <container ID>
    
    • node01: 172.17.0.2
    • node02: 172.17.0.3
    • node03: 172.17.0.4
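    • A Go-template filter can also print just the IP in one line:

      docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' zookeeper_node01
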
  • Now we know each container has its own IP, but that raises another problem: the IPs are assigned dynamically, so we cannot know them before startup. The solution is to create our own bridge network and assign each container a fixed IP at creation time.

  • So all of the above has to be torn down and redone...

  • Stop and remove all containers

    docker stop $(docker ps -a -q)
    docker rm $(docker ps -a -q)
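
  • For a truly clean slate, the data directories created earlier can go too (destructive; these are the paths used in this walkthrough):

    rm -rf /opt/test/cluster/node01 /opt/test/cluster/node02 /opt/test/cluster/node03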
    

[Starting over]

  • Create a custom bridge network

    docker network create --driver bridge --subnet=172.18.0.0/16 --gateway=172.18.0.1 zoonet
    
  • List the Docker networks

    [root@iz8vb6evwfagx3tyjx4fl8z ~]# docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    a121ed854d1c        bridge              bridge              local
    ab9083cbac8a        host                host                local
    4d3012b89f70        none                null                local
    26b8cbf5b4c9        zoonet              bridge              local
    
  • Inspect the bridge network

    docker network inspect 26b8cbf5b4c9
    
    • Query result

      [
          {
              "Name": "zoonet",
              "Id": "26b8cbf5b4c9d086b81edc22f4627de5ef71a8745374554b440d394ad40858f4",
              "Created": "2020-04-08T16:25:00.982635799+08:00",
              "Scope": "local",
              "Driver": "bridge",
              "EnableIPv6": false,
              "IPAM": {
                  "Driver": "default",
                  "Options": {},
                  "Config": [
                      {
                          "Subnet": "172.18.0.0/16",
                          "Gateway": "172.18.0.1"
                      }
                  ]
              },
              "Internal": false,
              "Attachable": false,
              "Ingress": false,
              "ConfigFrom": {
                  "Network": ""
              },
              "ConfigOnly": false,
              "Containers": {},
              "Options": {},
              "Labels": {}
          }
      ]
      
      
  • Revise the Zookeeper container creation commands

    • Run node 1

      docker run -d -p 2181:2181 --name zookeeper_node01 --privileged --restart always --network zoonet --ip 172.18.0.2 \
      -v /opt/test/cluster/node01/volume/data:/data \
      -v /opt/test/cluster/node01/volume/datalog:/datalog \
      -v /opt/test/cluster/node01/volume/logs:/logs \
      -e ZOO_MY_ID=1 \
      -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888" 4ebfb9474e72 # (this is the Zookeeper image ID, not an IP)
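
    • Note: 4ebfb9474e72 is the local image ID; the zookeeper:3.4.13 tag used earlier works just as well. Look the ID up with:

      docker images zookeeper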
      
    • Run node 2

      docker run -d -p 2182:2181 --name zookeeper_node02 --privileged --restart always --network zoonet --ip 172.18.0.3 \
      -v /opt/test/cluster/node02/volume/data:/data \
      -v /opt/test/cluster/node02/volume/datalog:/datalog \
      -v /opt/test/cluster/node02/volume/logs:/logs \
      -e ZOO_MY_ID=2 \
      -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888" 4ebfb9474e72
      
      
    • Run node 3

      docker run -d -p 2183:2181 --name zookeeper_node03 --privileged --restart always --network zoonet --ip 172.18.0.4 \
      -v /opt/test/cluster/node03/volume/data:/data \
      -v /opt/test/cluster/node03/volume/datalog:/datalog \
      -v /opt/test/cluster/node03/volume/logs:/logs \
      -e ZOO_MY_ID=3 \
      -e "ZOO_SERVERS=server.1=172.18.0.2:2888:3888 server.2=172.18.0.3:2888:3888 server.3=172.18.0.4:2888:3888" 4ebfb9474e72
      
    • Check the running containers

      [root@iz8vb6evwfagx3tyjx4fl8z ~]# docker ps
      CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                                        NAMES
      82753d13ac44        4ebfb9474e72        "/docker-entrypoint.…"   21 seconds ago       Up 21 seconds       2888/tcp, 3888/tcp, 0.0.0.0:2183->2181/tcp   zookeeper_node03
      eee56297eb96        4ebfb9474e72        "/docker-entrypoint.…"   42 seconds ago       Up 41 seconds       2888/tcp, 3888/tcp, 0.0.0.0:2182->2181/tcp   zookeeper_node02
      ee8a9710fa3e        4ebfb9474e72        "/docker-entrypoint.…"   About a minute ago   Up About a minute   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp   zookeeper_node01
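
    • Re-inspecting the custom network should now show all three containers attached at their fixed IPs (the "Containers" map that was empty above is now populated):

      docker network inspect zoonet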
      
      
    • Now check the container logs again

      docker logs -f <container ID>
      
      • No errors this time
    • Now let's step into each container and verify its status

      # node01
      [root@iz8vb6evwfagx3tyjx4fl8z ~]# docker exec -it ee8a9710fa3e bash
      bash-4.4# zkServer.sh status
      ZooKeeper JMX enabled by default
      Using config: /conf/zoo.cfg
      Mode: follower
      
      # node02
      [root@iz8vb6evwfagx3tyjx4fl8z ~]# docker exec -it eee56297eb96  bash
      bash-4.4# zkServer.sh status
      ZooKeeper JMX enabled by default
      Using config: /conf/zoo.cfg
      Mode: leader
      
      # node03
      [root@iz8vb6evwfagx3tyjx4fl8z ~]# docker exec -it 82753d13ac44  bash
      bash-4.4# zkServer.sh status
      ZooKeeper JMX enabled by default
      Using config: /conf/zoo.cfg
      Mode: follower
      
      
      • All nodes are in good shape; the cluster setup is complete.
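
    • As a final smoke test (a sketch; the znode name /demo is arbitrary), write a value on one node and read it back through another — it should replicate across the ensemble:

      # create a znode via node01's client port
      docker exec -it zookeeper_node01 zkCli.sh -server localhost:2181 create /demo hello
      # read it back through node03
      docker exec -it zookeeper_node03 zkCli.sh -server localhost:2181 get /demo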

