Setting up a ZooKeeper cluster with Docker
Installing the docker-compose container orchestration tool
Introduction to Compose
Docker Compose is one of Docker's official orchestration projects; it is responsible for quickly deploying distributed applications onto a cluster.
Compose is an official open-source Docker project that provides fast orchestration of groups of Docker containers. It is positioned as a tool for "defining and running multi-container Docker applications", and its predecessor was the open-source project Fig.
A Dockerfile template makes it easy to define a single application container. In day-to-day work, however, you often need several containers working together to accomplish a task. To run a web project, for example, you usually need a backend database container in addition to the web service container itself, and often a load-balancer container as well.
Compose addresses exactly this need. It lets you define a group of related application containers as a single project through one docker-compose.yml template file (in YAML format).
Compose has two important concepts:
- Service: an application container; in practice a service can consist of several container instances running the same image.
- Project: a complete business unit made up of a group of associated application containers, defined in the docker-compose.yml file.
The project is Compose's default unit of management; its subcommands provide convenient life-cycle management for all containers in a project. A project can therefore consist of multiple services (containers), and Compose manages them at the project level.
Compose is written in Python and works by calling the API exposed by the Docker daemon to manage containers. As long as the target platform supports the Docker API, Compose can be used for orchestration on it.
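To make the two concepts concrete, here is a minimal, hypothetical docker-compose.yml sketch: the project consists of two services, a web container and a database container (the images, port, and password below are placeholders, not part of this article's setup).
version: '3'
services:
  web:                         # one service: the application container
    image: nginx:alpine
    ports:
      - 8080:80
  db:                          # a second service in the same project
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example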
Installation and removal
Compose can be installed with Python's package manager pip, downloaded as a pre-built binary, or even run directly inside a Docker container. The first two are the traditional approaches and suit installation in a local environment; the last one does not touch the host system at all and is better suited to cloud scenarios. Docker for Mac and Docker for Windows already ship with the docker-compose binary, so it is available as soon as Docker is installed. On Linux, install it using the methods described below.
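For the container-based option mentioned above, the Compose project also publishes a docker/compose image; the sketch below shows the general idea (the image tag and bind mounts are assumptions to adapt to your environment, and the Docker socket must be mounted so the containerized Compose can talk to the local daemon).
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD:$PWD" -w "$PWD" \
    docker/compose:1.17.1 up -d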
Installing from the binary package
Installation on Linux is also very simple: download the pre-built binary from the official GitHub Releases page.
For example, on a 64-bit Linux system, download the matching binary directly:
sudo curl -L https://github.com/docker/compose/releases/download/1.17.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
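After downloading, confirm that the binary works:
docker-compose --version    # should print something like: docker-compose version 1.17.1, build ...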
If Compose was installed from the binary package, uninstalling it is just a matter of deleting the binary:
sudo rm /usr/local/bin/docker-compose
Installing with pip
This method installs Compose from the pip repository as a regular Python application. Run the install command:
sudo pip install -U docker-compose
If Compose was installed with pip, uninstall it with:
sudo pip uninstall docker-compose
Installing the ZooKeeper cluster with docker-compose
Create the docker-compose.yml file
Create a docker-compose.yml file in the working directory /docker-compose/zookeeper and add the following content:
version: '3.4'
services:
  zoo1:
    image: zookeeper:3.4       # image name
    restart: always            # always restart the container automatically
    hostname: zoo1
    container_name: zoo1
    privileged: true
    ports:                     # port mapping
      - 2184:2181
    volumes:                   # mounted data volumes
      - ./zoo1/data:/data
      - ./zoo1/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 1             # node ID
      ZOO_PORT: 2181           # ZooKeeper port
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888  # list of ZooKeeper servers
    networks:
      mynetwork:
        ipv4_address: 172.18.0.4
  zoo2:
    image: zookeeper:3.4
    restart: always
    hostname: zoo2
    container_name: zoo2
    privileged: true
    ports:
      - 2182:2181
    volumes:
      - ./zoo2/data:/data
      - ./zoo2/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 2
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      mynetwork:
        ipv4_address: 172.18.0.5
  zoo3:
    image: zookeeper:3.4
    restart: always
    hostname: zoo3
    container_name: zoo3
    privileged: true
    ports:
      - 2183:2181
    volumes:
      - ./zoo3/data:/data
      - ./zoo3/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 3
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      mynetwork:
        ipv4_address: 172.18.0.6
networks:
  mynetwork:
    external:
      name: mynetwork
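The file mounts ./zooN/data and ./zooN/datalog from the host into each container. Docker creates these directories on demand, but if you prefer to create them up front (an optional convenience step, not required by this setup), you can do so in the working directory:
mkdir -p zoo1/data zoo1/datalog zoo2/data zoo2/datalog zoo3/data zoo3/datalog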
Create the custom network
docker network ls    # list the existing networks
docker network create --subnet=172.18.0.0/16 mynetwork    # create a network with the subnet 172.18.0.0/16
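You can check the result afterwards:
docker network inspect mynetwork    # the Subnet field should show 172.18.0.0/16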
Start the ZooKeeper cluster
docker-compose up -d
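To watch the three nodes start up and elect a leader, tail their logs (Ctrl+C stops following):
docker-compose logs -f zoo1 zoo2 zoo3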
About the docker-compose commands
build Build or rebuild services
bundle Generate a Docker bundle from the Compose file
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
images List images
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pull service images
push Push service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information
Check whether the cluster started successfully
docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
zoo1 /docker-entrypoint.sh zkSe ... Up 0.0.0.0:2184->2181/tcp, 2888/tcp, 3888/tcp
zoo2 /docker-entrypoint.sh zkSe ... Up 0.0.0.0:2182->2181/tcp, 2888/tcp, 3888/tcp
zoo3 /docker-entrypoint.sh zkSe ... Up 0.0.0.0:2183->2181/tcp, 2888/tcp, 3888/tcp
Check the cluster status
zoo1
$ docker exec -it zoo1 /bin/sh
/zookeeper-3.4.11 # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower // this node is a follower
zoo2
$ docker exec -it zoo2 /bin/sh
/zookeeper-3.4.11 # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader // this node is the leader
zoo3
$ docker exec -it zoo3 /bin/sh
/zookeeper-3.4.11 # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower // this node is another follower
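You can also query each node directly from the host through the mapped ports using ZooKeeper's four-letter-word commands (this assumes nc/netcat is installed on the host; the stat command is enabled by default in ZooKeeper 3.4):
echo stat | nc 127.0.0.1 2184 | grep Mode    # zoo1
echo stat | nc 127.0.0.1 2182 | grep Mode    # zoo2
echo stat | nc 127.0.0.1 2183 | grep Mode    # zoo3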
The ZooKeeper cluster is up and running!
Setting up a Kafka cluster with docker-compose
Create the docker-compose.yml file
Create a docker-compose.yml file in the working directory /docker-compose/kafka and add the following content:
version: '2'
services:
  broker1:
    image: wurstmeister/kafka
    restart: always
    hostname: broker1
    container_name: broker1
    privileged: true
    ports:
      - "9091:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAINTEXT://broker1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1:9092
      KAFKA_ADVERTISED_HOST_NAME: broker1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker1:/kafka/kafka-logs-broker1
    external_links:                  # link to containers created outside this compose file
      - zoo1
      - zoo2
      - zoo3
    networks:
      mynetwork:
        ipv4_address: 172.18.0.14
  broker2:
    image: wurstmeister/kafka
    restart: always
    hostname: broker2
    container_name: broker2
    privileged: true
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: PLAINTEXT://broker2:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2:9092
      KAFKA_ADVERTISED_HOST_NAME: broker2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9977
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker2:/kafka/kafka-logs-broker2
    external_links:                  # link to containers created outside this compose file
      - zoo1
      - zoo2
      - zoo3
    networks:
      mynetwork:
        ipv4_address: 172.18.0.15
  broker3:
    image: wurstmeister/kafka
    restart: always
    hostname: broker3
    container_name: broker3
    privileged: true
    ports:
      - "9093:9092"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_LISTENERS: PLAINTEXT://broker3:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker3:9092
      KAFKA_ADVERTISED_HOST_NAME: broker3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9999
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker3:/kafka/kafka-logs-broker3
    external_links:                  # link to containers created outside this compose file
      - zoo1
      - zoo2
      - zoo3
    networks:
      mynetwork:
        ipv4_address: 172.18.0.16
  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    restart: always
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
      - "9000:9000"
    links:                           # link to containers created by this compose file
      - broker1
      - broker2
      - broker3
    external_links:                  # link to containers created outside this compose file
      - zoo1
      - zoo2
      - zoo3
    environment:
      ZK_HOSTS: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      KAFKA_BROKERS: broker1:9092,broker2:9092,broker3:9092
      APPLICATION_SECRET: letmein
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    networks:
      mynetwork:
        ipv4_address: 172.18.0.10
networks:
  mynetwork:
    external:                        # use the already-created network
      name: mynetwork
The Kafka cluster reuses the network that was created for the ZooKeeper cluster.
Start the cluster
docker-compose up -d
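Because KAFKA_ZOOKEEPER_CONNECT points at the /kafka1 chroot, the brokers register their metadata under that ZooKeeper path. Before moving on to the verification below, you can confirm from one of the ZooKeeper containers that all three brokers registered (a sketch assuming zkCli.sh is on the PATH inside the official zookeeper image):
docker exec -it zoo1 zkCli.sh -server localhost:2181 ls /kafka1/brokers/ids
# the last line of the output should be: [1, 2, 3]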
Verify the cluster
docker exec -it broker1 bash
cd /opt/kafka_2.11-2.0.0/bin/
./kafka-topics.sh --create --zookeeper zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1 --replication-factor 1 --partitions 8 --topic test
./kafka-console-producer.sh --broker-list broker1:9092 --topic test
Note that the --zookeeper argument must include the /kafka1 chroot configured in KAFKA_ZOOKEEPER_CONNECT. Normally the commands above would be enough to verify the cluster, but in this setup they throw the exception below:
bash-4.4# kafka-topics.sh --create --zookeeper zoo1:2181 --replication-factor 1 --partitions 1 --topic mykafka
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 9977; nested exception is:
java.net.BindException: Address in use (Bind failed)
sun.management.AgentConfigurationError: java.rmi.server.ExportException: Port already in use: 9977; nested exception is:
java.net.BindException: Address in use (Bind failed)
at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:480)
at sun.management.Agent.startAgent(Agent.java:262)
at sun.management.Agent.startAgent(Agent.java:452)
Caused by: java.rmi.server.ExportException: Port already in use: 9977; nested exception is:
java.net.BindException: Address in use (Bind failed)
at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:346)
at sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:254)
at sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:411)
at sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
at sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:237)
at sun.rmi.registry.RegistryImpl.setup(RegistryImpl.java:213)
at sun.rmi.registry.RegistryImpl.<init>(RegistryImpl.java:173)
at sun.management.jmxremote.SingleEntryRegistry.<init>(SingleEntryRegistry.java:49)
at sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:816)
at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:468)
... 2 more
Caused by: java.net.BindException: Address in use (Bind failed)
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
at java.net.ServerSocket.bind(ServerSocket.java:375)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at java.net.ServerSocket.<init>(ServerSocket.java:128)
at sun.rmi.transport.proxy.RMIDirectSocketFactory.createServerSocket(RMIDirectSocketFactory.java:45)
at sun.rmi.transport.proxy.RMIMasterSocketFactory.createServerSocket(RMIMasterSocketFactory.java:345)
at sun.rmi.transport.tcp.TCPEndpoint.newServerSocket(TCPEndpoint.java:666)
at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:335)
... 11 more
Strange, isn't it? Why a JMX error?
After searching around for quite a while, I found people proposing the following fix:
Proposed fix:
- Add the environment variable JMX_PORT=<port> on every Kafka node.
- After adding it they could not connect because of networking issues, so they also exposed every JMX port and allowed it through the firewall, which solved the problem for them.
- KAFKA_ADVERTISED_HOST_NAME is best set to the host machine's IP so that code or tools outside the host can connect; the advertised port then needs to be the exposed port as well.
In my tests, however, this did not help.
Solution
unset JMX_PORT;bin/kafka-topics.sh --list --zookeeper zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
Prefix the command with unset JMX_PORT;. The wurstmeister/kafka image exports JMX_PORT for the broker process, and kafka-run-class.sh also applies that variable to every command-line tool, so the tool tries to bind the same JMX port the broker already occupies and fails with the BindException above. Unsetting the variable for the CLI invocation avoids the conflict.
Verified: this works.
For example:
bin/kafka-topics.sh --create --zookeeper zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1 --replication-factor 1 --partitions 1 --topic mykafka
Created topic mykafka.
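As a further check of the cluster (still inside the broker1 container, working from /opt/kafka_2.11-2.0.0), you can push a few messages through the new topic and read them back; the commands below are a sketch using the standard console clients, again with JMX_PORT unset:
unset JMX_PORT; bin/kafka-console-producer.sh --broker-list broker1:9092,broker2:9092,broker3:9092 --topic mykafka
# type a few messages, then press Ctrl+C (or Ctrl+D) to exit the producer
unset JMX_PORT; bin/kafka-console-consumer.sh --bootstrap-server broker1:9092,broker2:9092,broker3:9092 --topic mykafka --from-beginning
# the messages typed above should be printed back; press Ctrl+C to stop the consumer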
Verify the Kafka management UI
Open localhost:9000 in a browser and check whether the Kafka Manager page appears.
[Kafka Manager screenshot]
This screenshot shows Kafka Manager after I have already added the Kafka cluster; if no cluster has been added yet, the page will be empty.

