This post builds on the earlier Docker Swarm setup by adding overlay networking, so that Docker containers can reach each other across hosts.
There are roughly three ways for containers on different hosts to communicate:
- Port mapping: publish the container's service port on the host, and other hosts talk directly to the mapped port.
- Put containers on the host's network segment: change Docker's IP allocation range to match the host network, which also requires changes to the host's network configuration.
- Third-party projects: flannel, weave, pipework, and the like, which generally build an SDN-based overlay network for container-to-container traffic.
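As a quick illustration of the first approach, here is a hedged sketch; the image name, port numbers, and HOST_A_IP are placeholders, not part of the cluster built below:

```shell
# Approach 1 sketch: publish a container port on the host.
# On host A: map container port 80 to host port 8080 (nginx is only an example image).
docker run -d -p 8080:80 --name web nginx

# From host B: reach the service through host A's address and the published port.
# HOST_A_IP is a placeholder for host A's real address.
curl http://"$HOST_A_IP":8080/
```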
Before setting up overlay networking, install Docker and Docker Machine (on Linux):
$ sudo curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine
$ sudo chmod +x /usr/local/bin/docker-machine
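A quick sanity check that the binaries are installed and on PATH before continuing:

```shell
# Both commands should print a version string if the install succeeded.
docker --version
docker-machine version
```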
Save the following script as cluster.sh for a one-step setup (the Docker registry mirror can be replaced with your own):
#!/bin/bash
set -e

create_kv() {
    echo "Creating kvstore machine."
    docker-machine create -d virtualbox \
        --engine-opt="registry-mirror=https://kvo9moak.mirror.aliyuncs.com" \
        kvstore
    docker $(docker-machine config kvstore) run -d \
        -p "8500:8500" \
        progrium/consul --server -bootstrap-expect 1
}

create_master() {
    echo "Creating cluster master"
    kvip=$(docker-machine ip kvstore)
    docker-machine create -d virtualbox \
        --swarm --swarm-master \
        --swarm-discovery="consul://${kvip}:8500" \
        --engine-opt="cluster-store=consul://${kvip}:8500" \
        --engine-opt="cluster-advertise=eth1:2376" \
        --engine-opt="registry-mirror=https://kvo9moak.mirror.aliyuncs.com" \
        swarm-manager
}

create_nodes() {
    kvip=$(docker-machine ip kvstore)
    echo "Creating cluster nodes"
    for i in 1 2; do
        docker-machine create -d virtualbox \
            --swarm \
            --swarm-discovery="consul://${kvip}:8500" \
            --engine-opt="cluster-store=consul://${kvip}:8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            --engine-opt="registry-mirror=https://kvo9moak.mirror.aliyuncs.com" \
            swarm-node${i}
    done
}

teardown() {
    docker-machine rm -y kvstore
    docker-machine rm -y swarm-manager
    for i in 1 2; do
        docker-machine rm -y swarm-node${i}
    done
}

case $1 in
    up)
        create_kv
        create_master
        create_nodes
        ;;
    down)
        teardown
        ;;
    *)
        echo "Unknown command..."
        exit 1
        ;;
esac
Run ./cluster.sh up and four machines are created automatically:
- one kvstore machine running the consul service;
- one swarm master machine running the swarm manager service;
- two swarm node machines, each running the swarm node agent and a docker daemon.
Check the details of the four machines:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
kvstore - virtualbox Running tcp://192.168.99.100:2376 v18.03.1-ce
swarm-manager * (swarm) virtualbox Running tcp://192.168.99.101:2376 swarm-manager (master) v18.03.1-ce
swarm-node1 - virtualbox Running tcp://192.168.99.102:2376 swarm-manager v18.03.1-ce
swarm-node2 - virtualbox Running tcp://192.168.99.103:2376 swarm-manager v18.03.1-ce
Next, verify that the cluster is set up correctly by running the following on the local host (your own machine, not one of the Docker Machine VMs):
$ eval $(docker-machine env --swarm swarm-manager)
$ docker info
Containers: 6
Running: 6
Paused: 0
Stopped: 0
Images: 5
Server Version: swarm/1.2.8
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint, whitelist
Nodes: 3
swarm-manager: 192.168.99.101:2376
└ ID: K6WX:ZYFT:UEHA:KM66:BYHD:ROBF:Z5KG:UHNE:U37V:4KX2:S5SV:YSCA|192.168.99.101:2376
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: kernelversion=4.9.93-boot2docker, operatingsystem=Boot2Docker 18.03.1-ce (TCL 8.2.1); HEAD : cb77972 - Thu Apr 26 16:40:36 UTC 2018, ostype=linux, provider=virtualbox, storagedriver=aufs
└ UpdatedAt: 2018-05-08T10:20:39Z
└ ServerVersion: 18.03.1-ce
swarm-node1: 192.168.99.102:2376
└ ID: RPRC:AVBX:7CBJ:HUTI:HI3B:RI6B:QI6O:M2UQ:ZT2I:HZ6J:33BL:HY76|192.168.99.102:2376
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: kernelversion=4.9.93-boot2docker, operatingsystem=Boot2Docker 18.03.1-ce (TCL 8.2.1); HEAD : cb77972 - Thu Apr 26 16:40:36 UTC 2018, ostype=linux, provider=virtualbox, storagedriver=aufs
└ UpdatedAt: 2018-05-08T10:21:09Z
└ ServerVersion: 18.03.1-ce
swarm-node2: 192.168.99.103:2376
└ ID: MKQ2:Y7EO:CKOJ:MGFH:B77S:3EWX:7YJT:2MBQ:CJSN:XENJ:BSKO:RAZP|192.168.99.103:2376
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: kernelversion=4.9.93-boot2docker, operatingsystem=Boot2Docker 18.03.1-ce (TCL 8.2.1); HEAD : cb77972 - Thu Apr 26 16:40:36 UTC 2018, ostype=linux, provider=virtualbox, storagedriver=aufs
└ UpdatedAt: 2018-05-08T10:21:06Z
└ ServerVersion: 18.03.1-ce
Plugins:
Volume:
Network:
Log:
Swarm:
NodeID:
Is Manager: false
Node Address:
Kernel Version: 4.9.93-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 3.063GiB
Name: 85be09a37044
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Experimental: false
Live Restore Enabled: false
WARNING: No kernel memory limit support
This shows the details of the whole cluster.
Next, create an overlay network from the local host:
$ docker network create -d overlay net1
d6a8a22298485a044b19fcbb62033ff1b4c3d4bd6a8a2229848
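If you would rather not let IPAM pick the address range, docker network create also accepts an explicit subnet; a hedged variant (the network name and address values here are arbitrary examples, not part of this walkthrough):

```shell
# Create an overlay network with an explicit subnet and gateway
# instead of the default allocation (values are examples only).
docker network create -d overlay \
    --subnet 10.10.0.0/24 \
    --gateway 10.10.0.1 \
    net2
```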
Then list the networks to see the newly created overlay network named net1:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
d6a8a2229848 net1 overlay global
9c7f0e793838 swarm-manager/bridge bridge local
93787d9ba7ed swarm-manager/host host local
72fd1e63889e swarm-manager/none null local
c73e00c4c76c swarm-node1/bridge bridge local
95983d8f1ef1 swarm-node1/docker_gwbridge bridge local
a8a569d55cc9 swarm-node1/host host local
e7fa8403b226 swarm-node1/none null local
7f1d219b5c08 swarm-node2/bridge bridge local
e7463ae8c579 swarm-node2/docker_gwbridge bridge local
9a1f0d2bbdf5 swarm-node2/host host local
bea626348d6d swarm-node2/none null local
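Since the swarm view lists every network on every node, filtering by driver trims the output down to just the overlay networks:

```shell
# Show only networks using the overlay driver.
docker network ls --filter driver=overlay
```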
Next, create two containers on the net1 network (again from the local host, which Docker Swarm makes easy) to test whether they can reach each other:
$ docker run -d --net=net1 --name=c1 -e constraint:node==swarm-node1 busybox top
dab080b33e76af0e4c71c9365a6e57b2191b7aacd4f715ca11481403eccce12a
$ docker run -d --net=net1 --name=c2 -e constraint:node==swarm-node2 busybox top
583fde42182a7e8f27527d5c55163a32102dba566ebe1f13d1951ac214849c8d
Check that the newly created containers are running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
583fde42182a busybox "top" 3 seconds ago Up 2 seconds swarm-node2/c2
dab080b33e76 busybox "top" 18 seconds ago Up 18 seconds swarm-node1/c1
Next, inspect the details of the net1 network:
$ docker network inspect net1
[
{
"Name": "net1",
"Id": "d6a8a2229848d40ce446f8f850a0e713a6c88a9b43583cc463f437f306724f28",
"Created": "2018-05-08T09:21:42.408840755Z",
"Scope": "global",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"583fde42182a7e8f27527d5c55163a32102dba566ebe1f13d1951ac214849c8d": {
"Name": "c2",
"EndpointID": "b7fcb0039ab21ff06b36ef9ba008c324fabf24badfe45dfa6a30db6763716962",
"MacAddress": "",
"IPv4Address": "10.0.0.3/24",
"IPv6Address": ""
},
"dab080b33e76af0e4c71c9365a6e57b2191b7aacd4f715ca11481403eccce12a": {
"Name": "c1",
"EndpointID": "8a80a83230edfdd9921357f08130fa19ef0b917bc4426aa49cb8083af9edb7f6",
"MacAddress": "",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
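Instead of reading the full JSON, a Go template can pull out just the name-to-IP mapping; the template keys below match the Containers section shown above:

```shell
# Print "name ip" pairs for every container attached to net1.
docker network inspect \
    -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}' \
    net1
```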
As shown above, the two containers we just created (including their IP addresses) appear in the network's container list.
Now test whether the two containers can reach each other (pinging by container name works as well):
$ docker exec c1 ping -c 3 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.903 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.668 ms
64 bytes from 10.0.0.3: seq=2 ttl=64 time=0.754 ms
--- 10.0.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.668/0.775/0.903 ms
$ docker exec c2 ping -c 3 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.827 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.702 ms
64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.676 ms
--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.676/0.735/0.827 ms
$ docker exec c2 ping -c 3 c1
PING c1 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=1.358 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.663 ms
64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.761 ms
--- c1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.663/0.927/1.358 ms
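The manual pings above can be folded into a small loop that checks both directions by name; this is just a convenience wrapper around the same docker exec calls, assuming the c1/c2 containers from this walkthrough:

```shell
# Ping each container from the other by name and report the result.
# -c 1: one probe; -W 2: wait at most 2 seconds for a reply (busybox ping).
for pair in "c1 c2" "c2 c1"; do
    set -- $pair
    docker exec "$1" ping -c 1 -W 2 "$2" > /dev/null \
        && echo "ok: $1 can reach $2" \
        || echo "FAIL: $1 cannot reach $2"
done
```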
Finally, a Docker Swarm architecture diagram:
