In practice you often need multiple service containers (an LNMP stack, for example) to cooperate, which usually requires the containers to be able to reach each other's services.
Containers can be interconnected in the following two ways:
1. Port mapping for container access
Use the -p flag to specify a port mapping.
With the -P flag (uppercase), Docker maps a random host port to a network port exposed inside the container.
2. The linking mechanism for direct access
Use the --link flag to link containers to one another.
Port mapping for container access
1. -P (uppercase): random port mapping; by default the mapping binds to all addresses (0.0.0.0)
[root@server01 ~]# docker run -d -P --name mynginx --rm nginx
5c7f907176712c9c175f055ef375ed5bd140d8700a823131b3c15c35b1346448
[root@server01 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c7f90717671 nginx "nginx -g 'daemon of…" 8 seconds ago Up 4 seconds 0.0.0.0:32768->80/tcp mynginx
2. -p: map to a specified port on a specified address
[root@server01 ~]# docker run -d -p 192.168.1.10:80:80 --name mynginx nginx
0fde3f18a5d70ff8a6c34754e0cbe971a6290fe28b433da0d040b1e8759c9e3c
[root@server01 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0fde3f18a5d7 nginx "nginx -g 'daemon of…" 4 seconds ago Up 2 seconds 192.168.1.10:80->80/tcp mynginx
3. Check the port mapping
[root@server01 ~]# docker port mynginx
80/tcp -> 192.168.1.10:80
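For reference, -p accepts several forms (standard docker run syntax); the image names below are just placeholders, and the flag can be repeated to map multiple ports:
docker run -d -p 80:80 nginx                 # hostPort:containerPort, binds to 0.0.0.0
docker run -d -p 127.0.0.1:8080:80 nginx     # ip:hostPort:containerPort
docker run -d -p 127.0.0.1::80 nginx         # ip::containerPort, random host port
docker run -d -p 53:53/udp dns-image         # append /udp for UDP ports (dns-image is hypothetical)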
The linking mechanism for direct access
1. Create a database container
[root@server01 ~]# docker run -d --name db -e MYSQL_ROOT_PASSWORD=123456 mysql
83cf1e76e7087e10aa7f7abf1c1854a054669fdf1440e4c022a1223c25f5326a
[root@server01 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83cf1e76e708 mysql "docker-entrypoint.s…" 2 seconds ago Up 2 seconds 3306/tcp, 33060/tcp db
2. Start a web container and link it to the db container
vim Dockerfile
FROM centos:7
RUN yum -y install epel-release && yum -y install nginx
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
docker build -t myweb:v1 .
--link name:alias
where name is the name of the container to link to, and alias is an alias for the link.
[root@server01 ~]# docker run -d --name web --link db:db myweb:v1
2fd412677f62e023581ac7961f43b2cfb4c89c68461b8ce35fe36ff2afd054ff
[root@server01 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2fd412677f62 nginx "nginx -g 'daemon of…" 5 seconds ago Up 5 seconds 80/tcp web
Docker in effect creates a private channel between the two linked containers, with no need to map their ports onto the host, which avoids exposing the database port to the external network.
3. Docker exposes the connection information to the linked container in two ways:
3.1 It updates the /etc/hosts file
[root@server01 ~]# docker exec -it web /bin/bash
root@2fd412677f62:/# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 db 83cf1e76e708
172.17.0.3 2fd412677f62
3.2 It sets environment variables
The environment variables prefixed with DB_ are what the web container uses to connect to the db container.
[root@server01 ~]# docker exec -it web /bin/bash
root@2fd412677f62:/# env
DB_PORT_33060_TCP_ADDR=172.17.0.2
DB_PORT_3306_TCP_PORT=3306
HOSTNAME=2fd412677f62
PWD=/
DB_PORT_33060_TCP_PORT=33060
DB_PORT_3306_TCP_ADDR=172.17.0.2
DB_PORT=tcp://172.17.0.2:3306
DB_PORT_3306_TCP_PROTO=tcp
PKG_RELEASE=1~buster
HOME=/root
DB_PORT_33060_TCP=tcp://172.17.0.2:33060
DB_ENV_MYSQL_MAJOR=8.0
DB_ENV_MYSQL_VERSION=8.0.19-1debian10
DB_PORT_33060_TCP_PROTO=tcp
NJS_VERSION=0.3.9
TERM=xterm
SHLVL=1
DB_ENV_GOSU_VERSION=1.7
DB_PORT_3306_TCP=tcp://172.17.0.2:3306
DB_NAME=/web/db
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.17.9
_=/usr/bin/env
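As a minimal sketch of how these variables might be consumed, a startup script inside the web container could read the injected DB_* values instead of hard-coding the database address (the script itself is hypothetical, not part of the images above):
# connect-db.sh (hypothetical helper run inside the web container)
DB_HOST="${DB_PORT_3306_TCP_ADDR:?db link missing}"   # IP injected by --link
DB_PORT="${DB_PORT_3306_TCP_PORT:-3306}"              # port injected by --link
echo "connecting to ${DB_HOST}:${DB_PORT}"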
3.3 Log into the web container and verify the link to the db container
[root@76d52e07299f /]# telnet 172.17.0.2 3306
Trying 172.17.0.2...
Connected to 172.17.0.2.
Escape character is '^]'.
J
2ÿEYa_]d0%Xcaching_sha2_password
(The garbled bytes are the raw MySQL handshake; the TCP connection itself succeeded.)
Understanding how Docker networking starts up:
1. When the Docker service starts, it automatically creates a virtual bridge named docker0 on the host.
The bridge can be thought of as a software switch that forwards packets between the interfaces attached to it.
[root@server01 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
        inet6 fe80::42:ffff:fe1b:98ca prefixlen 64 scopeid 0x20<link>
        ether 02:42:ff:1b:98:ca txqueuelen 0 (Ethernet)
        RX packets 11564 bytes 639940 (624.9 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 11818 bytes 31513043 (30.0 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
2. Docker assigns a locally unused private address block to the docker0 interface (here 172.17.0.0/16, netmask 255.255.0.0).
The NIC inside each started container is likewise automatically assigned an address from this subnet.
[root@server01 ~]# docker inspect -f {{.NetworkSettings.IPAddress}} db
172.17.0.2
[root@server01 ~]# docker inspect -f {{.NetworkSettings.IPAddress}} web
172.17.0.3
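To see the whole default bridge network at once (standard Docker CLI), you can also run:
docker network inspect bridge
which lists the subnet, the gateway, and the IP of every attached container.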
3. When a Docker container is created, a pair of linked veth (virtual Ethernet) interfaces is created along with it.
One end of the pair sits inside the container as eth0; the other end stays on the host, attached to the docker0 bridge, with a name beginning with veth.
Network information inside the web container (net-tools is installed first so that ifconfig is available):
[root@76d52e07299f /]# yum -y install net-tools
[root@76d52e07299f /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.17.0.3 netmask 255.255.0.0 broadcast 172.17.255.255
        ether 02:42:ac:11:00:03 txqueuelen 0 (Ethernet)
        RX packets 219 bytes 396248 (386.9 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 286 bytes 19282 (18.8 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        loop txqueuelen 1000 (Local Loopback)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Network information on the host:
veth62aa061: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::7012:91ff:feee:9980 prefixlen 64 scopeid 0x20<link>
        ether 72:12:91:ee:99:80 txqueuelen 0 (Ethernet)
        RX packets 286 bytes 19282 (18.8 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 219 bytes 396248 (386.9 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
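If you need to confirm which host-side veth pairs with a given container, one common trick is to compare interface indexes (a sketch; the number in the grep pattern is an example value taken from the first command's output):
# inside the container: print the peer interface's index
cat /sys/class/net/eth0/iflink
# on the host: find the interface whose index starts the line
ip -o link | grep '^121:'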
In this way the host can communicate with the containers, and the containers can communicate with each other.
Manually configuring a Docker network
1. Start a container, box1, with no network attached
[root@server01 src]# docker run -it --privileged=true --net=none --name box1 busybox:latest /bin/sh
/ #
2. On the host, find the container's process ID and create a network namespace entry for it:
[root@server01 src]# docker inspect -f {{.State.Pid}} box1
111828
pid=$(docker inspect -f {{.State.Pid}} box1)
echo $pid
mkdir -p /var/run/netns/
ln -s /proc/$pid/ns/net /var/run/netns/$pid
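If the symlink is in place, the namespace should now be visible to the ip tool:
ip netns list    # should print the PID used above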
3. Check the IP address and netmask of the bridge interface
[root@server01 src]# ip addr show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ff:1b:98:ca brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:ffff:fe1b:98ca/64 scope link
valid_lft forever preferred_lft forever
4. Create a veth pair of interfaces, A and B; attach A to the docker0 bridge and bring it up:
ip addr show docker0
ip link add A type veth peer name B
yum install bridge-utils
brctl addif docker0 A
ip link set A up
ifconfig
5. Move interface B into the container's network namespace, rename it eth0, bring it up, and configure an available IP on the bridge subnet plus a default gateway:
[root@server01 src]# ip link set B netns $pid
[root@server01 src]# ip netns exec $pid ip link set dev B name eth0
[root@server01 src]# ip netns exec $pid ip link set eth0 up
[root@server01 src]# ip netns exec $pid ip addr add 172.17.0.110/16 dev eth0
[root@server01 src]# ip netns exec $pid ip route add default via 172.17.0.1
6. Verify from inside box1 (in the shell opened in step 1)
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
120: eth0@if121: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    link/ether 56:b3:c4:48:3a:ea brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.110/16 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping baidu.com
PING baidu.com (39.156.69.79): 56 data bytes
64 bytes from 39.156.69.79: seq=0 ttl=48 time=37.637 ms
64 bytes from 39.156.69.79: seq=1 ttl=48 time=37.520 ms
Using an Open vSwitch bridge
By default Docker interconnects containers through the Linux-native docker0 bridge; it can be replaced with the more capable Open vSwitch virtual switch.
Disable SELinux:
setenforce 0
Install the build dependencies:
[root@server01 ~]# yum -y install rpm-build openssl-devel
1. Install Open vSwitch
1.1 Fetch the source tarball
wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz
1.2 Add an ovs user
[root@server01 src]# useradd ovs
useradd: user 'ovs' already exists
[root@server01 src]# id ovs
uid=1001(ovs) gid=1001(ovs) groups=1001(ovs)
1.3 Switch to that user and prepare the build tree
su - ovs
mkdir -p rpmbuild/SOURCES
cp /usr/local/src/openvswitch-2.3.1.tar.gz rpmbuild/SOURCES/
cd rpmbuild/SOURCES/
tar zxvf openvswitch-2.3.1.tar.gz
[ovs@server01 SOURCES]$ pwd
/home/ovs/rpmbuild/SOURCES
1.4 Patch the spec file to drop the kernel-module dependency
[ovs@server01 SOURCES]$ sed 's/openvswitch-kmod, //g' openvswitch-2.3.1/rhel/openvswitch.spec > openvswitch-2.3.1/rhel/openvswitch_no_kmod.spec
1.5 Build the RPM
[ovs@server01 SOURCES]$ rpmbuild -bb --nocheck openvswitch-2.3.1/rhel/openvswitch_no_kmod.spec
The RPM was built successfully when output like the following appears:
+ umask 022
+ cd /home/ovs/rpmbuild/BUILD
+ cd openvswitch-2.3.1
+ rm -rf /home/ovs/rpmbuild/BUILDROOT/openvswitch-2.3.1-1.x86_64
+ exit 0
1.6 Install the RPM
yum localinstall /home/ovs/rpmbuild/RPMS/x86_64/openvswitch-2.3.1-1.x86_64.rpm
1.7 Check that the command-line tools are ready (ovs-vsctl ships with the package and prints its version when the install succeeded)
ovs-vsctl --version
1.8 Start the service
[root@server01 x86_64]# /etc/init.d/openvswitch start
Starting openvswitch (via systemctl):                      [  OK  ]
[root@server01 x86_64]# /etc/init.d/openvswitch status
ovsdb-server is running with pid 87587
ovs-vswitchd is running with pid 87197
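With both daemons running, ovs-vsctl should be able to talk to the database; on a fresh install it prints an empty configuration:
ovs-vsctl show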
2. Connect containers to the Open vSwitch bridge
2.1 Create bridge br0 with OVS and start two Docker containers (box1, box2) with no network attached
--net=none means no networking is set up when the container starts.
--privileged=true lets the container acquire certain extended privileges.
[root@server01 ~]# ovs-vsctl add-br br0
[root@server01 ~]# ip link set br0 up
[root@server01 src]# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::5057:5bff:fef7:9c48 prefixlen 64 scopeid 0x20<link>
        ether 52:57:5b:f7:9c:48 txqueuelen 1000 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 7 bytes 586 (586.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@server01 ~]# docker run -it --privileged=true --net=none --name box1 busybox:latest /bin/sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
0669b0daf1fb: Pull complete
Digest: sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135
Status: Downloaded newer image for busybox:latest
/ #
[root@server01 src]# docker run -it --privileged=true --net=none --name box2 busybox:latest /bin/sh
/ #
2.2 Manually add networking for the containers
Download ovs-docker, the helper script the Open vSwitch project provides for Docker containers, and make it executable:
wget https://github.com/openvswitch/ovs/raw/master/utilities/ovs-docker
chmod +x ovs-docker
2.3 Attach the containers to bridge br0 and set their IP addresses and VLAN
[root@server01 src]# ./ovs-docker add-port br0 eth0 box1 --ipaddress=10.0.0.2/24 --gateway=10.0.0.1
[root@server01 src]# ./ovs-docker set-vlan br0 eth0 box1 5
[root@server01 src]# ./ovs-docker add-port br0 eth0 box2 --ipaddress=10.0.0.3/24 --gateway=10.0.0.1
[root@server01 src]# ./ovs-docker set-vlan br0 eth0 box2 5
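The same helper script can undo these attachments if needed, e.g.:
./ovs-docker del-port br0 eth0 box1    # detach one port
./ovs-docker del-ports br0 box1        # detach all of a container's ports on br0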
Once the ports are added, checking the network information inside a container shows the new eth0 interface (box2 shown here):
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
108: eth0@if109: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    link/ether 3e:b7:2d:6a:b7:44 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 scope global eth0
       valid_lft forever preferred_lft forever
Now test connectivity between the two containers by pinging each from the other.
From the box2 shell, ping box1 (10.0.0.2):
/ # ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.949 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.109 ms
2.4 Assign the gateway address to br0
[root@server01 src]# ip addr add 10.0.0.1/24 dev br0
Now, from inside box2, the gateway (10.0.0.1), box1 (10.0.0.2), and the host (192.168.1.10) are all reachable:
/ # ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: seq=0 ttl=64 time=1.136 ms
64 bytes from 10.0.0.1: seq=1 ttl=64 time=0.752 ms
^C
--- 10.0.0.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.752/0.944/1.136 ms
/ # ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.083 ms
/ # ping 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
64 bytes from 192.168.1.10: seq=0 ttl=64 time=0.864 ms
64 bytes from 192.168.1.10: seq=1 ttl=64 time=0.111 ms
^C
--- 192.168.1.10 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.111/0.487/0.864 ms
Outstanding issue: the containers cannot reach the external network!
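A likely cause is that nothing routes or NATs the 10.0.0.0/24 subnet to the outside. A sketch of one fix on the host, assuming the addresses used above (adjust the subnet and bridge names to your setup):
sysctl -w net.ipv4.ip_forward=1                                        # let the host forward packets
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -o br0 -j MASQUERADE   # NAT container traffic leaving via other interfaces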
Building a cross-host container network
Set up tunnels between hosts that can already reach each other, so that containers can communicate across hosts.
Cross-host networking requires either Docker's built-in overlay network or a third-party network plugin.
To use Docker's native overlay network, either of the following conditions must hold:
1. Docker is running in Swarm mode;
2. The Docker hosts form a cluster backed by a key-value store.
This deployment uses a key-value-store-backed cluster of Docker hosts, which requires:
1. Every host in the cluster can connect to the key-value store; Docker supports Consul, Etcd, and ZooKeeper;
2. Every host in the cluster runs a Docker daemon;
3. Every host in the cluster has a unique hostname, because the key-value store identifies cluster members by hostname;
4. Every Linux host in the cluster runs kernel 3.12+, which supports VXLAN packet handling; otherwise communication may fail (see the quick check after this list);
5. Docker supports multi-host container networking through the overlay network driver.
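A quick way to run the kernel check from item 4 on each host:
uname -r                    # expect 3.12 or newer
modinfo vxlan | head -n 3   # confirms the VXLAN module is available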
Change the hostnames of the two test hosts, then log out and back in for the change to take effect:
[root@overlay01 ~]# hostnamectl set-hostname overlay01
[root@overlay02 ~]# hostnamectl set-hostname overlay02
1. Set up the network-information store
Building the consul service
Cross-host Docker communication needs a key-value service to store the network information. Many services would work, e.g. consul, etcd, or zookeeper; this walkthrough uses the officially recommended consul service as the key-value store.
docker run -d --restart="always" --publish="8500:8500" --hostname="consul" --name="consul" index.alauda.cn/sequenceiq/consul:v0.5.0-v6 -server -bootstrap
Pulling images from overseas registries is slow, so the image is pulled from the domestic Alauda Cloud registry instead.
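Before moving on, it's worth confirming that consul answers on its HTTP API (assuming it listens on 192.168.1.7 as configured below):
curl http://192.168.1.7:8500/v1/status/leader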
2. Configure the Docker daemon startup options on each host (both 192.168.1.7 and 192.168.1.8 need this)
1. --cluster-store=consul://192.168.1.7:8500   # the internal IP address plus the consul port
2. --cluster-advertise=192.168.1.7:2376   # the address and port this daemon advertises to the rest of the cluster (use each host's own IP)
3. -H unix:///var/run/docker.sock   # without this line, local docker client commands stop working!
Configuration on 192.168.1.7:
/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.1.7:8500 --cluster-advertise=192.168.1.7:2376
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
Configuration on 192.168.1.8:
/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.1.7:8500 --cluster-advertise=192.168.1.8:2376
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
Restart the Docker service:
systemctl daemon-reload
systemctl restart docker
Verify:
[root@overlay02 ~]# ps aux|grep docker
root 6830 2.5 7.4 455508 74360 ? Ssl 14:22 0:00 /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --cluster-store=consul://192.168.1.7:8500 --cluster-advertise=192.168.1.8:2376
root 6985 0.0 0.0 112728 976 pts/0 S+ 14:22 0:00 grep --color=auto docker
3. Create the overlay network
Create a network whose driver type is overlay:
[root@overlay01 ~]# docker network create -d overlay wg
00651f61204bc1e464c8722b96df34c586d76cccbf716c6129be9f87440eb629
[root@overlay01 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
21405694cf9d bridge bridge local
718808af2e24 host host local
239cb40d03a6 none null local
00651f61204b wg overlay global
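Because the network definition is stored in consul rather than on a single host, the same network should also be visible from the other host:
docker network ls    # run on overlay02; wg appears with the same ID and scope global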
4. Test the network
Start one container on each host: box1 on 192.168.1.7 and box2 on 192.168.1.8.
[root@overlay01 ~]# docker run -it --net=wg --name box1 busybox:latest /bin/sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
0669b0daf1fb: Pull complete
Digest: sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135
Status: Downloaded newer image for busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
32: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
35: eth1@if36: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
/ # ping baidu.com
PING baidu.com (39.156.69.79): 56 data bytes
64 bytes from 39.156.69.79: seq=0 ttl=48 time=37.749 ms
10.0.0.3 is the IP address of container box2:
/ # ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=1.704 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.667 ms
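On user-defined networks, Docker's embedded DNS server (Docker 1.10+) normally resolves container names on the same network as well, so from box1 this should also work:
ping box2    # resolved by Docker's embedded DNS, no IP needed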
===============================
consul also ships with a web UI once installed; it's worth exploring if you're interested!