Prerequisite: the Docker service is already deployed.
The planned deployment:
10.0.0.134 Consul service
10.0.0.135 host1, hostname mcw5
10.0.0.136 host2, hostname mcw6
host1 and host2 store network state information (Network, Endpoint, IP, and so on) in Consul, a key-value database; this shared state is what enables communication between containers on different hosts. Etcd or ZooKeeper can be used as the store instead.
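The store's role is easy to picture as a plain key-value mapping from network and endpoint identifiers to their state, which every daemon reads and writes. A toy sketch of that idea in shell (the key layout here is illustrative, not libnetwork's real schema):

```shell
#!/usr/bin/env bash
# Toy key-value view of what a cluster store holds (illustrative keys only).
declare -A kv
kv["docker/network/mcw_ov2/subnet"]="10.0.0.0/24"
kv["docker/network/mcw_ov2/endpoint/mcwbbox1"]="10.0.0.2"
# Every daemon reading the same store sees the same mapping:
for k in "${!kv[@]}"; do echo "$k = ${kv[$k]}"; done | sort
```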
Pull the image and run the container
[root@mcw4 ~]$ docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap
Unable to find image 'progrium/consul:latest' locally
latest: Pulling from progrium/consul
Image docker.io/progrium/consul:latest uses outdated schema1 manifest format. Please upgrade to a schema2 image for better future compatibility. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
c862d82a67a2: Already exists
0e7f3c08384e: Already exists
0e221e32327a: Already exists
09a952464e47: Already exists
60a1b927414d: Already exists
4c9f46b5ccce: Already exists
417d86672aa4: Already exists
b0d47ad24447: Pull complete
fd5300bd53f0: Pull complete
a3ed95caeb02: Pull complete
d023b445076e: Pull complete
ba8851f89e33: Pull complete
5d1cefca2a28: Pull complete
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
3fc6e630abc28f4add16eb8448fa069dfd52eab0c2b1b6a4f0f5cbb6311f4f9f
[root@mcw4 ~]$ cat /etc/docker/daemon.json  # pulls kept stalling, so the last two registry mirrors were added; after restarting Docker the pull succeeded quickly
{"registry-mirrors":["https://reg-mirror.qiniu.com/","https://docker.mirrors.ustc.edu.cn/","https://hub-mirror.c.163.com/"]}
Access Consul from a browser
URL: http://10.0.0.134:8500/
Modify the Docker daemon configuration on host1 and host2
The final configuration:
On the hosts joining the cluster: ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://10.0.0.134:8500 --cluster-advertise=ens33:2376
On the Consul server: ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0 -H fd:// --containerd=/run/containerd/containerd.sock
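On distributions where /usr/lib/systemd/system/docker.service is owned by the package, the same change can be made with a systemd drop-in instead of editing the unit file in place, so it survives package upgrades. A sketch (the drop-in filename is arbitrary; the flags match this setup):

```ini
# /etc/systemd/system/docker.service.d/cluster-store.conf
[Service]
# An empty ExecStart= clears the packaged command before redefining it.
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://10.0.0.134:8500 --cluster-advertise=ens33:2376
```

Follow it with systemctl daemon-reload and systemctl restart docker.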
The Consul server also ended up configured as the swarm manager; I'm not sure whether that is strictly required. Details follow:
vim /usr/lib/systemd/system/docker.service
Original:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Changed to:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376 --cluser-store=consul://10.0.0.134:8500 --cluster-advertise=ens33:2376
--cluster-store: the address of the Consul store
--cluster-advertise: the address this host advertises to Consul
[root@host1 ~]$ ip a|grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.0.0.135/24 brd 10.0.0.255 scope global ens33
Restart the Docker daemon. The Consul address had been mistyped as host1's own IP, 10.0.0.135, instead of consul://10.0.0.134:8500, and the daemon would not start.
[root@host1 ~]$ systemctl daemon-reload
[root@host1 ~]$ systemctl start docker.service
[root@host1 ~]$ ps -ef|grep docker
root  54038      1  0 13:12 ?      00:00:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375
root  54229  52492  0 13:13 pts/0  00:00:00 grep --color=auto docker
[root@host1 ~]$ grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376 --cluser-store=consul://10.0.0.134:8500 --cluster-advertise=ens33:2376
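Both failures in this walkthrough come down to a mistyped flag, which systemd only surfaces as a daemon that refuses to start. A small shell helper of the sort that would have caught it early (the function name and checks are my own, not a Docker tool):

```shell
#!/usr/bin/env bash
# Flag the two easy-to-miss mistakes in a dockerd ExecStart line (hypothetical helper).
check_execstart() {
  local line=$1
  case "$line" in
    *cluser-store*) echo "typo: 'cluser-store' should be 'cluster-store'" ;;
  esac
  echo "$line" | grep -q -- '--cluster-store='     || echo "missing --cluster-store"
  echo "$line" | grep -q -- '--cluster-advertise=' || echo "missing --cluster-advertise"
}

# The broken line from the transcript above:
check_execstart 'ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --cluser-store=consul://10.0.0.134:8500 --cluster-advertise=ens33:2376'
```

Run it against the output of `grep ExecStart /usr/lib/systemd/system/docker.service` before restarting the daemon.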
Check the Consul web UI to see whether the two hosts have registered: they have not.
The IP shown above is the consul container's IP.
[root@mcw4 ~]$ docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS                                                                                                NAMES
3fc6e630abc2   progrium/consul   "/bin/start -server …"   34 minutes ago   Up 34 minutes   53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp, :::8500->8500/tcp   consul
[root@mcw4 ~]$ docker exec -it consul hostname -i
172.17.0.2
----
Added -H tcp://0.0.0.0:2376 to the Docker service on the Consul host; after restarting Docker, the consul container no longer came up:
[root@mcw4 ~]$ grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
[root@mcw4 ~]$ vim /usr/lib/systemd/system/docker.service
[root@mcw4 ~]$ grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376
[root@mcw4 ~]$ systemctl daemon-reload
[root@mcw4 ~]$ systemctl restart docker.service
[root@mcw4 ~]$ docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS                      PORTS     NAMES
3fc6e630abc2   progrium/consul   "/bin/start -server …"   40 minutes ago   Exited (1) 47 seconds ago             consul
[root@mcw4 ~]$ docker start 3fc
3fc
It turned out host1 had yet another hand-typed mistake: cluster-store was written as cluser-store.
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376 --cluser-store=consul://10.0.0.134:8500 --cluster-advertise=ens33:2376
should be:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://10.0.0.134:8500 --cluster-advertise=ens33:2376
Fix it and restart the Docker daemon:
[root@host1 ~]$ vim /usr/lib/systemd/system/docker.service
[root@host1 ~]$ grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://10.0.0.134:8500 --cluster-advertise=ens33:2376
[root@host1 ~]$ systemctl daemon-reload
[root@host1 ~]$ systemctl restart docker.service
host1 still did not appear in the Consul UI, so the Consul machine's config was changed again to add -H tcp://0.0.0.0:2376 so other hosts could reach it, followed by another Docker daemon restart.
Added on the Consul host, but still no hosts registered:
[root@mcw4 ~]$ vim /usr/lib/systemd/system/docker.service
[root@mcw4 ~]$ systemctl daemon-reload
[root@mcw4 ~]$ systemctl restart docker.service
[root@mcw4 ~]$ docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS                      PORTS     NAMES
3fc6e630abc2   progrium/consul   "/bin/start -server …"   59 minutes ago   Exited (1) 26 seconds ago             consul
[root@mcw4 ~]$ docker start 3fc
3fc
[root@mcw4 ~]$ grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376
Abandon the container above and use the one below instead:
[root@mcw4 ~]$ systemctl daemon-reload
[root@mcw4 ~]$ systemctl restart docker.service
[root@mcw4 ~]$ docker run -d --network host -h consul --name=consul -p 8500:8500 --restart=always -e CONSUL_BIND_INTERFACE=ens33 consul
Unable to find image 'consul:latest' locally
latest: Pulling from library/consul
5758d4e389a3: Pull complete
57a5fd22f94c: Pull complete
f7e2614f51b4: Pull complete
e98e494e7397: Pull complete
35e8cfc01eae: Pull complete
ea1f421022a9: Pull complete
Digest: sha256:05d70d30639d5e0411f92fb75dd670ec1ef8fa4a918c6e57960db1710fd38125
Status: Downloaded newer image for consul:latest
docker: Error response from daemon: Conflict. The container name "/consul" is already in use by container "3fc6e630abc28f4add16eb8448fa069dfd52eab0c2b1b6a4f0f5cbb6311f4f9f". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
[root@mcw4 ~]$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@mcw4 ~]$ docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED       STATUS                     PORTS     NAMES
3fc6e630abc2   progrium/consul   "/bin/start -server …"   2 hours ago   Exited (1) 4 minutes ago             consul
[root@mcw4 ~]$ docker run -d --network host -h consul --name=consul2 -p 8500:8500 --restart=always -e CONSUL_BIND_INTERFACE=ens33 consul  # use the name consul2 instead
WARNING: Published ports are discarded when using host network mode
f425838bfa074977cbbfe98d5c9cc4267aa9b89baa4deb9b19d3997f33134129
[root@mcw4 ~]$ docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS     NAMES
f425838bfa07   consul    "docker-entrypoint.s…"   18 seconds ago   Up 16 seconds             consul2
Click into the 10.0.0.135 entry
Create a network
[root@mcw4 ~]$ docker network create -d overlay mcw_ov
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
Workaround: run docker swarm init. That errors out too:
[root@mcw4 ~]$ docker swarm init
Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (10.0.0.134 on ens33 and 172.16.1.134 on ens37) - specify one with --advertise-addr
Workaround: specify the interface:
[root@mcw4 ~]$ docker swarm init --advertise-addr ens33
Swarm initialized: current node (shd9o6v628yuodd068hhqtior) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-2x07y8n9ifzgvh2qbxv07xjjoyivowzxn65vhhk00x5a81vcd3-112p82a3lj77jy1s04n2n156f 10.0.0.134:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
There is a further hint, so follow it:
[root@mcw4 ~]$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-2x07y8n9ifzgvh2qbxv07xjjoyivowzxn65vhhk00x5a81vcd3-aios5d88jiiq7ikgl44erbuno 10.0.0.134:2377
[root@mcw4 ~]$
That extra step turns out to be unnecessary:
[root@mcw4 ~]$ docker swarm join --token SWMTKN-1-2x07y8n9ifzgvh2qbxv07xjjoyivowzxn65vhhk00x5a81vcd3-aios5d88jiiq7ikgl44erbuno 10.0.0.134:2377
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
[root@mcw4 ~]$ docker network create -d overlay mcw_ov  # the network is created successfully
rt8w4u0kjsrkmnf6t34ququwg
[root@mcw4 ~]$ docker network ls  # mcw_ov is the one just created; docker_gwbridge appeared at some point alongside it and is related
NETWORK ID     NAME              DRIVER    SCOPE
10494d6bd248   bridge            bridge    local
80955ec61abc   docker_gwbridge   bridge    local
52042c49021b   host              host      local
r9ugyxc34mrm   ingress           overlay   swarm
rt8w4u0kjsrk   mcw_ov            overlay   swarm
fe4771ca21b4   none              null      local
[root@mcw4 ~]$ docker network inspect mcw_ov  # inspect the overlay network mcw_ov
[
    {
        "Name": "mcw_ov",
        "Id": "rt8w4u0kjsrkmnf6t34ququwg",
        "Created": "2022-01-02T09:43:13.815459229Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]
[root@mcw4 ~]$ docker network inspect docker_gwbridge  # inspect the other network
[
    {
        "Name": "docker_gwbridge",
        "Id": "80955ec61abcca7ddd3a99323758c21ce142c050d2c3384e0245d923ed638611",
        "Created": "2022-01-02T17:40:02.262740556+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "gateway_ingress-sbox",
                "EndpointID": "119f4bb7ff7127b6f724fcf185efa8d61ac5d399e26ec170cb2721d925dace1d",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]
[root@mcw4 ~]$
The new network shows up in the Consul key-value store.
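The same check can be scripted against Consul's HTTP KV API instead of the web UI. Libnetwork keeps its state under a docker/network/v1.0/ key prefix (that prefix is an assumption based on libnetwork's default datastore layout, so verify it against your deployment); the block below only builds the curl commands to run against this setup:

```shell
#!/usr/bin/env bash
# Print Consul KV queries for libnetwork's keys (key prefix is an assumption).
CONSUL=10.0.0.134:8500
for scope in network endpoint; do
  echo "curl -s http://$CONSUL/v1/kv/docker/network/v1.0/$scope/?keys"
done
```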
Verification
Create an overlay network on host1; its scope shows as global. -d specifies the driver, here overlay.
Then list networks on host2: the network just created on host1 is visible there. (The network created one step earlier was created on the Consul host.) host2 read the data for host1's new network from Consul, and any subsequent changes are likewise synced to both host1 and host2.
More entries appear under Key/Value in the Consul UI.
Inspect this network on host2.
IPAM means IP address management; Docker automatically allocated the address space 10.0.0.0/24 to this network.
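The subnet is what IPAM draws container addresses from: every address it hands out must fall inside 10.0.0.0/24. A minimal sketch of that membership test in pure bash:

```shell
#!/usr/bin/env bash
# Check whether an IPv4 address falls inside a CIDR block, in pure bash.
# Mirrors the kind of check IPAM performs when allocating from 10.0.0.0/24.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a<<24) | (b<<16) | (c<<8) | d ))
}
in_subnet() {  # usage: in_subnet 10.0.0.7 10.0.0.0/24
  local ip=$1 net=${2%/*} bits=${2#*/}
  local mask=$(( 0xFFFFFFFF << (32-bits) & 0xFFFFFFFF ))
  (( ($(ip_to_int "$ip") & mask) == ($(ip_to_int "$net") & mask) ))
}
in_subnet 10.0.0.7 10.0.0.0/24 && echo "10.0.0.7 is in 10.0.0.0/24"
in_subnet 10.0.1.7 10.0.0.0/24 || echo "10.0.1.7 is not"
```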
Running containers on the overlay network
Run
The overlay network was created on host1 (mcw5) earlier and is visible on host2 (mcw6); its scope shows as global.
Now create and run a container on host2.
[root@mcw6 ~]$ docker network ls  # after the overlay network is created on host1, host2 can see it; note that docker_gwbridge does not exist yet
NETWORK ID     NAME      DRIVER    SCOPE
473bc155c97e   bridge    bridge    local
ae6453e8c347   host      host      local
c06b7147cd9f   mcw_ov2   overlay   global
a786aec3f27c   none      null      local
[root@mcw6 ~]$ docker run -itd --name mcwbbox1 --network mcw_ov2 busybox  # run a container on the overlay network
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
5cc84ad355aa: Pull complete
Digest: sha256:5acba83a746c7608ed544dc1533b87c737a0b0fb730301639a0179f9344b1678
Status: Downloaded newer image for busybox:latest
26e7581618c94fbf5afa207d8bd9f9444b37b852048b2d5d8454489b433463a2
[root@mcw6 ~]$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED              STATUS          PORTS     NAMES
26e7581618c9   busybox   "sh"      About a minute ago   Up 48 seconds             mcwbbox1
[root@mcw6 ~]$ docker network ls  # once a container uses the overlay network on host2, docker_gwbridge exists: Docker creates this bridge network to give every container attached to an overlay network access to the outside world
NETWORK ID     NAME              DRIVER    SCOPE
473bc155c97e   bridge            bridge    local
f3a7e89aa10e   docker_gwbridge   bridge    local
ae6453e8c347   host              host      local
c06b7147cd9f   mcw_ov2           overlay   global
a786aec3f27c   none              null      local
[root@mcw6 ~]$
Inspect both networks.
[root@mcw6 ~]$ docker network inspect mcw_ov2
[
    {
        "Name": "mcw_ov2",
        "Id": "c06b7147cd9f4e1be34d6898b59e0413226b4de20fe7117e0be3ba428e8435ca",
        "Created": "2022-01-02T17:58:47.562758385+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "26e7581618c94fbf5afa207d8bd9f9444b37b852048b2d5d8454489b433463a2": {
                "Name": "mcwbbox1",
                "EndpointID": "cf63df6e9d160d3debdcffd6ae6745ab8d50f651d1ca4998d5b60e6f2b50f9a0",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
[root@mcw6 ~]$ docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "f3a7e89aa10ecf78a6b5d602ec7c79471200abb536ae40ddec1ba834dd03f5c9",
        "Created": "2022-01-02T18:17:38.506568799+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "26e7581618c94fbf5afa207d8bd9f9444b37b852048b2d5d8454489b433463a2": {
                "Name": "gateway_54c5dfdcb4cb",
                "EndpointID": "dd11b6dc9bec0135db362caa3a5a09b5dbc24ce99bc34086ed889ebf3eeebc84",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]
[root@mcw6 ~]$
Check the network interfaces of the new container.
Check the host's docker_gwbridge interface; its IP is the containers' gateway. Verify that the docker_gwbridge bridge reaches the outside network, and that the outside can reach the containers.
Containers can thus reach external networks through this bridge.
For external access to a container, map a host port, for example:
docker run -p 80:80 -d --net mcw_ov --name web1 httpd
Here the network could reach the outside, but the container's DNS server IP conflicted with the host's, which may be why domain names failed to resolve.
Earlier, container traffic could not reach the outside at all because IPv4 forwarding was disabled:
echo "net.ipv4.ip_forward = 1">>/etc/sysctl.conf
sysctl -p
Overlay network connectivity (single host, across hosts, by IP, by DNS)
As shown below, containers on the same overlay network can communicate directly, and Docker also provides DNS-based name resolution.
On a single host
[root@mcw6 ~]$ docker run -itd --name mcwbox61 --network mcw_ov2 busybox  # create container mcwbox61 on host2 on the overlay network
7b2a29d00efd2501de6505eb663bd83f7aa6a38adaca341934ddca3ed4480593
[root@mcw6 ~]$ docker exec mcwbox61 ip r
default via 172.18.0.1 dev eth1
10.0.0.0/24 dev eth0 scope link  src 10.0.0.3
172.18.0.0/16 dev eth1 scope link  src 172.18.0.3
[root@mcw6 ~]$
[root@mcw6 ~]$ docker run -itd --name mcwbox62 --network mcw_ov2 busybox  # create container mcwbox62 on host2 on the overlay network
76fc1a6d0fac9eff697bce3b360fb0c92c84c1149c83d8f89e5a8f9517837369
[root@mcw6 ~]$ docker exec mcwbox62 ip r
default via 172.18.0.1 dev eth1
10.0.0.0/24 dev eth0 scope link  src 10.0.0.4
172.18.0.0/16 dev eth1 scope link  src 172.18.0.4
[root@mcw6 ~]$ docker exec mcwbox62 ping -c 1 mcwbox61  # same host, same overlay network: the containers are directly reachable, and can address each other by name via DNS
PING mcwbox61 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.311 ms
--- mcwbox61 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.311/0.311/0.311 ms
[root@mcw6 ~]$
Containers on different hosts
[root@mcw5 ~]$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@mcw5 ~]$
[root@mcw5 ~]$ docker run -itd --name mcwbox51 --network mcw_ov2 busybox  # create container mcwbox51 on host1 on the overlay network
a4c6674fae1696df44e125108238a1758f4196683cb96e1e889784e59bd39f37
[root@mcw5 ~]$ docker exec a4c ip r  # check the container's addresses
default via 172.18.0.1 dev eth1
10.0.0.0/24 dev eth0 scope link  src 10.0.0.5
172.18.0.0/16 dev eth1 scope link  src 172.18.0.2
[root@mcw5 ~]$ docker exec a4c ping -c 2 10.0.0.4  # verify connectivity from host1's container to host2's mcwbox62, by IP
PING 10.0.0.4 (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=1065.117 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=65.441 ms
--- 10.0.0.4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 65.441/565.279/1065.117 ms
[root@mcw5 ~]$ docker exec a4c ping -c 2 mcwbox61  # verify connectivity from host1's container to host2's mcwbox61, by container name
PING mcwbox61 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=1003.939 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=3.240 ms
--- mcwbox61 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 3.240/503.589/1003.939 ms
Inspecting the network namespaces
[root@mcw6 ~]$ ip netns  # no output, because the symlink below has not been created yet
[root@mcw6 ~]$ ln -s /var/run/docker/netns /var/run/netns  # add a symlink so ip netns can see Docker's namespaces
[root@mcw6 ~]$ ip netns  # now list Docker's network namespaces
2415736b51bf (id: 3)
23437c6848c8 (id: 2)
54c5dfdcb4cb (id: 1)
1-c06b7147cd (id: 0)  # the namespace of mcw_ov2; it is visible on host1 as well as here on host2
[root@mcw6 ~]$ ip netns exec 1-c06b7147cd brctl show  # brctl is missing; install the bridge-utils package
exec of "brctl" failed: No such file or directory
[root@mcw6 ~]$ ip netns exec 1-c06b7147cd brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.1a743dca37a1       no              veth0
                                                        veth1
                                                        veth2
                                                        vxlan0
# Docker creates a separate network namespace for the overlay network. It contains a bridge, br0. Each endpoint is implemented as a veth pair: one end is the container's eth0, the other attaches to br0. Besides all the endpoints, br0 also connects a vxlan device, which builds VXLAN tunnels to the other hosts; cross-host container traffic flows through these tunnels.
[root@mcw6 ~]$ ip netns exec 1-c06b7147cd ip -d l show vxlan0  # inspect the vxlan device on br0 in this namespace; note the vxlan id
6: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT
    link/ether ea:17:50:46:d6:e6 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1
    vxlan id 256 srcport 0 0 dstport 4789 proxy l2miss l3miss ageing 300
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.1a:74:3d:ca:37:a1 designated_root 8000.1a:74:3d:ca:37:a1 hold_timer 0.00 message_age_timer 0.00 forward_delay_timer 0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on addrgenmode eui64
[root@mcw6 ~]$ docker ps -a  # the containers on this host
CONTAINER ID   IMAGE     COMMAND   CREATED          STATUS          PORTS     NAMES
76fc1a6d0fac   busybox   "sh"      52 minutes ago   Up 52 minutes             mcwbox62
7b2a29d00efd   busybox   "sh"      54 minutes ago   Up 54 minutes             mcwbox61
26e7581618c9   busybox   "sh"      2 hours ago      Up 2 hours                mcwbbox1
Overlay network isolation, and bridging two overlay networks
[root@mcw5 ~]$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED       STATUS       PORTS     NAMES
a4c6674fae16   busybox   "sh"      2 hours ago   Up 2 hours             mcwbox51
[root@mcw5 ~]$ docker network create -d overlay mcw_ov3  # create a second overlay network, mcw_ov3, on host1
0ca479e2cef7f21697f2a829bb2ccdcc7098f9a088bf6d5bf72959b0d30848ae
[root@mcw5 ~]$ docker run -itd --name mcwbox52 --network mcw_ov3 busybox  # run a container on the new network
eedf850604b4f7a52ca241fa4e20502d014224a1e2d49f74c3d80a1cfb202201
[root@mcw5 ~]$ docker exec -it mcwbox52 ip r  # mcwbox52 was assigned 10.0.1.2
default via 172.18.0.1 dev eth1
10.0.1.0/24 dev eth0 scope link  src 10.0.1.2
172.18.0.0/16 dev eth1 scope link  src 172.18.0.3
[root@mcw5 ~]$ docker exec -it mcwbox52 ping -c 2 10.0.0.3  # ping from a container on the second overlay to one on the first:
PING 10.0.0.3 (10.0.0.3): 56 data bytes                     # unreachable, confirming that different overlay networks are isolated
--- 10.0.0.3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
[root@mcw5 ~]$
[root@mcw5 ~]$ docker network connect mcw_ov2 mcwbox52  # to let mcwbox52 (on mcw_ov3) reach containers on mcw_ov2,
[root@mcw5 ~]$ docker exec -it mcwbox52 ping -c 2 10.0.0.3  # connect it to mcw_ov2 as well; the ping now succeeds
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=34.884 ms
^C
--- 10.0.0.3 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 34.884/34.884/34.884 ms
[root@mcw5 ~]$ docker exec -it mcwbox52 ping -c 1 mcwbox62  # ping by container name
PING mcwbox62 (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=42.705 ms
--- mcwbox62 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 42.705/42.705/42.705 ms
[root@mcw5 ~]$
overlay IPAM: specifying a subnet
Specify the address space:
docker network create -d overlay --subnet 10.22.1.0/24 ov_net3