1. Deploying a Swarm Cluster
# Docker Swarm overview
Docker Swarm, like Docker Compose, is an official Docker container orchestration project. The difference is that Docker Compose is a tool for creating multiple containers on a single server or host, whereas Docker Swarm builds container cluster services across multiple servers or hosts, which makes it the better fit for deploying microservices.
Since Docker 1.12.0, Swarm has been built into the Docker Engine (docker swarm), with a service discovery mechanism included, so there is no longer any need to set up Etcd or Consul for service discovery as before.
# Swarm concepts
Swarm is Docker's official container cluster platform, written in Go and open-sourced at https://github.com/docker/swarm . The project started in 2014 and development was discontinued in 2018; in February 2016 the architecture was redesigned as v2, which supports more than 1,000 nodes. As a container cluster manager, one of Swarm's biggest strengths is 100% support for the standard Docker API and tooling (such as Compose and docker-py), so Docker itself integrates with Swarm smoothly.
So what does Swarm actually do for us? Consider how a Docker cluster is operated without it: the user has to run commands against every container individually, as shown in the figure below:
With Swarm, working with multiple Docker hosts instead looks like this:
# Swarm cluster
As the figure shows, Swarm has two roles, Manager and agent (also called worker). Briefly:
- Manager: accepts service definitions from clients, dispatches tasks to agent nodes, maintains the cluster's desired state, handles cluster management, and runs leader election. By default a manager node also runs tasks, though it can be configured to do management work only (see the sketch after this list).
- agent: receives and executes the tasks assigned by the manager and reports their current state, so the manager can keep each service at its desired state.
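A minimal sketch of "management only": draining a manager keeps it managing the cluster while no new tasks are scheduled onto it. The node name node1 matches the cluster built later in this article; the commands are standard swarm mode.
$ docker node update --availability drain node1   # stop scheduling tasks onto this manager
$ docker node update --availability active node1  # allow tasks again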
The figure also shows that the requests a Manager receives fall into four categories:
- First, operations on containers that already exist: Swarm simply forwards the request to the right host.
- Second, operations on Docker images.
- Third, creating new containers (the docker create command); the cluster scheduling this involves is covered below.
- Fourth, operations that read cluster-wide information, such as listing all containers or checking the Docker version.
2.2 Swarm cluster scheduling strategies
Swarm manages multiple Docker hosts. When a user creates a container, which of those hosts actually ends up running it?
2.2.1 Filter
Swarm's scheduler assigns containers to nodes according to the configured strategy, but sometimes you want to influence that placement: put I/O-heavy containers on nodes with SSDs, compute-heavy containers on machines with more CPU cores, network-heavy containers in a high-bandwidth data center, and so on.
Filters make this possible by helping users narrow the candidate hosts down to those that meet their conditions. Five filters are currently supported:
- Constraint
- Affinity
- Port
- Dependency
- Health
This article covers the first two.
1. The Constraint filter
Constraints are key-value pairs bound to a node, effectively node labels. They can be specified when a container is started: classic Swarm used the form -e constraint:key==value, while swarm mode's docker service create takes --constraint node.labels.key==value to filter down to the matching nodes.
As an example, let's start two busybox containers, using the filter to pin one to "red" nodes and the other to "green" nodes:
# docker service create --replicas 1 --constraint node.labels.color==red busybox ping 127.0.0.1
# docker service create --replicas 1 --constraint node.labels.color==green busybox ping 127.0.0.1
If these commands don't make much sense yet, come back to them after finishing this article and they will.
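For the swarm-mode commands above to match anything, the nodes need those labels first. A minimal sketch, assuming node2 and node3 are the hosts being tagged (node names as in the cluster built below):
$ docker node update --label-add color=red node2
$ docker node update --label-add color=green node3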
2. The Affinity filter
The Affinity filter lets a user start a container and have it scheduled onto the node where some existing container is already running.
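A sketch of the classic-Swarm syntax, assuming a container named db is already running somewhere in the cluster and the client is pointed at the Swarm manager; the container names here are illustrative:
$ docker run -d --name web -e affinity:container==db nginx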
2.2.2 Strategy
Once the filters have done their work, Swarm applies a strategy to pick the host that will finally run the container. The following strategies are currently available:
- random: pick one of the candidate hosts (agents) at random;
- binpack: weigh each host's CPU and memory usage and pack containers onto the fullest candidate host that still fits them, keeping larger blocks of resources free elsewhere;
- spread: try to spread containers evenly across all nodes.
The scheduler also supports a notion of node trust: all else being equal, a node that has stayed healthy, with no failed connections, is chosen first. Resource-based scheduling is still being developed, but the filter-plus-strategy combination, which keeps container placement at per-host granularity, already covers most needs.
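For reference, the strategy was chosen when starting the classic (standalone) Swarm manager; a sketch, where the Consul address is a placeholder for whatever discovery backend is in use:
$ swarm manage --strategy binpack -H tcp://0.0.0.0:4000 consul://192.168.0.10:8500/swarm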
# The Docker Swarm architecture is shown in the figure:
One Manager has several Workers under it (in actual operation, each of these runs as a container).
The figure below shows a Service-and-Replicas model: the service is nginx, and below it 3 nginx replicas form a cluster.
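A minimal sketch of that model in swarm-mode commands; the service name web is illustrative:
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service ps web   # one task row per replica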
Swarm cluster deployment
# Test environment:
https://labs.play-with-docker.com
Log in with a Docker account.
# Network plan
192.168.0.61 node1 # manager node
192.168.0.62 node2 # worker node
192.168.0.63 node3 # worker node
# Initialize the swarm cluster (note: run on node1)
sudo docker swarm init --advertise-addr 192.168.0.38
# Join the worker nodes to the swarm (note: run on node2 and node3)
sudo docker swarm join --token SWMTKN-1-3flgg8jgq9tmo3l3kazvit8fec0e91hea8dedvc691liswsqv8-3ipdbwzle03ogd5l5jxue0eo2 192.168.0.61:2377
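If the token is ever lost, the manager can reprint the full join command at any time (run on node1):
$ docker swarm join-token worker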
# Check node status (note: run on node1)
$ sudo docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
hhkyr9oj2mkv1yplzd229zdwf node1 Ready Active 19.03.0-beta2
ndng4snylrdwhgr6dicww3fho node2 Ready Active 19.03.0-beta2
8vb0iln353y4am6sjul459tep * node3 Ready Active Leader 19.03.0-beta2
# View the help
$ sudo docker --help (note: run on node1)
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Options:
--config string Location of client config files (default "/root/.docker")
-c, --context string Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
-D, --debug Enable debug mode
-H, --host list Daemon socket(s) to connect to
-l, --log-level string Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default "/root/.docker/ca.pem")
(output truncated)
# Check the Docker version
$ docker version (note: run on node1)
Client: Docker Engine - Community
Version: 19.03.0-beta2
API version: 1.40
Go version: go1.12.4
Git commit: c601560
Built: Fri Apr 19 00:57:20 2019
OS/Arch: linux/amd64
Experimental: false
(output truncated)
# Deploy a service (note: run on node1)
docker service create --name demo busybox sh -c "while true;do sleep 3600;done;"
# List the services (note: run on node1)
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
nfuka28cblpf demo replicated 1/1 busybox:latest
# Inspect the service's tasks (note: run on node1)
$ docker service ps demo
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
v2rjd6gru6ns demo.1 busybox:latest node2 Running Running 38 seconds ago
# Scale the service, i.e. increase the replica count (note: run on node1)
$ docker service scale demo=5
demo scaled to 5
# Check the replica count (note: run on node1)
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
nfuka28cblpf demo replicated 5/5 busybox:latest
# Check where the demo replicas run (note: run on node1)
$ docker service ps demo
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
v2rjd6gru6ns demo.1 busybox:latest node2 Running Running 8 minutes ago
nulr3x9qjt1m demo.2 busybox:latest node1 Running Running 4 minutes ago
yuxfv0drzmzw demo.3 busybox:latest node1 Running Running 4 minutes ago
8n2v2x3gpw8f demo.4 busybox:latest node2 Running Running 4 minutes ago
x997dqaiykj2 demo.5 busybox:latest node3 Running Running 4 minutes ago
# Create an overlay network so containers can reach each other
$ docker network create -d overlay demo (note: run on node1)
lfhkp40mqlcy1r1fm164oonzo
$ docker network ls (note: run on node1)
NETWORK ID NAME DRIVER SCOPE
cd64b2256d12 bridge bridge local
lfhkp40mqlcy demo overlay swarm
183d22dd1659 docker_gwbridge bridge local
75f2650e955b host host local
2s10tftmg4f4 ingress overlay swarm
be44ba126b60 none null local
# Create the mysql service (note: run on node1)
docker service create --name mysql --env MYSQL_ROOT_PASSWORD=root \
--env MYSQL_DATABASE=wordpress --network demo --mount type=volume,source=mysql-data,destination=/var/lib/mysql mysql:5.7
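The --mount flag creates a named volume, mysql-data, on whichever node ends up running the task; a quick way to confirm it (run on the node hosting mysql.1):
$ docker volume inspect mysql-data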
$ docker service ls (note: run on node1)
ID NAME MODE REPLICAS IMAGE PORTS
nfuka28cblpf demo replicated 5/5 busybox:latest
md67phemw4up mysql replicated 1/1 mysql:5.7
$ docker service ps mysql (note: run on node1)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
a6zrrg4ij15j mysql.1 mysql:5.7 node3 Running Running about a minute ago
# Create the wordpress service (note: run on node1)
docker service create --name wordpress -p 80:80 --env WORDPRESS_DB_PASSWORD=root \
--env WORDPRESS_DB_HOST=mysql --network demo wordpress
$ docker service ls (note: run on node1)
ID NAME MODE REPLICAS IMAGE PORTS
ye3iojuxvdwf mysql replicated 1/1 mysql:5.7
cku16jr5nfcg wordpress replicated 1/1 wordpress:latest *:80->80/tcp
$ docker service ps wordpress (note: run on node1)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
n16an94tcb0p wordpress.1 wordpress:latest node2 Running Running about a minute ago
$ docker ps (note: run on node1)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f394dee1e89 mysql:5.7 "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 3306/tcp, 33060/tcp mysql.1.volm1hp3n2dkljqtb8h59e8sf
# Visit the site (note: from outside the cluster)
http://ip
2. Inter-Service Communication: Routing Mesh
The routing mesh is implemented with LVS: each service gets a virtual IP (VIP) address.
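A quick way to see those VIPs for a service that already exists, such as the mysql service above; the --format path matches the docker service inspect JSON:
$ docker service inspect --format '{{json .Endpoint.VirtualIPs}}' mysql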
2.1. Check on node2: the overlay network is only instantiated on a node once a container has been scheduled there.
$ docker network ls (note: run on node2)
NETWORK ID NAME DRIVER SCOPE
1d3a7390993e bridge bridge local
oiy95kyiwybc demo overlay swarm
400751c7cbb6 docker_gwbridge bridge local
ed8a7fe3333e host host local
eftpkhpfwomq ingress overlay swarm
245e8b6ebb9f none null local
2.2. Create the whoami service (note: run on node1)
docker service create --name whoami -p 8000:8000 --network demo -d jwilder/whoami
# List the services (note: run on node1)
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ye3iojuxvdwf mysql replicated 1/1 mysql:5.7
ulsy2lxm6cng whoami replicated 1/1 jwilder/whoami:latest *:8000->8000/tcp
cku16jr5nfcg wordpress replicated 1/1 wordpress:latest *:80->80/tcp
2.3. Check which node it runs on
$ docker service ps whoami (note: run on node1)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uva1e307dwag whoami.1 jwilder/whoami:latest node3 Running Running 59 seconds ago
2.4. docker ps (note: run on node3)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
184989e17f76 jwilder/whoami:latest "/app/http" About a minute ago Up About a minute 8000/tcp whoami.1.uva1e307dwagnanuzodbqzfcf
2.5. Access the whoami service on its published port (note: run on node1)
$ curl 127.0.0.1:8000
I'm 184989e17f76
2.6. Create the client service (busybox)
$ docker service create --name client -d --network demo busybox sh -c "while true; do sleep 3600; done"
we6jp0m72ss5m0vw9g36t2mmy
2.7. List the services (note: run on node1)
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
we6jp0m72ss5 client replicated 1/1 busybox:latest
ye3iojuxvdwf mysql replicated 1/1 mysql:5.7
ulsy2lxm6cng whoami replicated 1/1 jwilder/whoami:latest *:8000->8000/tcp
cku16jr5nfcg wordpress replicated 1/1 wordpress:latest *:80->80/tcp
2.8. Check which node the service runs on (note: run on node1)
$ docker service ps client
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
y002hyyst1r9 client.1 busybox:latest node3 Running Running 38 seconds ago
2.9. List the running containers (note: run on node3)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe5034a2d77 busybox:latest "sh -c 'while true; …" About a minute ago Up About a minute client.1.y002hyyst1r9pqyf6ad14oruo
184989e17f76 jwilder/whoami:latest "/app/http" 23 minutes ago Up 23 minutes 8000/tcp whoami.1.uva1e307dwagnanuzodbqzfcf
2.10. Enter the container (note: run on node3)
$ docker exec -it dbe5 sh
/ # ping whoami
PING whoami (10.0.0.8): 56 data bytes
64 bytes from 10.0.0.8: seq=0 ttl=64 time=0.136 ms
64 bytes from 10.0.0.8: seq=1 ttl=64 time=0.111 ms
64 bytes from 10.0.0.8: seq=2 ttl=64 time=0.082 ms
64 bytes from 10.0.0.8: seq=3 ttl=64 time=0.065 ms
64 bytes from 10.0.0.8: seq=4 ttl=64 time=0.065 ms
64 bytes from 10.0.0.8: seq=5 ttl=64 time=0.163 ms
2.11. Scale out the replicas (note: run on node1)
$ docker service scale whoami=2
whoami scaled to 2
# Check which nodes the tasks run on (note: run on node1)
$ docker service ps whoami
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uva1e307dwag whoami.1 jwilder/whoami:latest node3 Running Running 28 minutes ago
n4t7brq388um whoami.2 jwilder/whoami:latest node1 Running Running 16 seconds ago
$ docker ps (note: run on node3)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe5034a2d77 busybox:latest "sh -c 'while true; …" 8 minutes ago Up 8 minutes client.1.y002hyyst1r9pqyf6ad14oruo
184989e17f76 jwilder/whoami:latest "/app/http" 30 minutes ago Up 30 minutes 8000/tcp whoami.1.uva1e307dwagnanuzodbqzfcf
# Enter the container (note: run on node3)
$ docker exec -it 1849 sh
/app # ping whoami
PING whoami (10.0.0.8): 56 data bytes
64 bytes from 10.0.0.8: seq=0 ttl=64 time=0.208 ms
64 bytes from 10.0.0.8: seq=1 ttl=64 time=0.118 ms
64 bytes from 10.0.0.8: seq=2 ttl=64 time=0.074 ms
64 bytes from 10.0.0.8: seq=3 ttl=64 time=0.079 ms
/app # exit
# List the running containers (note: run on node1, where whoami.2 runs)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4fa28760a510 jwilder/whoami:latest "/app/http" 9 minutes ago Up 9 minutes 8000/tcp whoami.2.n4t7brq388umug2lagrz4uore
9f394dee1e89 mysql:5.7 "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp mysql.1.volm1hp3n2dkljqtb8h59e8sf
# Enter the container: tasks.whoami now resolves to the IP addresses of both whoami containers
$ docker exec -it 4fa2 sh
/app # nslookup tasks.whoami
nslookup: can't resolve '(null)': Name does not resolve
Name: tasks.whoami
Address 1: 10.0.0.15 4fa28760a510
Address 2: 10.0.0.9 whoami.1.uva1e307dwagnanuzodbqzfcf.demo
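By contrast, resolving the bare service name should return only the service VIP (10.0.0.8 in this session, the address the earlier ping answered from), not the individual task IPs; a sketch of the expected output:
/app # nslookup whoami
Name:      whoami
Address 1: 10.0.0.8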
# Scale out the replicas (note: run on node1)
$ docker service scale whoami=3
whoami scaled to 3
# Check which nodes the tasks run on (note: run on node1)
$ docker service ps whoami
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uva1e307dwag whoami.1 jwilder/whoami:latest node3 Running Running 40 minutes ago
n4t7brq388um whoami.2 jwilder/whoami:latest node1 Running Running 12 minutes ago
rl9b1uji65cg whoami.3 jwilder/whoami:latest node2 Running Running 8 seconds ago
# The output above shows the new task on node2; list the running containers there (note: run on node2)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
27106f86d1e6 jwilder/whoami:latest "/app/http" About a minute ago Up About a minute 8000/tcp whoami.3.rl9b1uji65cg765ruk0t2tzee
0aad26cdc500 wordpress:latest "docker-entrypoint.s…" About an hour ago Up About an hour 80/tcp wordpress.1.n16an94tcb0pz4xqoyf33asxw
# Enter the container: tasks.whoami now resolves to three IP addresses (note: run on node2)
$ docker exec -it 2710 sh
/app # nslookup tasks.whoami
nslookup: can't resolve '(null)': Name does not resolve
Name: tasks.whoami
Address 1: 10.0.0.16 27106f86d1e6
Address 2: 10.0.0.15 whoami.2.n4t7brq388umug2lagrz4uore.demo
Address 3: 10.0.0.9 whoami.1.uva1e307dwagnanuzodbqzfcf.demo
# Fetch the page and check its contents (note: run on node2)
/app # wget whoami:8000
Connecting to whoami:8000 (10.0.0.8:8000)
index.html 100% |*****************************************************| 17 0:00:00 ETA
/app # ls
http index.html
/app # more index.html
I'm 27106f86d1e6
3. Two Manifestations of Routing Mesh
Internal: container-to-container traffic goes over the overlay network (via a virtual IP, VIP).
Ingress: if a service publishes a port, that service is reachable on the corresponding port of any swarm node.
# Ingress Network
1. Load-balances external traffic
2. Exposes the service port on every swarm node
3. Balances load internally through IPVS (see the sketch after this list)
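A minimal sketch of point 2, using the node addresses from the network plan above: whoami publishes port 8000, so every node answers, whether or not it runs a replica.
$ curl http://192.168.0.61:8000   # node1
$ curl http://192.168.0.62:8000   # node2, answers even with no local replica
$ curl http://192.168.0.63:8000   # node3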
# Scale to 2 replicas
$ docker service scale whoami=2
whoami scaled to 2
overall progress: 2 out of 2 tasks
1/2: running
2/2: running
verify: Service converged
$ docker service ps whoami
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uva1e307dwag whoami.1 jwilder/whoami:latest node3 Running Running about an hour ago
n4t7brq388um whoami.2 jwilder/whoami:latest node1 Running Running 45 minutes ago
# Hit the published port repeatedly: responses alternate between the two replicas, showing the load balancing
$ curl 127.0.0.1:8000
I'm 4fa28760a510
$ curl 127.0.0.1:8000
I'm 184989e17f76
$ curl 127.0.0.1:8000
I'm 4fa28760a510
$ curl 127.0.0.1:8000
I'm 184989e17f76
$ curl 127.0.0.1:8000
I'm 4fa28760a510
# The forwarding turns out to be done through iptables
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-INGRESS all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DROP all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
Chain DOCKER-INGRESS (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:8000
ACCEPT tcp -- anywhere anywhere state RELATED,ESTABLISHED tcp spt:8000
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere state RELATED,ESTABLISHED tcp spt:http
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
# Inspect the Docker bridges
$ brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242df4ead86 no veth4b1f330
docker_gwbridge 8000.02426a3e3176 no veth45d7a41
# Check the created networks; note docker_gwbridge
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
9cab66600884 bridge bridge local
2wec717ukiqj demo overlay swarm
4a441f5afc2a docker_gwbridge bridge local
9c195868e91f host host local
xso8vdd1rk6s ingress overlay swarm
79cd1d0fe915 none null local
# Inspect its details
$ docker network inspect docker_gwbridge
[
{
"Name": "docker_gwbridge",
"Id": "4a441f5afc2a952577c52160e3ed863c78bdd30347dd25baa39c96d69f3dad96",
"Created": "2019-05-30T07:16:08.785013094Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"efa29bcf1389080372c7722aafe081526f6185b98d29dd7e4e5f0a16ab6e5b6e": {
"Name": "gateway_1cbaf00e6a8e",
"EndpointID": "b074b8b6c8193b2e95a79818fda30a1abfc91683dd08ceb6dbb11a60d496a07e",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "gateway_ingress-sbox",
"EndpointID": "21a7b9864747a7a1269f1648763852b6aa717ab3179c8264813406962c606cec",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.enable_icc": "false",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.name": "docker_gwbridge"
},
"Labels": {}
}
]
# List the files in the netns directory
$ ls /var/run/docker/netns
1-2wec717uki 1-xso8vdd1rk 1cbaf00e6a8e ingress_sbox lb_2wec717uk
# Enter the ingress namespace
$ nsenter --net=/var/run/docker/netns/ingress_sbox
# Check the IP addresses
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff
inet 10.255.0.2/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.5/32 brd 10.255.0.5 scope global eth0
valid_lft forever preferred_lft forever
8: eth1@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.2/16 brd 172.19.255.255 scope global eth1
valid_lft forever preferred_lft forever
$ iptables -nL -t mangle (note: this is where the load-balancing marks are set; run on node1)
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
MARK tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8000 MARK set 0x101
Chain INPUT (policy ACCEPT)
target prot opt source destination
MARK all -- 0.0.0.0/0 10.255.0.5 MARK set 0x101
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
$ exit
# Install the LVS management tool: yum install ipvsadm
# Enter the ingress namespace again and run ipvsadm -l
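A sketch of those two steps combined; the 0x101 mark set in the mangle table above should appear as firewall mark (FWM) 257 in the IPVS virtual server table:
$ yum install -y ipvsadm
$ nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -ln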
# For the path a packet takes through the Ingress network, see:
References:
https://juejin.im/post/5b80363e5188254307741bf1#heading-4
https://www.jianshu.com/p/18ad7b838b0d
https://blog.csdn.net/weixin_33672400/article/details/86917813