iptables
The iptables mode is implemented with the NAT forwarding capability of Linux iptables. In this example, a Service named mysql-service is created:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
    role: service
  name: mysql-service
spec:
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30964
  type: NodePort
  selector:
    mysql-service: "true"
mysql-service exposes nodePort 30964, its cluster IP (10.254.162.44) listens on port 3306, and that in turn maps to port 3306 on the backend pods.
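To make these port mappings concrete, they can be checked with kubectl once the manifest is applied (a quick sketch; the file name mysql-service.yaml is an assumption):

kubectl apply -f mysql-service.yaml     # assumed file name for the manifest above
kubectl get svc mysql-service           # the PORT(S) column should read 3306:30964/TCP
kubectl get endpoints mysql-service     # lists the backend pod IP:port pairs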
mysql-service proxies two backend pods, with IPs 192.168.125.129 and 192.168.125.131. Let's first look at the iptables rules:
[root@localhost ~]# iptables -S -t nat
...
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mysql-service:" -m tcp --dport 30964 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mysql-service:" -m tcp --dport 30964 -j KUBE-SVC-67RL4FN6JRUPOJYM
-A KUBE-SEP-ID6YWIT3F6WNZ47P -s 192.168.125.129/32 -m comment --comment "default/mysql-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-ID6YWIT3F6WNZ47P -p tcp -m comment --comment "default/mysql-service:" -m tcp -j DNAT --to-destination 192.168.125.129:3306
-A KUBE-SEP-IN2YML2VIFH5RO2T -s 192.168.125.131/32 -m comment --comment "default/mysql-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-IN2YML2VIFH5RO2T -p tcp -m comment --comment "default/mysql-service:" -m tcp -j DNAT --to-destination 192.168.125.131:3306
-A KUBE-SERVICES -d 10.254.162.44/32 -p tcp -m comment --comment "default/mysql-service: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-67RL4FN6JRUPOJYM
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-67RL4FN6JRUPOJYM -m comment --comment "default/mysql-service:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ID6YWIT3F6WNZ47P
-A KUBE-SVC-67RL4FN6JRUPOJYM -m comment --comment "default/mysql-service:" -j KUBE-SEP-IN2YML2VIFH5RO2T
Let's analyze these rules one by one.
First, if the service is accessed through port 30964 on the node, the packet enters the following chain:
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mysql-service:" -m tcp --dport 30964 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mysql-service:" -m tcp --dport 30964 -j KUBE-SVC-67RL4FN6JRUPOJYM
It then jumps to the KUBE-SVC-67RL4FN6JRUPOJYM chain:
-A KUBE-SVC-67RL4FN6JRUPOJYM -m comment --comment "default/mysql-service:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ID6YWIT3F6WNZ47P
-A KUBE-SVC-67RL4FN6JRUPOJYM -m comment --comment "default/mysql-service:" -j KUBE-SEP-IN2YML2VIFH5RO2T
This uses the --probability option of the iptables statistic module: the first rule matches with probability 0.5, and anything that falls through hits the second, unconditional rule. A connection therefore has a 50% chance of entering the KUBE-SEP-ID6YWIT3F6WNZ47P chain and a 50% chance of entering the KUBE-SEP-IN2YML2VIFH5RO2T chain.
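For reference, with more than two endpoints kube-proxy scales the probabilities so that each endpoint is picked with roughly equal likelihood: for n endpoints, the i-th rule is tried with probability 1/(n-i+1) and the last rule is unconditional. A hypothetical chain for three endpoints (chain names and values invented purely for illustration) would look roughly like this:

-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-AAA   # ~1/3 of new connections
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-BBB   # 1/2 of the remaining 2/3
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-CCC                                                          # whatever falls through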
The KUBE-SEP-ID6YWIT3F6WNZ47P chain DNATs the request to 192.168.125.129 on port 3306:
-A KUBE-SEP-ID6YWIT3F6WNZ47P -s 192.168.125.129/32 -m comment --comment "default/mysql-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-ID6YWIT3F6WNZ47P -p tcp -m comment --comment "default/mysql-service:" -m tcp -j DNAT --to-destination 192.168.125.129:3306
Likewise, KUBE-SEP-IN2YML2VIFH5RO2T DNATs the request to 192.168.125.131 on port 3306:
-A KUBE-SEP-IN2YML2VIFH5RO2T -s 192.168.125.131/32 -m comment --comment "default/mysql-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-IN2YML2VIFH5RO2T -p tcp -m comment --comment "default/mysql-service:" -m tcp -j DNAT --to-destination 192.168.125.131:3306
Having covered how the nodePort path works, let's now look at access via the cluster IP.
Traffic sent directly to port 3306 of the cluster IP (10.254.162.44) jumps straight to KUBE-SVC-67RL4FN6JRUPOJYM:
-A KUBE-SERVICES -d 10.254.162.44/32 -p tcp -m comment --comment "default/mysql-service: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-67RL4FN6JRUPOJYM
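From there the same probability-based endpoint selection and DNAT described above apply. One hypothetical way to observe this path on a node is to watch the rule counters and the connection tracking table (shown only as a sketch; the conntrack tool may need to be installed separately):

iptables -t nat -nvL KUBE-SERVICES | grep 10.254.162.44   # packet/byte counters grow as the cluster IP is hit
conntrack -L -d 10.254.162.44                             # tracked connections keep the original dst (cluster IP) and the DNATed pod address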
Here is a second example, this time a NodePort Service named nodeport-svc on another cluster:

root@ubuntu:~# cat web-ngx-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc
spec:
  type: NodePort
  selector:
    app: web-nginx
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 8087
      nodePort: 30090
root@ubuntu:~# kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP          244d
my-nginx       ClusterIP   10.110.79.116    <none>        8280/TCP         37d
my-nginx-np    NodePort    10.99.1.231      <none>        8081:31199/TCP   36d
nodeport-svc   NodePort    10.97.11.232     <none>        3000:30090/TCP   60m
web2           NodePort    10.110.171.213   <none>        8097:31866/TCP   20d
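The backend pods behind nodeport-svc can be listed with kubectl as well (a hypothetical check; based on the SEP chains below and the targetPort in the manifest, it should show 10.244.0.22:8087 and 10.244.2.6:8087):

kubectl get endpoints nodeport-svc     # expected: 10.244.0.22:8087,10.244.2.6:8087
kubectl describe svc nodeport-svc      # shows the cluster IP, NodePort and Endpoints in one view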
Cluster IP access: the destination is the service's cluster IP 10.97.11.232 on port 3000. All NAT rules that reference the service chain KUBE-SVC-GFPAJ7EGCNM4QF4H are:
root@ubuntu:~# iptables -S -t nat | grep KUBE-SVC-GFPAJ7EGCNM4QF4H
-N KUBE-SVC-GFPAJ7EGCNM4QF4H
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nodeport-svc:" -m tcp --dport 30090 -j KUBE-SVC-GFPAJ7EGCNM4QF4H
-A KUBE-SERVICES -d 10.97.11.232/32 -p tcp -m comment --comment "default/nodeport-svc: cluster IP" -m tcp --dport 3000 -j KUBE-SVC-GFPAJ7EGCNM4QF4H
-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OACTDSBVK7HADL5Z
-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -j KUBE-SEP-4WFPOQNWZU6CJPY5
A packet whose destination is 10.97.11.232 port 3000 matches the KUBE-SERVICES rule above and executes -j KUBE-SVC-GFPAJ7EGCNM4QF4H, where the load is split across two endpoint chains:

-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OACTDSBVK7HADL5Z
-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -j KUBE-SEP-4WFPOQNWZU6CJPY5
The DNAT is then performed in the endpoint chains KUBE-SEP-OACTDSBVK7HADL5Z and KUBE-SEP-4WFPOQNWZU6CJPY5:
root@ubuntu:~# iptables -S -t nat | grep KUBE-SEP-OACTDSBVK7HADL5Z
-N KUBE-SEP-OACTDSBVK7HADL5Z
-A KUBE-SEP-OACTDSBVK7HADL5Z -s 10.244.0.22/32 -m comment --comment "default/nodeport-svc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-OACTDSBVK7HADL5Z -p tcp -m comment --comment "default/nodeport-svc:" -m tcp -j DNAT [unsupported revision]
-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OACTDSBVK7HADL5Z
root@ubuntu:~# iptables -S -t nat | grep KUBE-SEP-4WFPOQNWZU6CJPY5
-N KUBE-SEP-4WFPOQNWZU6CJPY5
-A KUBE-SEP-4WFPOQNWZU6CJPY5 -s 10.244.2.6/32 -m comment --comment "default/nodeport-svc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-4WFPOQNWZU6CJPY5 -p tcp -m comment --comment "default/nodeport-svc:" -m tcp -j DNAT [unsupported revision]
-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -j KUBE-SEP-4WFPOQNWZU6CJPY5
root@ubuntu:~#
Note that the DNAT targets are printed as "DNAT [unsupported revision]", most likely because the iptables userspace binary on this node is older than the DNAT target revision used by the kernel and cannot render the --to-destination parameters. The rules still DNAT to the pod IPs shown in the chains (10.244.0.22 and 10.244.2.6), on the Service's targetPort 8087.
Next, the NodePort path. Traffic arriving on node port 30090 is handled by the KUBE-NODEPORTS chain:
root@ubuntu:~# iptables -S -t nat | grep 30090
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nodeport-svc:" -m tcp --dport 30090 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nodeport-svc:" -m tcp --dport 30090 -j KUBE-SVC-GFPAJ7EGCNM4QF4H
The packet is first marked by KUBE-MARK-MASQ (SNAT will be applied to it later in POSTROUTING), and then jumps to KUBE-SVC-GFPAJ7EGCNM4QF4H, which selects an endpoint chain and DNATs exactly as in the cluster IP case:
root@ubuntu:~# iptables -S -t nat | grep KUBE-SVC-GFPAJ7EGCNM4QF4H
-N KUBE-SVC-GFPAJ7EGCNM4QF4H
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nodeport-svc:" -m tcp --dport 30090 -j KUBE-SVC-GFPAJ7EGCNM4QF4H
-A KUBE-SERVICES -d 10.97.11.232/32 -p tcp -m comment --comment "default/nodeport-svc: cluster IP" -m tcp --dport 3000 -j KUBE-SVC-GFPAJ7EGCNM4QF4H
-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OACTDSBVK7HADL5Z
-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -j KUBE-SEP-4WFPOQNWZU6CJPY5
root@ubuntu:~# iptables -S -t nat | grep KUBE-SEP-OACTDSBVK7HADL5Z
-N KUBE-SEP-OACTDSBVK7HADL5Z
-A KUBE-SEP-OACTDSBVK7HADL5Z -s 10.244.0.22/32 -m comment --comment "default/nodeport-svc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-OACTDSBVK7HADL5Z -p tcp -m comment --comment "default/nodeport-svc:" -m tcp -j DNAT [unsupported revision]
-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OACTDSBVK7HADL5Z
root@ubuntu:~# iptables -S -t nat | grep KUBE-SEP-4WFPOQNWZU6CJPY5
-N KUBE-SEP-4WFPOQNWZU6CJPY5
-A KUBE-SEP-4WFPOQNWZU6CJPY5 -s 10.244.2.6/32 -m comment --comment "default/nodeport-svc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-4WFPOQNWZU6CJPY5 -p tcp -m comment --comment "default/nodeport-svc:" -m tcp -j DNAT [unsupported revision]
-A KUBE-SVC-GFPAJ7EGCNM4QF4H -m comment --comment "default/nodeport-svc:" -j KUBE-SEP-4WFPOQNWZU6CJPY5
root@ubuntu:~#
Finally, KUBE-MARK-MASQ and SNAT. The counters on the cluster IP rules show that traffic whose source is not in the pod network (10.244.0.0/16) is first sent to KUBE-MARK-MASQ:
root@ubuntu:~# iptables -nvL -t nat | grep 10.97.11.232
    3   180 KUBE-MARK-MASQ  tcp  --  *      *       !10.244.0.0/16       10.97.11.232         /* default/nodeport-svc: cluster IP */ tcp dpt:3000
    3   180 KUBE-SVC-GFPAJ7EGCNM4QF4H  tcp  --  *      *       0.0.0.0/0            10.97.11.232         /* default/nodeport-svc: cluster IP */ tcp dpt:3000
root@ubuntu:~# iptables -S -t nat | grep KUBE-MARK-MASQ
-N KUBE-MARK-MASQ
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
root@ubuntu:~# iptables -S -t nat | grep KUBE-POSTROUTING
-N KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
root@ubuntu:~# iptables -S -t nat | grep KUBE-PREROUTING
root@ubuntu:~#
kube-proxy writes the following rule to the custom chain "KUBE-POSTROUTING" in the nat table:

-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE

and the following rule to the custom chain "KUBE-MARK-MASQ" in the nat table:

-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
Taken together, these two rules mean that Kubernetes routes service traffic that requires SNAT through the custom KUBE-MARK-MASQ chain in the nat table, where the packets are marked with 0x4000/0x4000; the custom KUBE-POSTROUTING chain then matches packets carrying that mark and applies SNAT (MASQUERADE) to them.
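To see the mark-then-masquerade pattern in isolation, here is a minimal standalone sketch using hypothetical chain names (for illustration only; do not run it on a real cluster node, where kube-proxy already manages the equivalent rules):

# a chain that only sets the mark, mirroring KUBE-MARK-MASQ
iptables -t nat -N DEMO-MARK-MASQ
iptables -t nat -A DEMO-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
# a postrouting hook that masquerades anything carrying that mark, mirroring KUBE-POSTROUTING
iptables -t nat -N DEMO-POSTROUTING
iptables -t nat -A POSTROUTING -j DEMO-POSTROUTING
iptables -t nat -A DEMO-POSTROUTING -m mark --mark 0x4000/0x4000 -j MASQUERADE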