The Concept of a Service
A Kubernetes Service defines an abstraction: a logical set of Pods and a policy by which to access them, a pattern commonly called a microservice. The set of Pods a Service routes to is usually determined by a Label Selector.
A Service provides load balancing, but with the following limitation:
It only offers layer-4 load balancing, with no layer-7 features. Sometimes we need richer matching rules to route requests, and layer-4 load balancing cannot support that.
Service Types
There are four Service types in K8s:
- ClusterIP: the default type; automatically assigns a virtual IP that is reachable only from inside the cluster
- NodePort: builds on ClusterIP by binding a port for the Service on every node, so the Service can be reached via <NodeIP>:NodePort
- LoadBalancer: builds on NodePort by using the cloud provider to create an external load balancer that forwards requests to <NodeIP>:NodePort
- ExternalName: brings a service that lives outside the cluster inside, so it can be used directly from within the cluster. No proxy of any kind is created; this is only supported by kube-dns in Kubernetes 1.7 or later (a sketch follows this list)
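As a minimal sketch of the ExternalName type, assuming a hypothetical external host db.example.com (both names below are illustrative, not taken from the cluster used elsewhere in this section):

apiVersion: v1
kind: Service
metadata:
  name: my-external-db      # illustrative name
  namespace: default
spec:
  type: ExternalName
  # Inside the cluster, my-external-db.default.svc.cluster.local
  # resolves as a CNAME to this external hostname; no ClusterIP
  # is allocated and no proxying takes place.
  externalName: db.example.com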
VIPs and Service Proxies
In a Kubernetes cluster, every node runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services, rather than the ExternalName form. In Kubernetes v1.0 the proxy ran entirely in userspace. Kubernetes v1.1 added the iptables proxy, though it was not the default mode. Since Kubernetes v1.2, iptables has been the default. Kubernetes v1.8.0-beta.0 added the ipvs proxy, which graduated to GA in v1.11; ipvs is the mode used by the cluster in the examples below.
In Kubernetes v1.0, a Service was a layer-4 (TCP/UDP over IP) construct. Kubernetes v1.1 added the Ingress API (beta) to represent layer-7 (HTTP) services.
Types of Proxy Mode
I. userspace proxy mode
II. iptables proxy mode
III. ipvs proxy mode
In this mode, kube-proxy watches Kubernetes Service objects and Endpoints objects, calls the netlink interface to create ipvs rules accordingly, and periodically syncs the ipvs rules against the Service and Endpoints objects to keep the ipvs state in line with the desired state. When a Service is accessed, traffic is redirected to one of the backend Pods.
Like iptables, ipvs builds on netfilter's hook mechanism, but it uses a hash table as its underlying data structure and works in kernel space. This means ipvs can redirect traffic faster and has better performance when syncing proxy rules. In addition, ipvs offers more load-balancing algorithm options, for example (a kube-proxy configuration sketch follows this list):
- rr: round robin
- lc: least connections
- dh: destination hashing
- sh: source hashing
- sed: shortest expected delay
- nq: never queue
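As a minimal sketch of how the mode and scheduler are selected, using the kubeproxy.config.k8s.io/v1alpha1 configuration API (the choice of rr below is illustrative):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"         # switch kube-proxy from iptables to ipvs
ipvs:
  scheduler: "rr"    # any algorithm from the list above: rr, lc, dh, sh, sed, nq

In a kubeadm-built cluster like the one in this section, this configuration typically lives in the kube-proxy ConfigMap (kubectl edit configmap kube-proxy -n kube-system), and the kube-proxy Pods usually need to be recreated for a change to take effect.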
ClusterIP
clusterIP mainly uses iptables on each node to forward data sent to the clusterIP's corresponding port on to kube-proxy. kube-proxy then implements load balancing internally: it can look up the addresses and ports of the Pods behind this Service and forward the data to the address and port of the chosen Pod.
To implement this functionality, the following components need to work together:
- apiserver: the user sends a command to create a Service to the apiserver via kubectl; on receiving the request, the apiserver stores the data in etcd
- kube-proxy: every Kubernetes node runs a process called kube-proxy, which watches for changes to Services and Pods and writes the changes into local iptables rules
- iptables: uses NAT and related techniques to forward traffic destined for the virtual IP to an Endpoint (the generated rules can be inspected directly, as shown below)
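Two quick ways to inspect what kube-proxy has programmed, one per proxy mode; the KUBE-SERVICES chain applies to the iptables mode, while the cluster in this session runs ipvs:

# iptables mode: list the NAT rules kube-proxy generated for Services
iptables -t nat -nL KUBE-SERVICES
# ipvs mode (this cluster): list virtual servers and their backends
ipvsadm -Ln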
[root@k8s-master mnt]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d23h
[root@k8s-master mnt]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:https rr
  -> 192.168.180.130:sun-sr-https Masq    1      3          0
TCP  10.96.0.10:domain rr
  -> 10.244.0.6:domain            Masq    1      0          0
  -> 10.244.0.7:domain            Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.6:9153              Masq    1      0          0
  -> 10.244.0.7:9153              Masq    1      0          0
UDP  10.96.0.10:domain rr
  -> 10.244.0.6:domain            Masq    1      0          0
  -> 10.244.0.7:domain            Masq    1      0          0
[root@k8s-master mnt]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.180.130:6443         Masq    1      3          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.6:53                Masq    1      0          0
  -> 10.244.0.7:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.6:9153              Masq    1      0          0
  -> 10.244.0.7:9153              Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.6:53                Masq    1      0          0
  -> 10.244.0.7:53                Masq    1      0          0
You can see that access to the kubernetes Service (10.96.0.1:443) goes to port 6443 on this machine, i.e. the kube-apiserver.
YAML files
[root@k8s-master mnt]# cat svc-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: stabel
  template:
    metadata:
      labels:
        app: myapp
        release: stabel
        env: test
    spec:
      containers:
      - name: myapp
        image: wangyanglinux/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
[root@k8s-master mnt]# cat myapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80
[root@k8s-master mnt]#
Testing
[root@k8s-master mnt]# vim svc-deployment.yaml
[root@k8s-master mnt]# kubectl apply -f svc-deployment.yaml
deployment.apps/myapp-deploy created
[root@k8s-master mnt]# kubectl get pod
NAME                            READY   STATUS              RESTARTS   AGE
myapp-deploy-55c8657767-5jzt4   1/1     Running             0          5s
myapp-deploy-55c8657767-6tkc4   0/1     ContainerCreating   0          5s
myapp-deploy-55c8657767-hw96w   0/1     ContainerCreating   0          5s
[root@k8s-master mnt]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-55c8657767-5jzt4   1/1     Running   0          12s
myapp-deploy-55c8657767-6tkc4   1/1     Running   0          12s
myapp-deploy-55c8657767-hw96w   1/1     Running   0          12s
[root@k8s-master mnt]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-55c8657767-5jzt4   1/1     Running   0          13s
myapp-deploy-55c8657767-6tkc4   1/1     Running   0          13s
myapp-deploy-55c8657767-hw96w   1/1     Running   0          13s
[root@k8s-master mnt]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
myapp-deploy-55c8657767-5jzt4   1/1     Running   0          17s   10.244.1.26   k8s-node02   <none>           <none>
myapp-deploy-55c8657767-6tkc4   1/1     Running   0          17s   10.244.2.29   k8s-node01   <none>           <none>
myapp-deploy-55c8657767-hw96w   1/1     Running   0          17s   10.244.2.30   k8s-node01   <none>           <none>
[root@k8s-master mnt]# curl 10.244.2.30
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master mnt]# vim myapp-service.yaml
[root@k8s-master mnt]# kubectl create -f myapp-service.yaml
service/myapp created
[root@k8s-master mnt]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   3d23h
myapp        ClusterIP   10.111.227.210   <none>        80/TCP    5s
[root@k8s-master mnt]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.180.130:6443         Masq    1      3          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.6:53                Masq    1      0          0
  -> 10.244.0.7:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.6:9153              Masq    1      0          0
  -> 10.244.0.7:9153              Masq    1      0          0
TCP  10.111.227.210:80 rr
  -> 10.244.1.26:80               Masq    1      0          0
  -> 10.244.2.29:80               Masq    1      0          0
  -> 10.244.2.30:80               Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.6:53                Masq    1      0          0
  -> 10.244.0.7:53                Masq    1      0          0
[root@k8s-master mnt]# curl 10.111.227.210
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master mnt]# curl 10.111.227.210
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master mnt]# curl 10.111.227.210
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master mnt]# curl 10.111.227.210/hostname.html
myapp-deploy-55c8657767-hw96w
[root@k8s-master mnt]# curl 10.111.227.210/hostname.html
myapp-deploy-55c8657767-6tkc4
[root@k8s-master mnt]# curl 10.111.227.210/hostname.html
myapp-deploy-55c8657767-5jzt4
[root@k8s-master mnt]# curl 10.111.227.210/hostname.html
myapp-deploy-55c8657767-hw96w
[root@k8s-master mnt]# curl 10.111.227.210/hostname.html
myapp-deploy-55c8657767-6tkc4
[root@k8s-master mnt]# curl 10.111.227.210/hostname.html
myapp-deploy-55c8657767-5jzt4
[root@k8s-master mnt]# curl 10.111.227.210/hostname.html
myapp-deploy-55c8657767-hw96w
Headless Service
Sometimes you neither need nor want load balancing, or a separate Service IP. In that case you can create a headless Service by setting the Cluster IP (spec.clusterIP) to "None". No Cluster IP is allocated for this kind of Service, kube-proxy does not handle it, and the platform does no load balancing or routing for it.
[root@k8s-master mnt]# cat svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
  namespace: default
spec:
  selector:
    app: myapp
  clusterIP: "None"
  ports:
  - port: 80
    targetPort: 80
[root@k8s-master mnt]#
[root@k8s-master mnt]# vim svc-headless.yaml
[root@k8s-master mnt]# kubectl create -f svc-headless.yaml
service/myapp-headless created
[root@k8s-master mnt]# kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP   3d23h
myapp            ClusterIP   10.111.227.210   <none>        80/TCP    8m
myapp-headless   ClusterIP   None             <none>        80/TCP    7s
[root@k8s-master mnt]# kubectl get pod -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
coredns-58cc8c89f4-9gn5g             1/1     Running   2          3d23h   10.244.0.7        k8s-master   <none>           <none>
coredns-58cc8c89f4-xxzx7             1/1     Running   2          3d23h   10.244.0.6        k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   3          3d23h   192.168.180.130   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   3          3d23h   192.168.180.130   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   6          3d23h   192.168.180.130   k8s-master   <none>           <none>
kube-flannel-ds-amd64-4bc88          1/1     Running   3          3d23h   192.168.180.136   k8s-node02   <none>           <none>
kube-flannel-ds-amd64-lzwd6          1/1     Running   4          3d23h   192.168.180.130   k8s-master   <none>           <none>
kube-flannel-ds-amd64-vw4vn          1/1     Running   5          3d23h   192.168.180.135   k8s-node01   <none>           <none>
kube-proxy-bs8sd                     1/1     Running   3          3d23h   192.168.180.135   k8s-node01   <none>           <none>
kube-proxy-nfvtt                     1/1     Running   2          3d23h   192.168.180.136   k8s-node02   <none>           <none>
kube-proxy-rn98b                     1/1     Running   3          3d23h   192.168.180.130   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   5          3d23h   192.168.180.130   k8s-master   <none>           <none>
[root@k8s-master mnt]# dig -t A myapp-headless.default.svc.cluster.local. @10.244.0.7

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>> -t A myapp-headless.default.svc.cluster.local. @10.244.0.7
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44455
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-headless.default.svc.cluster.local. IN A

;; ANSWER SECTION:
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.2.29
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.26
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.2.30

;; Query time: 199 msec
;; SERVER: 10.244.0.7#53(10.244.0.7)
;; WHEN: Mon Dec 23 22:18:21 CST 2019
;; MSG SIZE  rcvd: 237

[root@k8s-master mnt]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
myapp-deploy-55c8657767-5jzt4   1/1     Running   0          16m   10.244.1.26   k8s-node02   <none>           <none>
myapp-deploy-55c8657767-6tkc4   1/1     Running   0          16m   10.244.2.29   k8s-node01   <none>           <none>
myapp-deploy-55c8657767-hw96w   1/1     Running   0          16m   10.244.2.30   k8s-node01   <none>           <none>
Querying CoreDNS (10.244.0.7) directly shows that the headless Service's DNS name resolves to A records for the three backing Pod IPs themselves, not to a single virtual IP.
NodePort
nodePort works by opening a port on the node and directing traffic arriving at that port to kube-proxy, which then forwards it on to the corresponding Pods.
[root@k8s-master mnt]# cat NodePort.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80
[root@k8s-master mnt]#
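No nodePort is specified above, so Kubernetes assigns one at random from the node-port range (31103 in the test below; the default range is 30000-32767). As a sketch, a fixed port can be requested with the nodePort field; the value 30080 is illustrative:

  ports:
  - name: http
    port: 80          # the Service's cluster port
    targetPort: 80    # the container port
    nodePort: 30080   # illustrative; must fall inside the node-port range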
Testing:
[root@k8s-master mnt]# vim NodePort.yaml
[root@k8s-master mnt]# kubectl create -f NodePort.yaml
Error from server (AlreadyExists): error when creating "NodePort.yaml": services "myapp" already exists
[root@k8s-master mnt]# kubectl apply -f NodePort.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/myapp configured
[root@k8s-master mnt]# kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP        4d
myapp            NodePort    10.111.227.210   <none>        80:31103/TCP   14m
myapp-headless   ClusterIP   None             <none>        80/TCP         6m26s
[root@k8s-master mnt]# netstat -antp | grep 31103
tcp6       0      0 :::31103      :::*      LISTEN      3974/kube-proxy
[root@k8s-master mnt]#
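kube-proxy is now listening on port 31103 on every node, so the Service should be reachable from outside the cluster at any node's address, for example (node IPs taken from the earlier kube-system listing):

curl 192.168.180.135:31103/hostname.html   # k8s-node01
curl 192.168.180.136:31103/hostname.html   # k8s-node02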