I. Basic use of the Service resource
1. Service resource manifests
--- myapp-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: http
--- myapp-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
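Since the Deployment names its container port http, the Service's targetPort could equally reference that name instead of the number; a minimal sketch of the alternative ports stanza:

ports:
- protocol: TCP
  port: 80
  targetPort: http    # resolves to the containerPort named "http" in the Pod template

Referencing the port by name lets the container port number change later without touching the Service.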
2. Creating the resources
[root@master chapter5]# kubectl apply -f myapp-deploy.yaml
deployment.apps/myapp-deploy created
[root@master chapter6]# kubectl apply -f myapp-svc.yaml
service/myapp-svc created
3. Verification
[root@master chapter6]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   18d
myapp-svc    ClusterIP   10.111.104.25   <none>        80/TCP    6s
[root@master chapter6]# kubectl get endpoints myapp-svc
NAME        ENDPOINTS                                        AGE
myapp-svc   10.244.0.72:80,10.244.0.73:80,10.244.2.154:80    71s
4. Requesting service from the Service object
[root@master ~]# kubectl exec -it busybox sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
/home # curl http://10.111.104.25:80
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/home # for loop in 1 2 3 4; do curl http://10.111.104.25:80/hostname.html; done
/home # for loop in `seq 10`; do curl http://10.111.104.25:80/hostname.html; done
myapp-deploy-5cbd66595b-7s94h
myapp-deploy-5cbd66595b-fzgcr
myapp-deploy-5cbd66595b-fzgcr
myapp-deploy-5cbd66595b-fzgcr
myapp-deploy-5cbd66595b-fzgcr
myapp-deploy-5cbd66595b-fzgcr
myapp-deploy-5cbd66595b-fzgcr
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
II. Session affinity
1. Meaning of the sessionAffinity field
[root@master chapter6]# kubectl explain svc.spec.sessionAffinity
KIND:     Service
VERSION:  v1

FIELD:    sessionAffinity <string>

DESCRIPTION:
     Supports "ClientIP" and "None". Used to maintain session affinity. Enable
     client IP based session affinity. Must be ClientIP or None. Defaults to
     None. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
None: session affinity is not used; this is the default.
ClientIP: identifies clients by their source IP address and always schedules requests from the same source IP to the same Pod object.
2. Patching the earlier myapp-svc to use the session-affinity mechanism
[root@master chapter6]# kubectl patch service myapp-svc -p '{"spec": {"sessionAffinity": "ClientIP"}}'
service/myapp-svc patched
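The same change can be made declaratively in the manifest; a minimal sketch of the relevant spec fields — sessionAffinityConfig is optional, and timeoutSeconds defaults to 10800 seconds (3 hours):

spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # how long a client IP stays pinned to one Pod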
3. Verifying the session-affinity effect
[root@master chapter6]# kubectl exec -it busybox sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
/home # for loop in 1 2 3 4; do curl http://10.111.104.25:80/hostname.html; done
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
/home # for loop in `seq 10`; do curl http://10.111.104.25:80/hostname.html; done
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
myapp-deploy-5cbd66595b-nlpxq
III. Service discovery
1. Service discovery via environment variables
1) Kubernetes Service environment variables
For every Service resource, Kubernetes generates a series of environment variables, including the forms shown below; every Pod object created in the same namespace automatically receives these variables:
{SVCNAME}_SERVICE_HOST
{SVCNAME}_SERVICE_PORT
If the SVCNAME contains dashes, Kubernetes converts them to underscores when forming the variable names; the myapp-svc Service, for example, yields the MYAPP_SVC_ prefix seen below.
2) Docker Link-style environment variables
/ # printenv | grep MYAPP
MYAPP_SVC_PORT_80_TCP_ADDR=10.98.57.156
MYAPP_SVC_PORT_80_TCP_PORT=80
MYAPP_SVC_PORT_80_TCP_PROTO=tcp
MYAPP_SVC_PORT_80_TCP=tcp://10.98.57.156:80
MYAPP_SVC_SERVICE_HOST=10.98.57.156
MYAPP_SVC_PORT=tcp://10.98.57.156:80
MYAPP_SVC_SERVICE_PORT=80
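A Pod can consume these variables directly from its shell command line; a hypothetical sketch (the env-demo name and the wget invocation are illustrative, not from the original). Note that the Service must already exist when the Pod starts, because the variables are injected only at container creation:

apiVersion: v1
kind: Pod
metadata:
  name: env-demo          # hypothetical Pod name
spec:
  containers:
  - name: client
    image: busybox
    # the shell expands the injected Service variables at runtime
    command: ["sh", "-c", "wget -qO- http://${MYAPP_SVC_SERVICE_HOST}:${MYAPP_SVC_SERVICE_PORT}/"]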
2. ClusterDNS and service discovery
A representative excerpt from the cluster DNS component's log, in which queries are timing out against the upstream resolvers 218.30.19.40 and 8.8.8.8:

alertmanager-main-1.alertmanager-operated. A: read udp 10.244.0.82:39751->218.30.19.40:53: i/o timeout
alertmanager-main-2.alertmanager-operated. AAAA: read udp 10.244.0.82:39147->218.30.19.40:53: i/o timeout
alertmanager-main-1.alertmanager-operated. ANY: dial tcp 218.30.19.40:53: i/o timeout
alertmanager-main-2.alertmanager-operated. ANY: dial tcp 8.8.8.8:53: i/o timeout
common-service-weave-scope.weave-scope.svc. AAAA: read udp 10.244.0.82:38018->218.30.19.40:53: i/o timeout
common-service-weave-scope.weave-scope.svc. A: read udp 10.244.0.82:37913->218.30.19.40:53: i/o timeout
grafana.com. A: read udp 10.244.0.82:32968->218.30.19.40:53: i/o timeout
raw.githubusercontent.com. AAAA: read udp 10.244.0.82:37557->8.8.8.8:53: i/o timeout
1.0.244.10.in-addr.arpa. PTR: read udp 10.244.0.82:54808->218.30.19.40:53: i/o timeout
3. DNS-based service discovery
/home # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/home # nslookup myapp-svc.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      myapp-svc.default
Address 1: 10.111.104.25 myapp-svc.default.svc.cluster.local
/home # nslookup prometheus-operator.monitoring
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      prometheus-operator.monitoring
Address 1: 10.244.1.16 10-244-1-16.prometheus-operator.monitoring.svc.cluster.local
/home # nslookup kube-state-metrics.monitoring
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-state-metrics.monitoring
Address 1: 10.244.1.17 10-244-1-17.kube-state-metrics.monitoring.svc.cluster.local
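The short names above resolve only because of the search domains in resolv.conf; the fully qualified name is always unambiguous. An illustrative query (the output is reconstructed from the addresses shown above):

/home # nslookup myapp-svc.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      myapp-svc.default.svc.cluster.local
Address 1: 10.111.104.25 myapp-svc.default.svc.cluster.local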
IV. The four Service types
1. ClusterIP
Exposes the service on a cluster-internal IP address. The address is reachable only from inside the cluster and cannot be accessed by external clients (see Figure 6-8). This is the default Service type.
[root@master chapter6]# cat myapp-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
[root@master chapter6]# kubectl apply -f myapp-svc.yaml
service/myapp-svc unchanged
[root@master chapter6]# kubectl get svc myapp-svc
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
myapp-svc   ClusterIP   10.111.104.25   <none>        80/TCP    2d4h
2. NodePort
Built on top of ClusterIP, this type additionally exposes the service on a static port of every node's IP address. The Service is therefore still assigned a cluster IP address, which serves as the routing target for the NodePort.
Put simply, the NodePort type picks a port on the worker nodes' IP addresses and uses it to forward requests from clients outside the cluster to the target Service's ClusterIP and port.
A Service of this type can thus be accessed by client Pods inside the cluster, just like a ClusterIP Service, and also by external clients via the socket address <NodeIP>:<NodePort>.
1) Creating and verifying the resource:
[root@master chapter6]# cat myapp-svc-nodeport.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-svc-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30008
[root@master chapter6]# kubectl apply -f myapp-svc-nodeport.yaml
service/myapp-svc-nodeport created
[root@master chapter6]# kubectl get svc myapp-svc-nodeport
NAME                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
myapp-svc-nodeport   NodePort   10.107.241.246   <none>        80:30008/TCP   21s
2) Access from a browser
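The same check works from the command line against any node's address plus the NodePort; a sketch, where 192.168.1.10 stands in for one of your nodes' IP addresses and the response line is illustrative:

$ curl http://192.168.1.10:30008/hostname.html
myapp-deploy-5cbd66595b-7s94h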
3. LoadBalancer (e.g., Alibaba Cloud ELB)
This type points at a real load-balancer device that exists outside the cluster; the device sends request traffic into the cluster through the NodePort on the worker nodes.
[root@master chapter6]# cat myapp-svc-lb.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-svc-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
An ELB instance in the Alibaba Cloud environment, for example, is exactly such a load-balancer device. The advantage of this type is that it can schedule requests from external clients across the NodePorts of all nodes, instead of relying on each client to decide which node to connect to, thereby avoiding the service becoming unavailable because a client pinned itself to a failed node.
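Note that on a cluster without a cloud-provider integration the EXTERNAL-IP simply stays pending; a sketch of what to expect (all values below are illustrative):

[root@master chapter6]# kubectl get svc myapp-svc-lb
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
myapp-svc-lb   LoadBalancer   10.100.20.8   <pending>     80:31234/TCP   5s

On Alibaba Cloud and other supported providers, the cloud controller later replaces <pending> with the address of the provisioned load balancer.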
4. ExternalName: letting in-cluster Pods reach external resources
1) Resource manifest
[root@master chapter6]# cat external-redis.yaml
kind: Service
apiVersion: v1
metadata:
  name: external-www-svc
spec:
  type: ExternalName
  externalName: www.kubernetes.io
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
    nodePort: 0
  selector: {}
2) Creating it
[root@master chapter6]# kubectl apply -f external-redis.yaml
service/external-www-svc created
3) Verifying the result
[root@master chapter6]# kubectl get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP         PORT(S)    AGE
external-www-svc   ExternalName   <none>          www.kubernetes.io   6379/TCP   7s
kubernetes         ClusterIP      10.96.0.1       <none>              443/TCP    19d
myapp-svc          ClusterIP      10.111.104.25   <none>              80/TCP     95m
4) Resolution check
/home # nslookup external-www-svc
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      external-www-svc
Address 1: 147.75.40.148
[root@master chapter6]# ping www.kubernetes.io
PING kubernetes.io (147.75.40.148) 56(84) bytes of data.
64 bytes from 147.75.40.148 (147.75.40.148): icmp_seq=1 ttl=49 time=180 ms
64 bytes from 147.75.40.148 (147.75.40.148): icmp_seq=2 ttl=49 time=180 ms
64 bytes from 147.75.40.148 (147.75.40.148): icmp_seq=3 ttl=49 time=180 ms
V. Headless Services: reaching Pods directly without going through the Service
When clients need to access all the Pod resources behind a Service directly, the Service should expose each Pod's IP address to the clients rather than the intermediate Service object's ClusterIP; a Service of this kind is called a headless Service.
1. Creating the resource
[root@master chapter6]# cat myapp-headless-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-headless-svc
spec:
  clusterIP: None    # simply set the clusterIP field to "None"
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
    name: httpport
All that is required is setting the clusterIP field to "None".
2. Creating and running it
[root@master chapter6]# kubectl apply -f myapp-headless-svc.yaml
service/myapp-headless-svc created
3. The key point: whether there is a ClusterIP
[root@master chapter6]# kubectl describe svc myapp-headless-svc
.....
Endpoints:  10.244.0.81:80,10.244.0.83:80
.....
The difference from the other types: there is no ClusterIP.
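Listing the Service shows this directly; a sketch of the expected output (the AGE value is illustrative):

[root@master chapter6]# kubectl get svc myapp-headless-svc
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
myapp-headless-svc   ClusterIP   None         <none>        80/TCP    2m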
Because a headless Service object has no ClusterIP, kube-proxy does not need to handle requests for it, and there is no load balancing or proxying to perform. Where the frontend application has its own service-discovery mechanism, a headless Service removes the need to define a ClusterIP at all.
How IP addresses are configured for such a Service resource depends on how its label selector is defined.
VI. Pod resource discovery
/home # nslookup myapp-headless-svc
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      myapp-headless-svc
Address 1: 10.244.0.81 10-244-0-81.myapp-svc.default.svc.cluster.local
Address 2: 10.244.0.83 10-244-0-83.myapp-headless-svc.default.svc.cluster.local
With a label selector: the endpoints controller creates Endpoints records in the API, and the ClusterDNS service resolves A records directly to the IP addresses of the Pod objects backing the Service.
Without a label selector: the endpoints controller creates no Endpoints records in the API, and the ClusterDNS configuration falls into two cases:
1) for ExternalName-type Services, a CNAME record is created;
2) for the other three types, records are created for all Endpoints objects that share the Service's name (see the sketch below).
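For the no-selector case, the Endpoints object is created by hand and paired with the Service through a shared name; a minimal sketch (the external-db name and the backend address 192.168.100.5 are hypothetical):

kind: Service
apiVersion: v1
metadata:
  name: external-db
spec:
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-db          # must match the Service's name
subsets:
- addresses:
  - ip: 192.168.100.5        # hypothetical external backend
  ports:
  - port: 3306

Because the Service defines no selector, the endpoints controller leaves this hand-written Endpoints object alone, and traffic arriving at the Service is forwarded to the listed address.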