This article is part of the container technology learning series; see the series index.
1. Pod Controllers
1.1 Introduction
A Pod controller is an intermediate layer for managing pods. It ensures that pod resources match their desired state: when a pod fails, the controller tries to restart it according to the restart policy, and if the restart policy cannot recover it, the pod is recreated.
1.2 Types of Pod controllers
- ReplicationController (RC): guarantees that a specific number of pod replicas is running at all times; if there are too many, RC kills some, and if there are too few, RC creates new ones.
- ReplicaSet (RS): creates the specified number of pod replicas on the user's behalf, ensures the replica count matches the desired state, and supports rolling automatic scale-out and scale-in.
- Deployment (important): works on top of ReplicaSet and manages stateless applications; currently the best controller for that purpose. It supports rolling updates and rollbacks and provides declarative configuration.
- DaemonSet: ensures that every node in the cluster runs exactly one copy of a specific pod; typically used for system-level background tasks, such as ELK agents.
- Job: runs to completion and then exits; no restart or rebuild is needed.
- CronJob: controls periodic tasks that do not need to run continuously in the background.
- StatefulSet: manages stateful applications.
This article mainly covers the ReplicaSet, Deployment, and DaemonSet controllers, and also walks through StatefulSet.
2. ReplicaSet
2.1 Getting to know ReplicaSet
(1) What is a ReplicaSet?
ReplicaSet is the next-generation replica controller, an upgraded version of ReplicationController (RC). The only difference between ReplicaSet and ReplicationController is selector support: ReplicaSet supports the set-based selector requirements described in the labels user guide, while ReplicationController only supports equality-based selectors.
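To make the difference concrete, here is a rough, hypothetical snippet (the label keys and values are illustrative and not taken from the demo below): an equality-based selector can only match exact key/value pairs, while a set-based selector can also use matchExpressions with operators such as In, NotIn, and Exists.

# equality-based selector (the only form ReplicationController supports)
selector:
  app: myapp
  release: canary

# set-based selector (supported by ReplicaSet via matchLabels / matchExpressions)
selector:
  matchLabels:
    app: myapp
  matchExpressions:
  - {key: release, operator: In, values: [canary, stable]}
  - {key: environment, operator: Exists}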
(2) How to use a ReplicaSet
Most kubectl commands that support ReplicationController also support ReplicaSets, with the exception of rolling-update; if you need rolling updates, use a Deployment instead.
Although ReplicaSets can be used on their own, they are mainly used by Deployments as the mechanism for creating, deleting, and updating pods. When you use a Deployment, you don't have to worry about the ReplicaSets it creates, because the Deployment manages them for you.
(3) When to use a ReplicaSet?
A ReplicaSet ensures that the specified number of pods is running. However, Deployment is a higher-level concept that manages ReplicaSets and adds features such as pod updates, so we recommend using Deployments to manage ReplicaSets unless you need custom update orchestration.
This means you may never need to manipulate ReplicaSet objects directly; use a Deployment instead. This is covered in more detail in the Deployment section below.
2.2 Key fields in a ReplicaSet manifest
- apiVersion: apps/v1 — API version
- kind: ReplicaSet — resource type
- metadata — object metadata
- spec — desired state
- minReadySeconds: minimum number of seconds a newly created pod must be ready before it is considered available
- replicas: number of replicas; defaults to 1
- selector: label selector
- template: pod template (required)
- metadata: metadata in the template
- spec: desired state in the template
- status — current state
2.3 Demo: creating a simple ReplicaSet
(1) Write the YAML file and create the resource
Create a simple ReplicaSet that starts 2 pods:
[root@master manifests]# vim rs-damo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master manifests]# kubectl create -f rs-damo.yaml
replicaset.apps/myapp created
(2) Verify
--- query the ReplicaSet (rs)
[root@master manifests]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   2         2         2       23s
--- query the pods
[root@master manifests]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-r4ss4   1/1     Running   0          25s
myapp-zjc5l   1/1     Running   0          26s
--- inspect pod details; the labels from the template have taken effect
[root@master manifests]# kubectl describe pod myapp-r4ss4
Name:               myapp-r4ss4
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node2/192.168.130.105
Start Time:         Thu, 06 Sep 2018 14:57:23 +0800
Labels:             app=myapp
                    environment=qa
                    release=canary
... ...
--- verify the service
[root@master manifests]# curl 10.244.2.13
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
(3) Pod reconciliation principle: remove extras, replace missing
① If a pod is deleted, a new one is immediately created to replace it
[root@master manifests]# kubectl delete pods myapp-zjc5l
pod "myapp-zjc5l" deleted
[root@master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-r4ss4   1/1     Running   0          33s
myapp-mdjvh   1/1     Running   0          10s
② If another pod happens to match the ReplicaSet's label selector, the ReplicaSet will kill one of the pods carrying that label
--- start an arbitrary extra pod
[root@master manifests]# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
myapp-hxgbh   1/1     Running   0          7m    app=myapp,environment=qa,release=canary
myapp-mdjvh   1/1     Running   0          6m    app=myapp,environment=qa,release=canary
pod-test      1/1     Running   0          13s   app=myapp,tier=frontend
--- add the release=canary label to pod-test
[root@master manifests]# kubectl label pods pod-test release=canary
pod/pod-test labeled
--- one pod with that label is killed at random
[root@master manifests]# kubectl get pods --show-labels
NAME          READY   STATUS        RESTARTS   AGE   LABELS
myapp-hxgbh   1/1     Running       0          8m    app=myapp,environment=qa,release=canary
myapp-mdjvh   1/1     Running       0          7m    app=myapp,environment=qa,release=canary
pod-test      0/1     Terminating   0          1m    app=myapp,release=canary,tier=frontend
2.4 Scaling a ReplicaSet up/down
(1) Use kubectl edit to change the ReplicaSet's replica count to 5, which scales it out on the fly
[root@master manifests]# kubectl edit rs myapp
... ...
spec:
  replicas: 5
... ...
replicaset.extensions/myapp edited
(2) Verify
[root@master manifests]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
client        0/1     Error     0          1d
myapp-bck7l   1/1     Running   0          16s
myapp-h8cqr   1/1     Running   0          16s
myapp-hfb72   1/1     Running   0          6m
myapp-r4ss4   1/1     Running   0          9m
myapp-vvpgf   1/1     Running   0          16s
2.5 Upgrading the version in place with a ReplicaSet
(1) Use kubectl edit to change the container image to v2, which upgrades the version in place
[root@master manifests]# kubectl edit rs myapp
... ...
spec:
  containers:
  - image: ikubernetes/myapp:v2
... ...
replicaset.extensions/myapp edited
(2) Query the ReplicaSet; the image has been changed
[root@master manifests]# kubectl get rs -o wide
NAME    DESIRED   CURRENT   READY   AGE   CONTAINERS        IMAGES                 SELECTOR
myapp   5         5         5       11m   myapp-container   ikubernetes/myapp:v2   app=myapp,release=canary
(3) However, the edit alone does not upgrade the running pods
The pods must be deleted; the new pods created to replace them will run the upgraded version.
This allows a simple grey (canary-style) release: delete one pod, and an upgraded pod is automatically started in its place.
--- access a pod that has not been deleted: still v1
[root@master manifests]# curl 10.244.2.15
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
--- delete one pod, then access the newly created pod: it runs v2
[root@master manifests]# kubectl delete pod myapp-bck7l
pod "myapp-bck7l" deleted
[root@master ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-hxgbh   1/1     Running   0          20m   10.244.1.17   node1
[root@master manifests]# curl 10.244.1.17
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
3. Deployment
3.1 Deployment overview
(1) Introduction
Deployment provides a declarative way to define Pods and ReplicaSets, replacing the older ReplicationController and making application management convenient.
You only describe the target state you want in the Deployment, and the Deployment controller changes the actual state of the Pods and ReplicaSets to match it. You can define a brand-new Deployment to create a ReplicaSet, or delete an existing Deployment and create a new one to replace it.
Note: you should not manually manage the ReplicaSets created by a Deployment; doing so oversteps the Deployment controller's responsibilities.
(2) Typical use cases include
- Creating a ReplicaSet with a Deployment. The ReplicaSet creates the pods in the background; check the rollout status to see whether it succeeded or failed.
- Declaring a new state for the pods by updating the Deployment's PodTemplateSpec. This creates a new ReplicaSet, and the Deployment moves pods from the old ReplicaSet to the new one at a controlled rate.
- Rolling upgrades and rollbacks: if the current state is not stable, roll back to an earlier Deployment revision. Each rollback updates the Deployment's revision.
- Scaling out and in: scale the Deployment up to handle higher load.
- Pausing and resuming a Deployment: pause it to apply multiple fixes to the PodTemplateSpec, then resume the rollout.
- Using the Deployment status to determine whether a rollout is stuck.
- Cleaning up old ReplicaSets that are no longer needed.
3.2 Key fields in a Deployment manifest
- apiVersion: apps/v1 — API version
- kind: Deployment — resource type
- metadata — object metadata
- spec — desired state
- -------------- fields also present in ReplicaSet ---------------
- minReadySeconds: minimum number of seconds a newly created pod must be ready before it is considered available
- replicas: number of replicas; defaults to 1
- selector: label selector
- template: pod template (required)
- metadata: metadata in the template
- spec: desired state in the template
- -------------- fields specific to Deployment ---------------
- strategy: update strategy; how existing pods are replaced with new ones (see the sketch after this list)
- Recreate: kill all existing pods, then create new ones
- RollingUpdate: rolling update
- maxSurge: maximum number of pods that can be scheduled above the desired count, e.g. 5 or 10%
- maxUnavailable: maximum number of pods that may be unavailable during the update
- revisionHistoryLimit: number of old ReplicaSets to retain to allow rollback; defaults to 10
- paused: indicates that the Deployment is paused and the Deployment controller should not process it
- progressDeadlineSeconds: maximum time in seconds for the Deployment to make progress before it is considered failed
- status — current state
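As a hedged sketch of how these Deployment-specific fields fit together (the values below are illustrative and not part of the later demo), a Deployment spec might set them like this:

spec:
  replicas: 3
  revisionHistoryLimit: 10        # keep 10 old ReplicaSets for rollback
  progressDeadlineSeconds: 600    # consider the rollout failed after 600s without progress
  strategy:
    type: RollingUpdate           # or Recreate
    rollingUpdate:
      maxSurge: 25%               # extra pods allowed above the desired count
      maxUnavailable: 25%         # pods allowed to be unavailable during the update
  # selector and template as in the demo in section 3.3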
3.3 Demo: creating a simple Deployment
(1) Create a simple Deployment that starts 2 pods
[root@master manifests]# vim deploy-damo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master manifests]# kubectl apply -f deploy-damo.yaml
deployment.apps/myapp-deploy configured
Note: kubectl apply creates the resource declaratively, much like create, but apply can be run repeatedly against the same file, while create cannot.
(2) Verify
--- query the Deployment
[root@master manifests]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp-deploy   2         2         2            2           14s
--- query the ReplicaSet; the Deployment creates a ReplicaSet first
[root@master manifests]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
myapp-deploy-69b47bc96d   2         2         2       28s
--- query the pods; the ReplicaSet then creates the pods
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-bm8zc   1/1     Running   0          18s
myapp-deploy-69b47bc96d-pjr5v   1/1     Running   0          18s
3.4 Scaling a Deployment up/down
There are two ways to do this.
(1) Method 1: edit the YAML file directly, changing the replica count to 3
[root@master manifests]# vim deploy-damo.yaml
... ...
spec:
  replicas: 3
... ...
[root@master manifests]# kubectl apply -f deploy-damo.yaml
deployment.apps/myapp-deploy configured
Verify: there are now 3 pods
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-bcdnq   1/1     Running   0          25s
myapp-deploy-69b47bc96d-bm8zc   1/1     Running   0          2m
myapp-deploy-69b47bc96d-pjr5v   1/1     Running   0          2m
(2) Method 2: scale by patching with kubectl patch
Unlike method 1, this does not modify the YAML file, which is convenient for everyday testing.
However, the JSON argument format is fiddly and easy to get wrong.
[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
deployment.extensions/myapp-deploy patched
Verify: there are now 5 pods
[root@master ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-67f6f6b4dc-2756p   1/1     Running   0          26s
myapp-deploy-67f6f6b4dc-2lkwr   1/1     Running   0          26s
myapp-deploy-67f6f6b4dc-knttd   1/1     Running   0          21m
myapp-deploy-67f6f6b4dc-ms7t2   1/1     Running   0          21m
myapp-deploy-67f6f6b4dc-vl2th   1/1     Running   0          21m
3.5 Rolling out a new version with a Deployment
(1) Edit deploy-damo.yaml directly
[root@master manifests]# vim deploy-damo.yaml
... ...
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
... ...
(2) After re-applying the file with kubectl apply -f deploy-damo.yaml, you can watch the rolling upgrade
[root@master ~]# kubectl get pods -w
You can see it is a rolling update: one old pod is stopped and a new (upgraded) pod is started, then the next one, and so on.
(3) Verify: access the service; the version has been upgraded
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-deploy-67f6f6b4dc-6lv66   1/1     Running   0          2m    10.244.1.75   node1
[root@master ~]# curl 10.244.1.75
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
3.6 Changing the Deployment update strategy
(1) Method 1: edit the YAML file
[root@master manifests]# vim deploy-damo.yaml
... ...
  strategy:
    rollingUpdate:
      maxSurge: 1          # update one pod at a time
      maxUnavailable: 0    # at most 0 pods unavailable during the update
... ...
(2) Method 2: patch the update strategy
[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/myapp-deploy patched
(3) Verify: inspect the details
[root@master manifests]# kubectl describe deployment myapp-deploy
... ...
RollingUpdateStrategy:  0 max unavailable, 1 max surge
... ...
(4) Upgrade to v3
① Canary release: update one pod first, then pause immediately; once the new version proves to run fine, continue the rollout
[root@master manifests]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
deployment.extensions/myapp-deploy image updated   # one pod is updated
deployment.extensions/myapp-deploy paused          # the rollout is paused
② Once the new version runs fine, resume the paused rollout
[root@master manifests]# kubectl rollout resume deployment myapp-deploy
deployment.extensions/myapp-deploy resumed
③ You can monitor the process the whole time
[root@master ~]# kubectl rollout status deployment myapp-deploy    # prints rollout progress
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 2 out of 5 new replicas have been updated...
--- the pod-level progress can also be watched with get
[root@master ~]# kubectl get pods -w
④ Verify: access any pod's service; the version has been upgraded
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-deploy-6bdcd6755d-2bnsl   1/1     Running   0          1m    10.244.1.77   node1
[root@master ~]# curl 10.244.1.77
Hello MyApp | Version: v3 | <a href="hostname.html">Pod Name</a>
3.7 Rolling back a Deployment
(1) Commands
Show the revision history:
$ kubectl rollout history deployment deployment_name
Roll back with undo; --to-revision=N rolls back to revision N:
$ kubectl rollout undo deployment deployment_name --to-revision=N
(2) Demo
--- show the revision history
[root@master manifests]# kubectl rollout history deployment myapp-deploy
deployments "myapp-deploy"
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
--- roll back to revision 1
[root@master manifests]# kubectl rollout undo deployment myapp-deploy --to-revision=1
deployment.extensions/myapp-deploy
[root@master manifests]# kubectl rollout history deployment myapp-deploy
deployments "myapp-deploy"
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         <none>
(3) Verify: the Deployment is back on v1
[root@master manifests]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
myapp-deploy-67f6f6b4dc   0         0         0       18h   myapp        ikubernetes/myapp:v2   app=myapp,pod-template-hash=2392926087,release=canary
myapp-deploy-69b47bc96d   5         5         5       18h   myapp        ikubernetes/myapp:v1   app=myapp,pod-template-hash=2560367528,release=canary
myapp-deploy-6bdcd6755d   0         0         0       10m   myapp        ikubernetes/myapp:v3   app=myapp,pod-template-hash=2687823118,release=canary
4. DaemonSet
4.1 DaemonSet overview
(1) Introduction
A DaemonSet guarantees that a copy of a container runs on every node, and is commonly used to deploy cluster-wide logging, monitoring, or other system management applications.
(2) Typical applications include
- log collection, e.g. fluentd, logstash
- node monitoring, e.g. Prometheus Node Exporter, collectd, New Relic agent, Ganglia gmond
- system daemons, e.g. kube-proxy, kube-dns, glusterd, ceph
4.2 Key fields in a DaemonSet manifest
- apiVersion: apps/v1 — API version
- kind: DaemonSet — resource type
- metadata — object metadata
- spec — desired state
- -------------- fields also present in ReplicaSet ---------------
- minReadySeconds: minimum number of seconds a newly created pod must be ready before it is considered available
- selector: label selector
- template: pod template (required)
- metadata: metadata in the template
- spec: desired state in the template
- -------------- fields specific to DaemonSet ---------------
- revisionHistoryLimit: number of old history revisions to retain to allow rollback; defaults to 10
- updateStrategy: strategy for replacing existing DaemonSet pods with new ones (see the sketch after this list)
- status — current state
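As a hedged sketch (the values are illustrative, not from the demo below), the updateStrategy block supports the default RollingUpdate type as well as OnDelete, where pods are only replaced after you delete them manually:

spec:
  updateStrategy:
    type: RollingUpdate      # or OnDelete
    rollingUpdate:
      maxUnavailable: 1      # at most one node's pod unavailable during the update
  # selector and template as in the demo in section 4.3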
4.3 Demo: creating a simple DaemonSet
(1) Create a simple DaemonSet whose pods run a filebeat log-collection service in the background on every node
[root@master manifests]# vim ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@master manifests]# kubectl apply -f ds-demo.yaml
daemonset.apps/filebeat-ds created
(2) Verify
[root@master ~]# kubectl get ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
filebeat-ds   2         2         2       2            2           <none>          6m
[root@master ~]# kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
filebeat-ds-r25hh   1/1     Running   0          4m
filebeat-ds-vvntb   1/1     Running   0          4m
[root@master ~]# kubectl exec -it filebeat-ds-r25hh -- /bin/sh
/ # ps aux
PID   USER     TIME   COMMAND
    1 root     0:00   /usr/local/bin/filebeat -e -c /etc/filebeat/filebeat.yml
4.4 Upgrading a DaemonSet's version
(1) Use kubectl set image to update the pod image and upgrade the version
[root@master ~]# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
daemonset.extensions/filebeat-ds image updated
(2) Verify: the upgrade succeeded
[root@master ~]# kubectl get ds -o wide
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES                              SELECTOR
filebeat-ds   2         2         2       2            2           <none>          7m    filebeat     ikubernetes/filebeat:5.6.6-alpine   app=filebeat,release=stable
5. StatefulSet
5.1 Getting to know StatefulSet
(1) Introduction
StatefulSet is designed for stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include:
- Stable persistent storage: after a pod is rescheduled it can still access the same persistent data, implemented with PVCs.
- Stable network identity: after a pod is rescheduled its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP).
- Ordered deployment and scaling: pods are ordered, and deployment or scaling proceeds in the defined order (from 0 to N-1; all earlier pods must be Running and Ready before the next pod starts), implemented with init containers.
- Ordered shrinking and deletion (from N-1 down to 0).
(2) Three required components
From the use cases above, a StatefulSet consists of the following parts:
- a Headless Service that defines the network identity (DNS domain)
- the StatefulSet controller that defines the application itself
- volumeClaimTemplates, storage claim templates used to provide persistent storage from PersistentVolumes
5.2 Creating pods with a StatefulSet
5.2.1 Create the PVs in advance
See the PV and PVC article for details. Create 5 PVs; an NFS server is required.
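The PV manifests below point at an NFS host named nfs exporting /data/volumes/v1 through v5. As a hedged sketch (the export options are an assumption, not from the original), the NFS server's /etc/exports might look like this:

# /etc/exports on the NFS server (host "nfs") -- assumed layout
/data/volumes/v1  *(rw,no_root_squash)
/data/volumes/v2  *(rw,no_root_squash)
/data/volumes/v3  *(rw,no_root_squash)
/data/volumes/v4  *(rw,no_root_squash)
/data/volumes/v5  *(rw,no_root_squash)

# re-export after editing
exportfs -arv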
[root@master volume]# vim pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: nfs
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 15Gi
[root@master volume]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Available                                   3s
pv002   5Gi        RWO            Retain           Available                                   3s
pv003   5Gi        RWO,RWX        Retain           Available                                   3s
pv004   10Gi       RWO,RWX        Retain           Available                                   3s
pv005   15Gi       RWO,RWX        Retain           Available                                   3s
5.2.2 Write the StatefulSet manifest and create it
[root@master pod_controller]# vim statefulset-demo.yaml
# Headless Service
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
# StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  # volumeClaimTemplates
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
[root@master pod_controller]# kubectl apply -f statefulset-demo.yaml
service/myapp created
statefulset.apps/myapp created
5.2.3 Query and verify the pods
--- the headless Service has been created
[root@master pod_controller]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   173d
myapp        ClusterIP   None         <none>        80/TCP    3s
--- the StatefulSet has been created
[root@master pod_controller]# kubectl get sts
NAME    DESIRED   CURRENT   AGE
myapp   3         3         6s
--- the PVCs have been bound to PVs
[root@master pod_controller]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           9s
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       8s
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       6s
--- three of the PVs are now bound
[root@master pod_controller]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                        STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                            21s
pv002   5Gi        RWO            Retain           Bound       default/myappdata-myapp-0                            21s
pv003   5Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-2                            21s
pv004   10Gi       RWO,RWX        Retain           Available                                                        21s
pv005   15Gi       RWO,RWX        Retain           Available                                                        21s
--- three pods are running
[root@master pod_controller]# kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-0   1/1     Running   0          16s   10.244.1.127   node1
myapp-1   1/1     Running   0          15s   10.244.2.124   node2
myapp-2   1/1     Running   0          13s   10.244.1.128   node1
5.3 Scaling a StatefulSet up and down
This can be done either with the scale command or by patching with kubectl patch.
5.3.1 Scaling up
Scale from the original 3 pods up to 5.
--- ① with the scale command
[root@master ~]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled
--- ② or with a patch
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"replicas":5}}'
statefulset.apps/myapp patched
[root@master pod_controller]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          11m
myapp-1   1/1     Running   0          11m
myapp-2   1/1     Running   0          11m
myapp-3   1/1     Running   0          9s
myapp-4   1/1     Running   0          7s
[root@master pod_controller]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           11m
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       11m
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       11m
myappdata-myapp-3   Bound    pv004    10Gi       RWO,RWX                       13s
myappdata-myapp-4   Bound    pv005    15Gi       RWO,RWX                       11s
[root@master pod_controller]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Bound    default/myappdata-myapp-1                            17m
pv002   5Gi        RWO            Retain           Bound    default/myappdata-myapp-0                            17m
pv003   5Gi        RWO,RWX        Retain           Bound    default/myappdata-myapp-2                            17m
pv004   10Gi       RWO,RWX        Retain           Bound    default/myappdata-myapp-3                            17m
pv005   15Gi       RWO,RWX        Retain           Bound    default/myappdata-myapp-4                            17m
5.3.2 Scaling down
Scale down from 5 pods to 2.
--- ① with the scale command
[root@master ~]# kubectl scale sts myapp --replicas=2
statefulset.apps/myapp scaled
--- ② or with a patch
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'
statefulset.apps/myapp patched
[root@master pod_controller]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          15m
myapp-1   1/1     Running   0          15m
--- the PVs and PVCs are not deleted, so the data persists
[root@master pod_controller]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           15m
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       15m
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       15m
myappdata-myapp-3   Bound    pv004    10Gi       RWO,RWX                       4m
myappdata-myapp-4   Bound    pv005    15Gi       RWO,RWX                       4m
5.4 Version upgrades
5.4.1 Upgrade configuration: rollingUpdate.partition (partitioned updates)
[root@master ~]# kubectl explain sts.spec.updateStrategy.rollingUpdate.partition
KIND:     StatefulSet
VERSION:  apps/v1

FIELD:    partition <integer>

DESCRIPTION:
     Partition indicates the ordinal at which the StatefulSet should be
     partitioned. Default value is 0.
Explanation: if partition is set to n, only pods whose ordinal is n or higher are updated; n refers to the pod ordinal, and the default is 0 (all pods).
You can perform the upgrade either by editing the YAML manifest or by patching.
5.4.2 Performing a canary upgrade
(1) Upgrade one pod first
First scale the StatefulSet back up to 5 pods.
① Patch partition to 4 so that only pods with ordinal 4 or higher are updated, i.e. only the 5th pod (myapp-4). If the new version has problems, roll back immediately; if it runs fine, roll the upgrade out everywhere.
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
statefulset.apps/myapp patched
--- verify
[root@master ~]# kubectl describe sts myapp
Name:               myapp
Namespace:          default
... ...
Replicas:           5 desired | 5 total
Update Strategy:    RollingUpdate
  Partition:        4
... ...
② Upgrade
[root@master ~]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
--- the pod template image is now v2
[root@master ~]# kubectl get sts -o wide
NAME    DESIRED   CURRENT   AGE   CONTAINERS   IMAGES
myapp   5         5         21h   myapp        ikubernetes/myapp:v2
③ Verify
--- the 5th pod (myapp-4) has been upgraded
[root@master ~]# kubectl get pods myapp-4 -o yaml |grep image
  - image: ikubernetes/myapp:v2
--- the first 4 pods are still on v1
[root@master ~]# kubectl get pods myapp-3 -o yaml |grep image
  - image: ikubernetes/myapp:v1
[root@master ~]# kubectl get pods myapp-0 -o yaml |grep image
  - image: ikubernetes/myapp:v1
(2) Upgrade the remaining pods
--- simply set partition back to 0
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
statefulset.apps/myapp patched
--- verify: all pods are now upgraded
[root@master ~]# kubectl get pods myapp-0 -o yaml |grep image
  - image: ikubernetes/myapp:v2