After an autonomous Pod object is scheduled to a target worker node, the kubelet on that node monitors the liveness of its containers: if a container's main process crashes, the kubelet restarts that container automatically. The kubelet cannot, however, perceive container errors that do not crash the main process; detecting those depends on the liveness probes defined on the Pod resource object, which let the kubelet learn of such failures. Furthermore, if the Pod is deleted, or the worker node itself fails (every worker node runs a kubelet, so when the node is down its kubelet is unavailable and the Pod's health can no longer be guaranteed), a controller is needed to handle the corresponding container restarts and reconfiguration.
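As a concrete illustration of the liveness probing mentioned above, a Pod manifest can declare a livenessProbe so the kubelet can detect failures beyond a main-process crash. This is only a hypothetical sketch (the Pod name liveness-demo and the probe parameters are illustrative, not from the examples in this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo            # hypothetical Pod name, for illustration only
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    livenessProbe:               # lets the kubelet detect failures other than a main-process crash
      httpGet:
        path: /                  # probe the container's web root over HTTP
        port: 80
      initialDelaySeconds: 5     # wait 5s after startup before the first probe
      periodSeconds: 10          # probe every 10 seconds
```

When the probe fails repeatedly, the kubelet restarts the container; this still does not cover Pod deletion or node failure, which is where the controllers below come in.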
Common workload controllers
Pod controllers are provided by the kube-controller-manager component on the master. Common controllers of this kind include:
- ReplicationController / ReplicaSet: creates the specified number of Pod replicas on the user's behalf, ensures the replica count matches the desired state, and supports rolling automatic scale-up and scale-down.
- Deployment: works on top of ReplicaSet to manage stateless applications and is currently the best controller for that purpose; it supports rolling updates and rollbacks, and provides declarative configuration.
- DaemonSet: ensures that every node in the cluster runs exactly one replica of a particular Pod; commonly used for system-level background tasks, for example an ELK agent.
- StatefulSet: manages stateful applications.
- Job: runs to completion and then exits; no restart or re-creation is needed.
- CronJob: controls periodic tasks that do not need to run continuously in the background.
Overview of Pod controllers
One of Kubernetes' core functions is to ensure that the current state (status) of every resource object matches the state the user desires (spec), continuously reconciling the current state toward the desired state to manage containerized applications. That is the job of kube-controller-manager. Once instantiated as a concrete controller object, each controller continuously monitors the current state of the relevant resource objects through the interfaces provided by the API Server, and whenever the system state changes because of failures, updates, or other causes, it drives the resource's current state to migrate toward and converge on the desired state.
List-Watch is one of Kubernetes' core mechanisms. When the state of a resource object changes, the API Server writes the change to etcd and proactively notifies the relevant client programs through a level-triggered mechanism, ensuring that they never miss an event. A controller monitors changes to its target resource objects in real time through the API Server's watch interface and performs reconciliation, but it never interacts with other controllers.
Pods and Pod controllers
A Pod controller resource continuously monitors the Pod resource objects running in the cluster to ensure that the resources under its control strictly match the user's desired state, for example that the number of replicas exactly equals the expectation. A Pod controller resource usually contains at least three basic components:
- Label selector: matches and associates Pod resource objects, and counts the Pods under the controller's management accordingly.
- Desired replica count: the number of Pod resource objects expected to be running in the cluster at all times.
- Pod template: the template resource used to create new Pod resource objects.
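The three components map directly onto fields of a controller manifest. A minimal sketch (the name and labels here are hypothetical, chosen only to show the structure):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: sketch             # hypothetical controller name
spec:
  replicas: 3              # component 2: desired replica count
  selector:                # component 1: label selector that matches and counts managed Pods
    matchLabels:
      app: sketch
  template:                # component 3: Pod template used to create new Pod objects
    metadata:
      labels:
        app: sketch        # must satisfy the selector above
    spec:
      containers:
      - name: app
        image: ikubernetes/myapp:v1
```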
Overview of ReplicaSet
ReplicaSet replaces the ReplicationController found in earlier versions; its functionality is essentially the same as ReplicationController's:
- Keep the number of Pod resource objects exactly at the desired value: a ReplicaSet ensures that the number of Pod replicas it controls precisely matches the desired count defined in its configuration, automatically creating any that are missing and terminating any surplus.
- Keep Pods running healthily: when it detects that a Pod object under its control has become unavailable because its worker node failed, it automatically asks the scheduler to create the missing Pod replica on another worker node.
- Elastic scaling: the number of Pod resource objects can be scaled up or down dynamically through the ReplicaSet controller; when necessary, Pod scale can also be adjusted automatically by an HPA controller.
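A minimal sketch of the HPA mentioned above, assuming a working metrics pipeline and a hypothetical target Deployment named myapp (the name, bounds, and threshold are illustrative, using the autoscaling/v1 API):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa                       # hypothetical HPA name
spec:
  scaleTargetRef:                       # the workload whose replica count the HPA adjusts
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80    # scale out when average CPU use exceeds 80%
```

In practice an HPA usually targets a Deployment rather than a bare ReplicaSet, so that autoscaling composes with rolling updates.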
The spec field typically nests the following attribute fields:
- replicas <integer>: the desired number of Pod object replicas.
- selector <Object>: the label selector the controller uses to match its Pod object replicas; both the matchLabels and matchExpressions matching mechanisms are supported.
- template <Object>: the Pod resource information used when defining Pods.
- minReadySeconds <integer>: how long after startup a Pod must run before it is considered available; defaults to 0 seconds.
#(1) View the ReplicaSet manifest definition rules from the command line
[root@k8s-master ~]# kubectl explain rs
[root@k8s-master ~]# kubectl explain rs.spec
[root@k8s-master ~]# kubectl explain rs.spec.template

#(2) Create a ReplicaSet example
[root@k8s-master ~]# vim manfests/rs-demo.yaml
apiVersion: apps/v1                 #API version
kind: ReplicaSet                    #resource type: ReplicaSet
metadata:                           #metadata
  name: myapp
  namespace: default
spec:                               #ReplicaSet spec
  replicas: 2                       #desired replica count: 2
  selector:                         #label selector matching the Pods
    matchLabels:
      app: myapp
      release: canary
  template:                         #Pod template
    metadata:                       #Pod metadata
      name: myapp-pod               #custom Pod name
      labels:                       #Pod labels; must include the labels defined in the selector's match rules above, extra labels are allowed
        app: myapp
        release: canary
    spec:                           #Pod spec
      containers:                   #container definitions
      - name: myapp-containers      #container name
        image: ikubernetes/myapp:v1 #container image
        imagePullPolicy: IfNotPresent   #image pull policy
        ports:                      #exposed ports
        - name: http                #port name
          containerPort: 80

#(3) Create the Pods defined by the ReplicaSet
[root@k8s-master ~]# kubectl apply -f manfests/rs-demo.yaml
replicaset.apps/myapp created
[root@k8s-master ~]# kubectl get rs     #view the created ReplicaSet controller
NAME    DESIRED   CURRENT   READY   AGE
myapp   2         2         2       3m23s
[root@k8s-master ~]# kubectl get pods   #the Pod names show the naming rule: the ReplicaSet controller's name plus a randomly generated string
NAME          READY   STATUS    RESTARTS   AGE
myapp-bln4v   1/1     Running   0          6s
myapp-bxpzt   1/1     Running   0          6s

#(4) Modify the Pod replica count
[root@k8s-master ~]# kubectl edit rs myapp
replicas: 4
[root@k8s-master ~]# kubectl get rs -o wide
NAME    DESIRED   CURRENT   READY   AGE     CONTAINERS         IMAGES                 SELECTOR
myapp   4         4         4       2m50s   myapp-containers   ikubernetes/myapp:v2   app=myapp,release=canary
[root@k8s-master ~]# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE     LABELS
myapp-8hkcr   1/1     Running   0          2m2s    app=myapp,release=canary
myapp-bln4v   1/1     Running   0          3m40s   app=myapp,release=canary
myapp-bxpzt   1/1     Running   0          3m40s   app=myapp,release=canary
myapp-ql2wk   1/1     Running   0          2m2s    app=myapp,release=canary
[root@k8s-master ~]# vim manfests/rs-demo.yaml
    spec:                           #Pod spec
      containers:                   #container definitions
      - name: myapp-containers      #container name
        image: ikubernetes/myapp:v2 #container image, changed to v2
        imagePullPolicy: IfNotPresent   #image pull policy
        ports:                      #exposed ports
        - name: http                #port name
          containerPort: 80
[root@k8s-master ~]# kubectl apply -f manfests/rs-demo.yaml   #apply it to reload the definition
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
Name          Image
myapp-bln4v   ikubernetes/myapp:v1
myapp-bxpzt   ikubernetes/myapp:v1
#Note: although the definition was reloaded, the existing Pods still use the v1 image; only newly created Pods will use v2. For this test, delete the existing Pods manually.
[root@k8s-master ~]# kubectl delete pods -l app=myapp   #delete the Pod resources labeled app=myapp
pod "myapp-bln4v" deleted
pod "myapp-bxpzt" deleted
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image   #view the Pods newly created by the ReplicaSet: they now use the v2 image
Name          Image
myapp-mdn8j   ikubernetes/myapp:v2
myapp-v5bgr   ikubernetes/myapp:v2
Scaling up and down
[root@k8s-master ~]# kubectl get rs     #view the ReplicaSet
NAME    DESIRED   CURRENT   READY   AGE
myapp   2         2         2       154m
[root@k8s-master ~]# kubectl get pods   #view the Pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          5m26s
myapp-v5bgr   1/1     Running   0          5m26s

#Scale up
[root@k8s-master ~]# kubectl scale replicasets myapp --replicas=5   #raise the Pod replica count of the ReplicaSet myapp above to 5
replicaset.extensions/myapp scaled
[root@k8s-master ~]# kubectl get rs     #view the ReplicaSet
NAME    DESIRED   CURRENT   READY   AGE
myapp   5         5         5       156m
[root@k8s-master ~]# kubectl get pods   #view the Pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-lrrp8   1/1     Running   0          8s
myapp-mbqf8   1/1     Running   0          8s
myapp-mdn8j   1/1     Running   0          6m48s
myapp-ttmf5   1/1     Running   0          8s
myapp-v5bgr   1/1     Running   0          6m48s

#Scale down
[root@k8s-master ~]# kubectl scale replicasets myapp --replicas=3
replicaset.extensions/myapp scaled
[root@k8s-master ~]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   3         3         3       159m
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          10m
myapp-ttmf5   1/1     Running   0          3m48s
myapp-v5bgr   1/1     Running   0          10m
[root@k8s-master ~]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   3         3         3       162m
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          12m
myapp-ttmf5   1/1     Running   0          6m18s
myapp-v5bgr   1/1     Running   0          12m
[root@k8s-master ~]# kubectl delete replicasets myapp --cascade=false
replicaset.extensions "myapp" deleted
[root@k8s-master ~]# kubectl get rs
No resources found.
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          13m
myapp-ttmf5   1/1     Running   0          7m
myapp-v5bgr   1/1     Running   0          13m
#As the example above shows, with the --cascade=false option added, deleting the ReplicaSet resource object does not delete the Pod resource objects it was managing.
Deployment (abbreviated deploy) is another implementation of a Kubernetes controller. Built on top of the ReplicaSet controller, it provides declarative updates for Pod and ReplicaSet resources.
The Deployment controller resource's main responsibility is to keep Pod resources running healthily. Most of its functionality is implemented by delegating to ReplicaSet, with several features added on top:
- Event and status inspection: the detailed progress and status of a Deployment object's update can be examined when needed.
- Rollback: if problems surface after an update completes, a rollback mechanism can return the application to the previous version or to a user-specified version from the history.
- Revision history: every operation on a Deployment object is recorded for use by any later rollback.
- Pause and resume: every update can be paused and resumed at any time.
- Multiple automatic update strategies: one is Recreate, which fully stops and deletes the old Pods before replacing them with the new version; the other is RollingUpdate, which gradually replaces old Pods with the new version.
A Deployment's core fields are similar to a ReplicaSet's.
#(1) View the Deployment manifest definition rules from the command line
[root@k8s-master ~]# kubectl explain deployment
[root@k8s-master ~]# kubectl explain deployment.spec
[root@k8s-master ~]# kubectl explain deployment.spec.template

#(2) Create a Deployment example
[root@k8s-master ~]# vim manfests/deploy-demo.yaml
apiVersion: apps/v1            #API version
kind: Deployment               #resource type: Deployment
metadata:                      #metadata
  name: deploy-demo            #Deployment controller name
  namespace: default           #namespace
spec:                          #Deployment controller spec
  replicas: 2                  #desired replica count: 2
  selector:                    #label selector matching the Pods
    matchLabels:
      app: deploy-app
      release: canary
  template:                    #Pod template
    metadata:                  #Pod metadata
      labels:                  #Pod labels; must include the labels defined in the selector's match rules above, extra labels are allowed
        app: deploy-app
        release: canary
    spec:                      #Pod spec
      containers:              #container definitions
      - name: myapp            #container name
        image: ikubernetes/myapp:v1   #container image
        ports:                 #exposed ports
        - name: http           #port name
          containerPort: 80

#(3) Create the Deployment object
[root@k8s-master ~]# kubectl apply -f manfests/deploy-demo.yaml
deployment.apps/deploy-demo created

#(4) View the resource objects
[root@k8s-master ~]# kubectl get deployment    #view the Deployment resource object
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   2/2     2            2           10s
[root@k8s-master ~]# kubectl get replicaset    #view the ReplicaSet resource object
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-78c84d4449   2         2         2       20s
[root@k8s-master ~]# kubectl get pods          #view the Pod resource objects
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-22btc   1/1     Running   0          23s
deploy-demo-78c84d4449-5fn2k   1/1     Running   0          23s

Note: the output shows that the Deployment automatically creates the related ReplicaSet controller resource and names it in the format "[DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE]", where the hash value is generated automatically by the Deployment. Pod names are prefixed with the ReplicaSet controller's name, followed by 5 random characters.
Updating an application with a ReplicaSet controller requires multiple manual steps performed in a specific order; the process is tedious and error-prone. With a Deployment, the user only specifies what should change in the Pod template (for example, the image version), and the remaining steps are completed automatically. The same applies to changing the Pod replica count.
The Deployment controller supports two update strategies: rolling update (RollingUpdate) and re-creation (Recreate); rolling update is the default.
Rolling update (RollingUpdate): deletes a portion of the old-version Pod resources while creating a corresponding portion of new-version Pod objects. Its advantage is that the service provided by the application is not interrupted during the upgrade; however, while the update is in progress, different clients may receive responses served by different versions of the application.
Re-creation (Recreate): first deletes the existing Pod objects, then has the controller create the new-version resource objects from the new template.
A Deployment's rolling update does not delete and create Pod resources within a single ReplicaSet controller object; instead, the new controller's Pod count keeps increasing until the old controller no longer owns any Pod objects and the new controller's replica count fully matches the desired value, as shown in the figure.
- maxSurge: the maximum number of Pod objects that may exist above the desired value during an upgrade; the value can be 0 or a positive integer, or a percentage of the desired value.
- maxUnavailable: the maximum number by which the count of normally available Pod replicas (old and new versions combined) may fall below the desired value during an upgrade; the value can be 0 or a positive integer, or a percentage of the desired value. The default is 1, which means that if the desired value is 3, at least two Pod objects must remain available and serving during the upgrade.
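In a Deployment manifest these two parameters sit under spec.strategy. A sketch of how they could be set for the deploy-demo example used in this section (the specific values 1 and 0 are one possible choice, not from the original manifest):

```yaml
spec:
  replicas: 5
  strategy:
    type: RollingUpdate        # default strategy; the alternative is Recreate
    rollingUpdate:
      maxSurge: 1              # at most one Pod above the desired count during the update
      maxUnavailable: 0        # available Pods may never drop below the desired count
```

With maxSurge: 1 and maxUnavailable: 0, the Deployment adds one new Pod at a time and removes an old one only once the new one is available, so capacity never dips below the desired value.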
Note: to preserve the version history across upgrades, use the --record option on the command when creating the Deployment object.
#Open terminal 1 and perform the upgrade
[root@k8s-master ~]# kubectl set image deployment/deploy-demo myapp=ikubernetes/myapp:v2
deployment.extensions/deploy-demo image updated

#Meanwhile, open terminal 2 to watch the Pod resource objects during the upgrade
[root@k8s-master ~]# kubectl get pods -l app=deploy-app -w
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-2rvxr   1/1     Running   0          33s
deploy-demo-78c84d4449-nd7rr   1/1     Running   0          33s
deploy-demo-7c66dbf45b-7k4xz   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-7k4xz   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-7k4xz   0/1     ContainerCreating   0   0s
deploy-demo-7c66dbf45b-7k4xz   1/1     Running   0          2s
deploy-demo-78c84d4449-2rvxr   1/1     Terminating   0      49s
deploy-demo-7c66dbf45b-r88qr   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-r88qr   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-r88qr   0/1     ContainerCreating   0   0s
deploy-demo-7c66dbf45b-r88qr   1/1     Running   0          1s
deploy-demo-78c84d4449-2rvxr   0/1     Terminating   0      50s
deploy-demo-78c84d4449-nd7rr   1/1     Terminating   0      51s
deploy-demo-78c84d4449-nd7rr   0/1     Terminating   0      51s
deploy-demo-78c84d4449-nd7rr   0/1     Terminating   0      57s
deploy-demo-78c84d4449-nd7rr   0/1     Terminating   0      57s
deploy-demo-78c84d4449-2rvxr   0/1     Terminating   0      60s
deploy-demo-78c84d4449-2rvxr   0/1     Terminating   0      60s

#Meanwhile, open terminal 3 to watch the Deployment resource object change
[root@k8s-master ~]# kubectl get deployment deploy-demo -w
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   2/2     2            2           37s
deploy-demo   2/2     2            2           47s
deploy-demo   2/2     2            2           47s
deploy-demo   2/2     0            2           47s
deploy-demo   2/2     1            2           47s
deploy-demo   3/2     1            3           49s
deploy-demo   2/2     1            2           49s
deploy-demo   2/2     2            2           49s
deploy-demo   3/2     2            3           50s
deploy-demo   2/2     2            2           51s

#After the upgrade completes, view the ReplicaSets again: the old one is kept as a backup while the new one is now active
[root@k8s-master ~]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-78c84d4449   0         0         0       4m41s
deploy-demo-7c66dbf45b   2         2         2       3m54s
#1. Scale up with the kubectl scale command
[root@k8s-master ~]# kubectl scale deployment deploy-demo --replicas=3
deployment.extensions/deploy-demo scaled
[root@k8s-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-7c66dbf45b-7k4xz   1/1     Running   0          10m
deploy-demo-7c66dbf45b-gq2tw   1/1     Running   0          3s
deploy-demo-7c66dbf45b-r88qr   1/1     Running   0          10m

#2. Scale up by editing the configuration manifest directly
[root@k8s-master ~]# vim manfests/deploy-demo.yaml
spec:                          #Deployment controller spec
  replicas: 4                  #replica count changed to 4
[root@k8s-master ~]# kubectl apply -f manfests/deploy-demo.yaml
deployment.apps/deploy-demo configured
[root@k8s-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-6rmnm   1/1     Running   0          61s
deploy-demo-78c84d4449-9xfp9   1/1     Running   0          58s
deploy-demo-78c84d4449-c2m6h   1/1     Running   0          61s
deploy-demo-78c84d4449-sfxps   1/1     Running   0          57s

#3. Scale up by patching with kubectl patch
[root@k8s-master ~]# kubectl patch deployment deploy-demo -p '{"spec":{"replicas":5}}'
deployment.extensions/deploy-demo patched
[root@k8s-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-6rmnm   1/1     Running   0          3m44s
deploy-demo-78c84d4449-9xfp9   1/1     Running   0          3m41s
deploy-demo-78c84d4449-c2m6h   1/1     Running   0          3m44s
deploy-demo-78c84d4449-sfxps   1/1     Running   0          3m40s
deploy-demo-78c84d4449-t7jxb   1/1     Running   0          3s
1) Allow the total Pod count to exceed the desired value by one
[root@k8s-master ~]# kubectl patch deployment deploy-demo -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/deploy-demo patched
2) Start the update, then pause it immediately after changing the container image version.
[root@k8s-master ~]# kubectl set image deployment/deploy-demo myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment deploy-demo
deployment.extensions/deploy-demo image updated
deployment.extensions/deploy-demo paused

#Check the state
[root@k8s-master ~]# kubectl get deployment    #view the Deployment resource object
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   6/5     1            6           37m
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image   #view each Pod's name and image
Name                           Image
deploy-demo-6bf8dbdc9f-fjnzn   ikubernetes/myapp:v3
deploy-demo-78c84d4449-6rmnm   ikubernetes/myapp:v1
deploy-demo-78c84d4449-9xfp9   ikubernetes/myapp:v1
deploy-demo-78c84d4449-c2m6h   ikubernetes/myapp:v1
deploy-demo-78c84d4449-sfxps   ikubernetes/myapp:v1
deploy-demo-78c84d4449-t7jxb   ikubernetes/myapp:v1
[root@k8s-master ~]# kubectl rollout status deployment/deploy-demo   #check the update progress
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...

#The output above shows 6 Pods: the desired value defined earlier was 5, and one extra Pod has been added, running the v3 image.

#Resume and complete the update
[root@k8s-master ~]# kubectl rollout resume deployment deploy-demo
deployment.extensions/deploy-demo resumed

#Check again
[root@k8s-master ~]# kubectl get deployment    #view the Deployment resource object
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   5/5     5            5           43m
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image   #view each Pod's name and image
Name                           Image
deploy-demo-6bf8dbdc9f-2z6gt   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-f79q2   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-fjnzn   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-pjf4z   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-x7fnk   ikubernetes/myapp:v3
[root@k8s-master ~]# kubectl rollout status deployment/deploy-demo   #check the update progress
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "deploy-demo" rollout to finish: 1 old replicas are pending termination...
deployment "deploy-demo" successfully rolled out
1) Roll back to the previous version
[root@k8s-master ~]# kubectl rollout undo deployment/deploy-demo
deployment.extensions/deploy-demo rolled back
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
Name                           Image
deploy-demo-78c84d4449-2xspz   ikubernetes/myapp:v1
deploy-demo-78c84d4449-f8p46   ikubernetes/myapp:v1
deploy-demo-78c84d4449-mnmvc   ikubernetes/myapp:v1
deploy-demo-78c84d4449-tsl7r   ikubernetes/myapp:v1
deploy-demo-78c84d4449-xdt8j   ikubernetes/myapp:v1
2) Roll back to a specified revision
#View the update history with this command
[root@k8s-master ~]# kubectl rollout history deployment/deploy-demo
deployment.extensions/deploy-demo
REVISION  CHANGE-CAUSE
2         <none>
4         <none>
5         <none>

#Roll back to revision 2
[root@k8s-master ~]# kubectl rollout undo deployment/deploy-demo --to-revision=2
deployment.extensions/deploy-demo rolled back
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
Name                           Image
deploy-demo-7c66dbf45b-42nj4   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-8zhf5   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-bxw7x   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-gmq8x   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-mrfdb   ikubernetes/myapp:v2
DaemonSet is used to run exactly one replica of a specified Pod resource on every node in the cluster. Worker nodes that join the cluster later automatically get the corresponding Pod object as well, and when a node is removed from the cluster, such Pod objects are reclaimed automatically without needing to be rebuilt. Administrators can also use node selectors and node labels to run the specified Pod objects only on nodes with particular characteristics.
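For example, the node selector mentioned above restricts a DaemonSet to nodes carrying a given label. A sketch against the nginx DaemonSet used later in this section; the label key and value (disktype: ssd) are purely illustrative:

```yaml
spec:
  template:
    spec:
      nodeSelector:        # run the DaemonSet's Pods only on nodes with this label
        disktype: ssd      # hypothetical label; applied with: kubectl label node <node> disktype=ssd
      containers:
      - name: nginx-pod
        image: nginx:1.12
```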
Use cases
- Running a log-collection daemon on every node, such as fluentd or logstash.
- Running a monitoring agent daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
#(1) Define the manifest file
[root@k8s-master ~]# vim manfests/daemonset-demo.yaml
apiVersion: apps/v1          #API version
kind: DaemonSet              #resource type: DaemonSet
metadata:                    #metadata
  name: daemset-nginx        #DaemonSet controller name
  namespace: default         #namespace
  labels:                    #DaemonSet labels
    app: daem-nginx
spec:                        #DaemonSet controller spec
  selector:                  #label selector matching the Pods
    matchLabels:
      app: daem-nginx        #note: must match the labels defined in the template below
  template:                  #Pod template
    metadata:                #Pod metadata
      name: nginx
      labels:                #Pod labels; must include the labels defined in the selector's match rules above, extra labels are allowed
        app: daem-nginx
    spec:                    #Pod spec
      containers:            #container definitions
      - name: nginx-pod      #container name
        image: nginx:1.12    #container image
        ports:               #exposed ports
        - name: http         #port name
          containerPort: 80  #exposed port

#(2) Create the DaemonSet controller defined above
[root@k8s-master ~]# kubectl apply -f manfests/daemonset-demo.yaml
daemonset.apps/daemset-nginx created

#(3) Verify
[root@k8s-master ~]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
daemset-nginx-7s474   1/1     Running   0          80s   10.244.1.61   k8s-node1   <none>           <none>
daemset-nginx-kxpl2   1/1     Running   0          94s   10.244.2.58   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl describe daemonset/daemset-nginx
......
Name:           daemset-nginx
Selector:       app=daem-nginx
Node-Selector:  <none>
......
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Scheduled with Up-to-date Pods: 2
Number of Nodes Scheduled with Available Pods: 2
Number of Nodes Misscheduled: 0
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
......
Note
Starting with Kubernetes 1.6, DaemonSet also supports an update mechanism; the relevant configuration is nested under the field shown by kubectl explain daemonset.spec.updateStrategy. Two strategies are supported, RollingUpdate (rolling update) and OnDelete (update on deletion), with rolling update being the default.
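A sketch of how the update strategy could be set explicitly in a DaemonSet manifest such as the daemset-nginx example above (these are the defaults, shown here for illustration):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate      # default; OnDelete replaces a Pod only after it is deleted manually
    rollingUpdate:
      maxUnavailable: 1      # at most one node's Pod may be unavailable at a time
```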
#(1) View the image version
[root@k8s-master ~]# kubectl get pods -l app=daem-nginx -o custom-columns=NAME:metadata.name,NODE:spec.nodeName,Image:spec.containers[0].image
NAME                  NODE        Image
daemset-nginx-7s474   k8s-node1   nginx:1.12
daemset-nginx-kxpl2   k8s-node2   nginx:1.12

#(2) Update
[root@k8s-master ~]# kubectl set image daemonset/daemset-nginx nginx-pod=nginx:1.14
[root@k8s-master ~]# kubectl get pods -l app=daem-nginx -o custom-columns=NAME:metadata.name,NODE:spec.nodeName,Image:spec.containers[0].image   #check again
NAME                  NODE        Image
daemset-nginx-74c95   k8s-node2   nginx:1.14
daemset-nginx-nz6n9   k8s-node1   nginx:1.14

#(3) View the details
[root@k8s-master ~]# kubectl describe daemonset daemset-nginx
......
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  49m   daemonset-controller  Created pod: daemset-nginx-6kzg6
  Normal  SuccessfulCreate  49m   daemonset-controller  Created pod: daemset-nginx-jjnc2
  Normal  SuccessfulDelete  40m   daemonset-controller  Deleted pod: daemset-nginx-jjnc2
  Normal  SuccessfulCreate  40m   daemonset-controller  Created pod: daemset-nginx-kxpl2
  Normal  SuccessfulDelete  40m   daemonset-controller  Deleted pod: daemset-nginx-6kzg6
  Normal  SuccessfulCreate  40m   daemonset-controller  Created pod: daemset-nginx-7s474
  Normal  SuccessfulDelete  15s   daemonset-controller  Deleted pod: daemset-nginx-7s474
  Normal  SuccessfulCreate  8s    daemonset-controller  Created pod: daemset-nginx-nz6n9
  Normal  SuccessfulDelete  5s    daemonset-controller  Deleted pod: daemset-nginx-kxpl2
The DaemonSet controller's rolling update can also be paced with the minReadySeconds field; when necessary, the update can be paused and resumed, and rollback is supported as well.