VI. Kubernetes Resource Objects: Pod Controller Basics


I. Overview of Pod controllers

1. What is a Pod controller?

Once an autonomous (unmanaged) Pod object has been bound to a target worker node by the scheduler, the kubelet on that node is responsible for monitoring the liveness of its containers; if a container's main process crashes, the kubelet can restart the container automatically. However, the kubelet cannot sense container failures that do not crash the main process; detecting those depends on the liveness probe that the user defines for the Pod resource, which lets the kubelet discover such faults. But what happens when a Pod object is deleted accidentally, or when the worker node itself fails?

The kubelet is the Kubernetes cluster node agent, and one instance runs on every worker node. When a worker node fails, its kubelet necessarily becomes unavailable too, so the health of the Pod resources on that node can no longer be guaranteed, nor can the kubelet restart them. In such scenarios, Pod availability generally has to be guaranteed by a Pod controller running outside the worker node. In fact, recovering accidentally deleted Pod resources also depends on their controller.

Pod controllers are provided by the kube-controller-manager component on the master. Common controllers of this kind include ReplicationController, ReplicaSet, Deployment, DaemonSet, StatefulSet, Job, and CronJob, each of which manages Pod resource objects in a different way. In practice, Pods are usually managed through an object of some controller type, which handles their creation, deletion, rescheduling, and so on.

Among the master components, the API Server is only responsible for storing resources in etcd and notifying the relevant clients (such as kubelet, kube-scheduler, kube-proxy, and kube-controller-manager) of changes. When kube-scheduler observes a Pod object in the unbound state, it selects a suitable worker node for it. However, one of the core functions of Kubernetes is to keep the current state (status) of each resource object matching the user's desired state (spec), continuously "reconciling" the current state toward the desired state to accomplish container application management; that is the job of kube-controller-manager. kube-controller-manager is a single standalone daemon, but it contains many controller types with different functions, each handling its own kind of reconciliation task.

2. Common Pod controllers

A Pod controller is an abstraction in Kubernetes: a higher-level object used to deploy and manage Pods.

Commonly used workload controllers:
•Deployment: stateless application deployment
•StatefulSet: stateful application deployment
•DaemonSet: ensures every node runs one copy of the same Pod
•Job: one-off tasks
•CronJob: scheduled tasks

3. What controllers do

•Manage Pod objects
•Associate with Pods through labels
•Implement Pod operations such as rolling updates, scaling, replica management, and maintaining Pod state

 

II. Deployment

1. Introduction to Deployment

Deployment (abbreviated deploy) is another implementation of a Kubernetes controller. It is built on top of the ReplicaSet controller and provides declarative updates for Pod and ReplicaSet resources. By comparison, Pods and ReplicaSets are lower-level resources and are rarely used directly.

The Deployment controller provides a declarative way to update Pods and ReplicaSets: you describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. Defining a Deployment creates a new ReplicaSet, which in turn creates the Pods; deleting the Deployment also deletes the ReplicaSets and Pods that belong to it.

The main responsibility of a Deployment resource is likewise to keep Pod resources running healthily. Most of this is achieved by delegating to the ReplicaSet controller, on top of which Deployment adds several features:

·Event and status inspection: the detailed progress and status of a Deployment upgrade can be viewed when needed.

·Rollback: if a problem is found after an upgrade completes, the application can be returned to the previous version, or to any user-specified version in the revision history.

·Revision history: every change to the Deployment object is recorded, so it can be used by later rollback operations.

·Pause and resume: every upgrade can be paused and resumed at any time.

·Multiple update strategies: Recreate, which stops and deletes all old Pods before replacing them with the new version; and RollingUpdate, which gradually replaces old Pods with the new version.
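For reference, the strategy is chosen in the Deployment spec; a minimal sketch of switching to the Recreate strategy (the default is RollingUpdate):

```yaml
# Sketch: select the Recreate strategy instead of the default RollingUpdate
spec:
  strategy:
    type: Recreate   # stop and delete all old Pods before any new-version Pod is created
```

With Recreate there is a window in which zero Pods are running, so it only suits cases where brief downtime is acceptable, for example when old and new versions cannot run side by side.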

Deployment can be used to manage blue-green releases. It is built on ReplicaSets: one Deployment can manage multiple ReplicaSets, several of which may exist at once, but only one actually runs Pods. When you update to a new version, a new ReplicaSet is created and the old one is replaced.

 

ReplicaSet v1 controls three Pods; one is deleted and recreated under ReplicaSet v2, and so on, until all Pods are controlled by v2. If v2 turns out to have problems, you can still roll back. A Deployment is built on top of ReplicaSets: multiple ReplicaSets make up one Deployment, but only one ReplicaSet is active at a time.

By default, a Deployment keeps 10 historical revisions.

A Deployment can be defined declaratively, and a resource's version can also be modified directly on the command line by patching it. Deployments provide rolling updates whose pace and logic you control yourself. What do update pace and update logic mean?

Suppose a ReplicaSet controls 5 Pod replicas (the desired count is 5), but a few extra Pods are needed during an upgrade; the controller can be told to run several Pods beyond the 5 replicas. Say one extra is allowed but none may be missing: the upgrade then adds one Pod, deletes one, adds another, deletes another, always keeping 5 replicas available, with brief moments of 6. In another case, at most one extra and at most one missing are allowed (6 at most, 4 at least): the first step adds one and deletes two, the next adds two and deletes two, and so on. You can control the update behavior yourself. For this kind of rolling update you should add readinessProbe and livenessProbe checks, so that old Pods are deleted only after the application in the new Pod's containers has actually started. You can also pause right after the first batch has been updated. And if the target is 5, none may be missing, and up to 10 are allowed, then 5 new Pods can be added in a single step. This is how you control the pace of an update.
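The "one more, but never fewer" pacing described above maps directly onto the strategy fields; a sketch for the 5-replica example, assuming the same nginx Deployment:

```yaml
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 6 Pods may exist at any moment (5 desired + 1 extra)
      maxUnavailable: 0  # never fewer than 5 available Pods during the update
```

Setting maxUnavailable: 1 instead would allow the 6-at-most/4-at-least behavior also described above; both fields accept absolute numbers or percentages.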

Typical use cases: web sites, APIs, microservices.

 

2. Deploying with Deployment

Write nginx-deploy.yaml:

apiVersion: apps/v1
kind: Deployment  # resource type is Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3  # number of replicas
  selector:    # label selector; must match the labels defined in the template
    matchLabels:
      app: nginx
  template:    # Pod template
    metadata:
      labels:
        app: nginx  # the Pod labels must match the Deployment's selector
    spec:  # container definition
      containers:
      - name: nginx
        image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        livenessProbe:
          initialDelaySeconds: 3
          periodSeconds: 10
          httpGet:
            port: 80
            path: /index.html
        readinessProbe:
          initialDelaySeconds: 3
          periodSeconds: 10
          httpGet:
            port: 80
            path: /index.html

        

Run the Deployment:

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f nginx-deploy.yaml 
deployment.apps/nginx-deployment created

View the Deployment and Pod information:

#1) the Deployment has been created
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get deployments -n default
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           54s

#2) the Deployment creates a ReplicaSet
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get replicasets -n default
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-657df44b4f   3         3         3       82s

#3) the ReplicaSet then creates the Pods
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod -n default
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-657df44b4f-hk8x5   1/1     Running   0          109s
nginx-deployment-657df44b4f-mcw7v   1/1     Running   0          109s
nginx-deployment-657df44b4f-vv2wh   1/1     Running   0          109s


#The Deployment is named nginx-deployment
#The ReplicaSet name is the Deployment name plus a random suffix: nginx-deployment-657df44b4f
#The Pod names are the ReplicaSet name plus another random suffix: nginx-deployment-657df44b4f-hk8x5, nginx-deployment-657df44b4f-mcw7v, nginx-deployment-657df44b4f-vv2wh
#So the creation order is deployment ---> replicaset ---> pod

3. Upgrading Pods with Deployment

Update nginx-deploy.yaml, then run kubectl apply -f nginx-deploy.yaml again:
upgrade the nginx image from 1.14.2 to 1.16.0
change the replica count to 5
root@k8s-master01:/apps/k8s-yaml/deployment-case# vim nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment  # resource type is Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 5  # changed replica count
  selector:    
    matchLabels:
      app: nginx
  template:   
    metadata:
      labels:
        app: nginx  
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.0 # changed image
        ports:
        - containerPort: 80
        
root@k8s-master01:/apps/k8s-yaml/deployment-case# cp nginx-deploy.yaml nginx-deploy.yaml.bak
#Note: keep a copy of the original file before changing it.

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f nginx-deploy.yaml 
deployment.apps/nginx-deployment configured

#the Deployment now has 5 replicas
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get deployments -n default
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   5/5     5            5           10m

#a new ReplicaSet with 5 replicas has been created; the old one is scaled to 0 and kept as revision history
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get replicasets -n default
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-657df44b4f   0         0         0       11m
nginx-deployment-9fc7f565     5         5         5       95s

#the Pods have been upgraded to 5 replicas
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod -n default
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-9fc7f565-8nwng   1/1     Running   0          118s
nginx-deployment-9fc7f565-cg99d   1/1     Running   0          2m28s
nginx-deployment-9fc7f565-cxcn8   1/1     Running   0          2m28s
nginx-deployment-9fc7f565-czdnm   1/1     Running   0          2m28s
nginx-deployment-9fc7f565-tgqwh   1/1     Running   0          118s

#the Pod image has also been upgraded to nginx:1.16.0
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe pod nginx-deployment-9fc7f565-8nwng|grep "Image"
    Image:          nginx:1.16.0
    Image ID:       docker-pullable://nginx@sha256:3e373fd5b8d41baeddc24be311c5c6929425c04cabf893b874ac09b72a798010



root@k8s-master01:/apps/k8s-yaml/deployment-case#  kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
1         <none>  # Deployment created
2         <none>  # first upgrade


Note: to have the command recorded in the CHANGE-CAUSE column, versions before v1.20 used the "--record" flag when applying the Deployment (the flag has since been deprecated):
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f nginx-deploy.yaml --record

Rolling update is the default Pod upgrade strategy for Deployments in Kubernetes: old-version Pods are gradually replaced with new-version Pods, giving zero-downtime releases that users do not notice.
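Since --record is deprecated, the CHANGE-CAUSE column can instead be filled by setting the kubernetes.io/change-cause annotation on the Deployment yourself; a sketch (the message text is arbitrary):

```yaml
metadata:
  name: nginx-deployment
  annotations:
    kubernetes.io/change-cause: "upgrade nginx image to 1.16.0"  # shown in rollout history
```

The same can be done from the command line with kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="...".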

 

4. Deployment rolling-update strategy

vim roll-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: roll-deployment
  labels:
    app: nginx
spec:
  replicas: 5
  revisionHistoryLimit: 10 # number of ReplicaSet revisions to keep
  selector:
    matchLabels:
      app: nginx
  # rolling-update strategy
  strategy:
    rollingUpdate:
      # maxSurge: maximum number of extra Pods during the update; at most 25% more
      # than the desired (replicas) count may be started
      maxSurge: 25%
      # maxUnavailable: maximum number of unavailable Pods during the update; at most
      # 25% may be unavailable, i.e. at least 75% of the Pods stay available
      maxUnavailable: 25%
    type: RollingUpdate  # strategy type: rolling update
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Perform the upgrade:

#change the nginx image to nginx:1.18.0 in roll-deploy.yaml to trigger the upgrade

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f roll-deploy.yaml 
deployment.apps/roll-deployment configured

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get deployments roll-deployment -n default
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
roll-deployment   5/5     5            5           4m53s

root@k8s-master01:/apps/k8s-yaml/deployment-case#  kubectl get replicasets -n default
NAME                         DESIRED   CURRENT   READY   AGE
roll-deployment-67dfd6c8f9   5         5         5       42s
roll-deployment-75d4475c89   0         0         0       3m38s

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod -n default
NAME                               READY   STATUS    RESTARTS   AGE
roll-deployment-67dfd6c8f9-59cv2   1/1     Running   0          112s
roll-deployment-67dfd6c8f9-cqgn8   1/1     Running   0          2m19s
roll-deployment-67dfd6c8f9-jlkbs   1/1     Running   0          112s
roll-deployment-67dfd6c8f9-vdxjh   1/1     Running   0          2m19s
roll-deployment-67dfd6c8f9-x5dhc   1/1     Running   0          2m18s



root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe deployments roll-deployment 
Name:                   roll-deployment
Namespace:              default
CreationTimestamp:      Sat, 02 Oct 2021 21:04:50 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.18.0
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   roll-deployment-67dfd6c8f9 (5/5 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  5m41s  deployment-controller  Scaled up replica set roll-deployment-75d4475c89 to 5
  Normal  ScalingReplicaSet  2m45s  deployment-controller  Scaled up replica set roll-deployment-67dfd6c8f9 to 2
  Normal  ScalingReplicaSet  2m45s  deployment-controller  Scaled down replica set roll-deployment-75d4475c89 to 4
  Normal  ScalingReplicaSet  2m44s  deployment-controller  Scaled up replica set roll-deployment-67dfd6c8f9 to 3
  Normal  ScalingReplicaSet  2m18s  deployment-controller  Scaled down replica set roll-deployment-75d4475c89 to 3
  Normal  ScalingReplicaSet  2m18s  deployment-controller  Scaled up replica set roll-deployment-67dfd6c8f9 to 4
  Normal  ScalingReplicaSet  2m18s  deployment-controller  Scaled down replica set roll-deployment-75d4475c89 to 2
  Normal  ScalingReplicaSet  2m18s  deployment-controller  Scaled up replica set roll-deployment-67dfd6c8f9 to 5
  Normal  ScalingReplicaSet  2m14s  deployment-controller  Scaled down replica set roll-deployment-75d4475c89 to 1
  Normal  ScalingReplicaSet  2m10s  deployment-controller  (combined from similar events): Scaled down replica set roll-deployment-75d4475c89 to 0

5. Scaling a Deployment horizontally

Method 1: imperative command (not recommended)
kubectl scale deployment web --replicas=10
Method 2: edit deployment.yml
change the replicas value in the YAML, then run kubectl apply -f deployment.yml
#Note: keep a copy of the original deployment.yml

The replicas field controls the number of Pod replicas.

6. Rolling back a Deployment

kubectl rollout history deployment roll-deployment                 # list the revision history
kubectl rollout undo deployment roll-deployment                    # roll back to the previous revision
kubectl rollout undo deployment roll-deployment --to-revision=2    # roll back to a specific revision
Note: a rollback redeploys the state captured by an earlier revision, i.e. the full configuration of that version.

Recommendation: edit deployment.yml and apply it with kubectl apply -f deployment.yml,
keeping a copy of the original deployment.yml,
because kubectl rollout history deployment roll-deployment cannot show the concrete Pod configuration of each revision.

7. Deleting a Deployment

Method 1:
kubectl delete -f deployment.yml
Method 2:
kubectl delete deployments roll-deployment

8. Deployment and ReplicaSet

Purpose of the ReplicaSet controller:
•Manages the Pod replica count, continuously comparing the current number of Pods with the desired number
•Every Deployment release creates a ReplicaSet as a record, which is what enables rollback

kubectl get rs  # list the ReplicaSet records
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get rs -n default
NAME                         DESIRED   CURRENT   READY   AGE
roll-deployment-67dfd6c8f9   5         5         5       10m
roll-deployment-75d4475c89   0         0         0       13m


kubectl rollout history deployment roll-deployment  # each revision corresponds to a ReplicaSet record
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl rollout history deployments roll-deployment -n default
deployment.apps/roll-deployment 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>

9. Canary release

Deployment resources let users control the pace of the update process, for example "pausing" and "resuming" it, and the maxSurge and maxUnavailable attributes discussed above allow even finer-grained control. For example, pause the update as soon as the first batch of new Pods has been created: at that point only a small portion of the application runs the new version, while the majority is still the old one. Then use application-layer routing to direct a carefully selected small fraction of user requests to the new-version Pods, and keep observing whether they run stably as expected. (By default, a Service simply distributes requests across all Pods randomly or round-robin.) Once the new version is confirmed healthy, resume the rolling update for all remaining Pods; otherwise, roll back to the first update step immediately. This is the so-called canary deployment.

To minimize the impact on the existing system and its capacity, a Deployment-based canary release is usually performed by "growing first, then shrinking, never letting the total number of available Pods drop below the desired count". How many Pods to add in the first batch depends on the routing rules for the first set of requests and on how much load a single Pod can carry; to keep the explanation simple, the following adds 1 Pod in the first batch. Setting the Deployment's maxSurge attribute to 1 and maxUnavailable to 0 achieves exactly that.

roll-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: roll-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  revisionHistoryLimit: 10 # number of ReplicaSet revisions to keep
  selector:
    matchLabels:
      app: nginx
  # rolling-update strategy
  strategy:
    rollingUpdate:
      # maxSurge: at most 1 Pod more than the desired (replicas) count may run during the update
      maxSurge: 1
      # maxUnavailable: no Pod may be unavailable, i.e. all desired replicas stay available throughout
      maxUnavailable: 0
    type: RollingUpdate  # strategy type: rolling update
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Apply the manifest:

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f roll-deploy.yaml 
deployment.apps/roll-deployment created

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
roll-deployment-75d4475c89-4hbkg   1/1     Running   0          72s
roll-deployment-75d4475c89-bdt8k   1/1     Running   0          72s
roll-deployment-75d4475c89-f8jxn   1/1     Running   0          72s

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get deployments.apps 
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
roll-deployment   3/3     3            3           2m22s

Perform the canary upgrade:

#upgrade nginx from 1.16.0 to 1.18.0
#upgrade:
#kubectl set image deployment roll-deployment nginx=nginx:1.18.0 -n default
#pause the upgrade:
#kubectl rollout pause deployment roll-deployment -n default
#resume the upgrade:
#kubectl rollout resume deployment roll-deployment -n default
#roll back if the upgrade fails:
#kubectl rollout undo deployment roll-deployment -n default

#1) upgrade the Deployment
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl set image deployment roll-deployment nginx=nginx:1.18.0 -n default
deployment.apps/roll-deployment image updated
#2) pause the upgrade
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl rollout pause deployment roll-deployment -n default
deployment.apps/roll-deployment paused
#3) verify
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get deployments.apps roll-deployment 
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
roll-deployment   4/3     2            4           8m37s
#per the update strategy, the Deployment temporarily runs one extra replica (READY 4/3)
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
roll-deployment-67dfd6c8f9-5w9d9   1/1     Running   0          2m58s
roll-deployment-67dfd6c8f9-jwj52   1/1     Running   0          2m57s
roll-deployment-75d4475c89-4hbkg   1/1     Running   0          10m
roll-deployment-75d4475c89-f8jxn   1/1     Running   0          10m
#one extra Pod appears temporarily as well

#2 Pods have been upgraded; old and new versions coexist for now
#new version
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe pod roll-deployment-67dfd6c8f9-5w9d9
......
 Image:          nginx:1.18.0
......

#old version
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe pod roll-deployment-75d4475c89-f8jxn
......
 Image:          nginx:1.16.0
......

#4) if the canary tests pass, resume the upgrade
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl rollout resume  deployment roll-deployment -n default
deployment.apps/roll-deployment resumed

#5) if the canary tests fail, roll back directly
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl rollout undo deployments roll-deployment -n default

 

 

III. DaemonSet

1. Introduction to DaemonSet

DaemonSet is another Pod controller implementation. It runs exactly one copy of a given Pod on every node in the cluster; worker nodes that join the cluster later automatically get the Pod as well, and when a node is removed from the cluster, its Pod is reclaimed automatically without needing to be rebuilt. Administrators can also use node selectors and node labels to run the Pod only on a subset of nodes with particular characteristics.
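As a sketch of the node-selector option just mentioned, the Pod template inside a DaemonSet can carry a nodeSelector so the daemon runs only on matching nodes (the disktype=ssd label here is just an illustrative example):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd   # run the daemon Pod only on nodes labeled disktype=ssd
```

A node gains the label with kubectl label nodes <node-name> disktype=ssd.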

DaemonSet is a special controller with specific use cases: it typically runs applications that perform system-level tasks, for example:
·Cluster storage daemons, such as glusterd or ceph on each node
·Log-collection daemons on each node, such as fluentd or logstash
·Monitoring agents on each node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond

Since such applications need to run on every node (or a subset of nodes) anyway, in many scenarios they could also be run directly as system-level daemons on the worker nodes; doing so, however, forfeits the convenience of managing them through Kubernetes. Furthermore, a DaemonSet is only necessary when Pods must run on a fixed set of nodes and need to start before other Pods; otherwise a Deployment should be used.

DaemonSet functionality:
•Runs one Pod on every node
•Newly joined nodes automatically run the Pod as well

Use cases: network plugins (kube-proxy, calico) and other agents.

2. Deploying a log collector on every node

Write daemonset.yaml:

apiVersion: apps/v1
kind: DaemonSet  # resource type is DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:
      - name: log
        image: elastic/filebeat:7.3.2

Run daemonset.yaml:

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f daemonset.yaml 
daemonset.apps/filebeat created


root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod -n kube-system -o wide|grep filebeat
filebeat-4z59k                             1/1     Running   0          3m37s   172.20.32.129    172.168.33.207   <none>           <none>
filebeat-d2hfc                             1/1     Running   0          3m37s   172.20.135.162   172.168.33.212   <none>           <none>
filebeat-jdqdl                             1/1     Running   0          3m37s   172.20.122.130   172.168.33.209   <none>           <none>
filebeat-mg6nb                             1/1     Running   0          3m37s   172.20.85.249    172.168.33.210   <none>           <none>
filebeat-vzkt9                             1/1     Running   0          3m37s   172.20.58.212    172.168.33.211   <none>           <none>
filebeat-wlxnv                             1/1     Running   0          3m37s   172.20.122.129   172.168.33.208   <none>           <none>

A filebeat Pod is automatically deployed on every node; when a new node is added, the Pod is automatically deployed on it, and when a node is taken offline, its Pod is automatically removed.

IV. Job

1. Introduction to Job

The Job controller schedules Pod objects to run one-off tasks. When the process in a container finishes normally, the container is not restarted; instead the Pod is placed in the "Completed" state. If the process terminates with an error, whether it is restarted depends on the configuration. A Pod that terminates unexpectedly because its node failed before the task finished is rescheduled.

Use cases: offline data processing, video transcoding, and similar batch work.
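Besides the single-run form shown below, a Job can run a task multiple times, optionally in parallel; a sketch of the relevant spec fields (the values are illustrative):

```yaml
spec:
  completions: 6    # the Job is done after 6 Pods finish successfully
  parallelism: 2    # at most 2 Pods run at the same time
  backoffLimit: 4   # retry failed Pods at most 4 times before marking the Job failed
```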

2. Deploying a Job

Write job.yml:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

Run job.yml:

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f job.yml 
job.batch/pi created

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get job -o wide
NAME   COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES   SELECTOR
pi     1/1           52s        82s   pi           perl     controller-uid=461170f9-603e-4cdc-8af8-b206b8dbab8f

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe job/pi
Name:           pi
Namespace:      default
Selector:       controller-uid=461170f9-603e-4cdc-8af8-b206b8dbab8f
Labels:         controller-uid=461170f9-603e-4cdc-8af8-b206b8dbab8f
                job-name=pi
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Tue, 13 Apr 2021 16:43:54 +0800
Completed At:   Tue, 13 Apr 2021 16:44:46 +0800
Duration:       52s
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=461170f9-603e-4cdc-8af8-b206b8dbab8f
           job-name=pi
  Containers:
   pi:
    Image:      perl
    Port:       <none>
    Host Port:  <none>
    Command:
      perl
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  109s  job-controller  Created pod: pi-7nbgv
  Normal  Completed         57s   job-controller  Job completed

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod
NAME                                READY   STATUS      RESTARTS   AGE
pi-7nbgv                            0/1     Completed   0          2m56s
#the Pod pi-7nbgv completed successfully

[root@k8s-master01 apps]# kubectl logs pi-7nbgv
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948895...

V. CronJob

1. Introduction to CronJob

The CronJob controller manages when Job resources run. A Job defined on its own starts executing as soon as the resource is created, but a CronJob controls the time of execution and how it repeats, much like periodic jobs in a Linux crontab:
·Run the job once at some future point in time
·Run the job repeatedly at specified times

The time format CronJob supports is similar to crontab; one minor difference is that in the times a CronJob specifies, "?" and "*" have the same meaning, both representing any valid value.

CronJob implements scheduled tasks, like crontab on Linux. Use cases: notifications, backups.
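A few commonly used CronJob spec fields, sketched with illustrative values; the job deletions visible in the events of the next section come from the history limits (by default the 3 most recent successful jobs are kept):

```yaml
spec:
  schedule: "0 2 * * *"         # run every day at 02:00
  concurrencyPolicy: Forbid     # skip a run if the previous job is still running
  startingDeadlineSeconds: 300  # give up on a run that missed its start by more than 5 minutes
  successfulJobsHistoryLimit: 3 # keep the 3 most recent successful jobs (the default)
  failedJobsHistoryLimit: 1     # keep only the most recent failed job (the default)
```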

2. Deploying a CronJob

Print a hello message every minute.

Write cronjob.yml:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Run the CronJob:

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f cronjob.yml 
cronjob.batch/hello created

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod
NAME                                READY   STATUS      RESTARTS   AGE
hello-1618304040-mr2v5              0/1     Completed   0          49s

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl logs hello-1618304040-mr2v5
Tue Apr 13 08:54:02 UTC 2021
Hello from the Kubernetes cluster
#it runs once every minute
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod
NAME                                READY   STATUS      RESTARTS   AGE
hello-1618304040-mr2v5              0/1     Completed   0          2m40s
hello-1618304100-mzcs9              0/1     Completed   0          100s
hello-1618304160-hf56t              0/1     Completed   0          39s

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe cronjob hello
......
Events:
  Type    Reason            Age    From                Message
  ----    ------            ----   ----                -------
  Normal  SuccessfulCreate  5m17s  cronjob-controller  Created job hello-1618304040
  Normal  SawCompletedJob   5m7s   cronjob-controller  Saw completed job: hello-1618304040, status: Complete
  Normal  SuccessfulCreate  4m17s  cronjob-controller  Created job hello-1618304100
  Normal  SawCompletedJob   4m7s   cronjob-controller  Saw completed job: hello-1618304100, status: Complete
  Normal  SuccessfulCreate  3m16s  cronjob-controller  Created job hello-1618304160
  Normal  SawCompletedJob   3m6s   cronjob-controller  Saw completed job: hello-1618304160, status: Complete
  Normal  SuccessfulCreate  2m16s  cronjob-controller  Created job hello-1618304220
  Normal  SuccessfulDelete  2m6s   cronjob-controller  Deleted job hello-1618304040
  Normal  SawCompletedJob   2m6s   cronjob-controller  Saw completed job: hello-1618304220, status: Complete
  Normal  SuccessfulCreate  76s    cronjob-controller  Created job hello-1618304280
  Normal  SawCompletedJob   66s    cronjob-controller  Saw completed job: hello-1618304280, status: Complete
  Normal  SuccessfulDelete  66s    cronjob-controller  Deleted job hello-1618304100
  Normal  SuccessfulCreate  15s    cronjob-controller  Created job hello-1618304340
  Normal  SawCompletedJob   5s     cronjob-controller  Saw completed job: hello-1618304340, status: Complete
  Normal  SuccessfulDelete  5s     cronjob-controller  Deleted job hello-1618304160

 

 

 

