Blue-Green Deployment, Canary Release, and Rolling Update in Kubernetes


Implementing Blue-Green Deployment with Kubernetes Ingress

https://blog.csdn.net/ljx1528/article/details/108801579

An Introduction to Blue-Green Deployment, Canary Release, and Rolling Update in Kubernetes

Canary release (also called gray release or gray update):

A canary release typically starts by releasing to one machine, or to a small proportion of servers such as 2%, mainly for traffic validation; this is called a canary test (commonly called gray testing in China). The name comes from mining: before going down a shaft, miners would lower a canary in to detect toxic gases, and whether the canary survived told them if the mine was safe. A simple canary test is usually verified by hand; a sophisticated one needs fairly complete monitoring infrastructure, using metric feedback to observe the canary's health as the basis for continuing or rolling back the release. If the canary test passes, the remaining V1 instances are all upgraded to V2; if it fails, the canary is rolled back and the release is aborted.
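Kubernetes itself has no traffic-percentage knob for this, but ingress controllers do. As a minimal sketch, assuming the ingress-nginx controller and an existing Ingress/Service for the stable version, the canary annotations below (the names myapp-canary, myapp.example.com, and myapp-v2 are hypothetical) route roughly 2% of requests to the new version:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"        # mark this Ingress as a canary
    nginx.ingress.kubernetes.io/canary-weight: "2"    # send ~2% of traffic here
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v2    # Service backing the canary version
            port:
              number: 80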

Rolling update:

A rolling update is a further refinement of the canary release: a highly automated release method with a smooth user experience, and the mainstream approach in mature engineering organizations today. A rolling release usually consists of several batches, each of a configurable size (often defined in a release template). For example: batch one is 1 machine (the canary), batch two 10%, batch three 50%, batch four 100%. An observation interval is left between batches, and the next batch proceeds only after manual verification or monitoring feedback confirms there is no problem, so the overall process is fairly slow (the canary usually gets a longer interval than later batches, e.g. 10 minutes for the canary and 2 minutes between subsequent batches).
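In Kubernetes the batch size is expressed declaratively rather than batch by batch: the Deployment's strategy block bounds how many pods may be added above or taken below the desired count at any moment. A sketch of the deploy-demo.yaml used later in this article, extended with an explicit strategy block (the values match example 2 below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: test
spec:
  replicas: 5
  strategy:
    type: RollingUpdate     # the default strategy type
    rollingUpdate:
      maxSurge: 1           # at most 1 pod above the desired count (6 total)
      maxUnavailable: 0     # never fewer than 5 available pods
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80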

Blue-green deployment:

Some applications only need a new version deployed and traffic switched to it immediately; for these we perform a blue/green deployment. In a blue/green deployment, a new copy of the application (green) is deployed alongside the existing version (blue), and the application's ingress/router is then updated to switch to the new version (green). You then wait for the old (blue) version to finish the requests already sent to it, but from then on essentially all of the application's traffic moves to the new version at once. Kubernetes has no built-in blue/green deployment; currently the best approach is to create a new Deployment and then update the application's Service to point at it. In short, blue-green deployment leaves the old version running, deploys the new version, tests it, and switches traffic over only after the new version is confirmed OK. It requires no downtime and carries relatively low risk.
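The cutover itself can be a single selector change on the Service. Using the Service and labels from the blue-green example at the end of this article, a sketch of the switch (patching version back to v1 rolls it back) would be:

kubectl patch service myapp-lan -n blue-green -p '{"spec":{"selector":{"app":"myapp","version":"v2"}}}'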

Defining a Deployment

A Deployment implements its update logic and update strategy by means of ReplicaSets. To see which fields the Deployment resource object can define, run the following command:

kubectl explain deploy

KIND:     Deployment
VERSION:  extensions/v1beta1
DESCRIPTION:
     DEPRECATED - This group version of Deployment is deprecated by
     apps/v1beta2/Deployment. See the release notes for more information.
     Deployment enables declarative updates for Pods and ReplicaSets.
# We will use apps/v1
FIELDS:
   apiVersion  <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
   kind  <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
   metadata  <Object>
     Standard object metadata.
   spec  <Object>
     Specification of the desired behavior of the Deployment.
   status <Object>
     Most recently observed status of the Deployment.

 kubectl explain deploy.spec

KIND:     Deployment
VERSION:  extensions/v1beta1
RESOURCE: spec <Object>
DESCRIPTION:
     Specification of the desired behavior of the Deployment.
     DeploymentSpec is the specification of the desired behavior of the
     Deployment.
FIELDS:
   minReadySeconds  <integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)
   paused <boolean>
     Indicates that the deployment is paused and will not be processed by the
     deployment controller.
# paused: when updating, pause right after the new pods are created instead of proceeding immediately
   progressDeadlineSeconds <integer>
     The maximum time in seconds for a deployment to make progress before it is
     considered to be failed. The deployment controller will continue to process
     failed deployments and a condition with a ProgressDeadlineExceeded reason
     will be surfaced in the deployment status. Note that progress will not be
     estimated during the time a deployment is paused. This is not set by
     default.
   replicas <integer>
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.
   revisionHistoryLimit  <integer>
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified.
# Number of old revisions retained for rollback; the default is 10
   rollbackTo  <Object>
     DEPRECATED. The config this deployment is rolling back to. Will be cleared
     after rollback is done.
   selector  <Object>
     Label selector for pods. Existing ReplicaSets whose pods are selected by
     this will be the ones affected by this deployment.
   strategy  <Object>
     The deployment strategy to use to replace existing pods with new ones.
# The update strategy; defines the supported rolling update behavior
   template  <Object> -required-
     Template describes the pods that will be created.

 kubectl explain deploy.spec.strategy

KIND:     Deployment
VERSION:  extensions/v1beta1
RESOURCE: strategy <Object>
DESCRIPTION:
     The deployment strategy to use to replace existing pods with new ones.
     DeploymentStrategy describes how to replace existing pods with new ones.
FIELDS:
   rollingUpdate  <Object>
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.
   type  <string>
     Type of deployment. Can be "Recreate" or "RollingUpdate". Default is
     RollingUpdate.
# Two update types are supported: Recreate and RollingUpdate
# Recreate is a recreate-style update: delete the old pods, then create new ones
# RollingUpdate is a rolling update: it defines how the rollout proceeds, i.e. how many extra or missing pods are allowed, controlling the update granularity
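For example, to switch an existing Deployment to the Recreate strategy with a patch (a sketch; rollingUpdate must be cleared in the same patch because its parameters are only valid when the type is RollingUpdate):

kubectl patch deployment myapp-deploy -n test -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'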

 kubectl explain deploy.spec.strategy.rollingUpdate

KIND:     Deployment
VERSION:  extensions/v1beta1
RESOURCE: rollingUpdate <Object>
DESCRIPTION:
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.
     Spec to control the desired behavior of rolling update.
FIELDS:
   maxSurge  <string>
     The maximum number of pods that can be scheduled above the desired number
     of pods. Value can be an absolute number (ex: 5) or a percentage of desired
     pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number
     is calculated from percentage by rounding up. By default, a value of 1 is
     used. Example: when this is set to 30%, the new RC can be scaled up
     immediately when the rolling update starts, such that the total number of
     old and new pods do not exceed 130% of desired pods. Once old pods have
     been killed, new RC can be scaled up further, ensuring that total number of
     pods running at any time during the update is at most 130% of desired pods.
# The maximum number of pods allowed above the desired replica count during an update.
# It can be specified in two ways: either as an absolute number,
# or as a percentage: e.g. with 5 desired pods, 20% allows 1 extra pod (ceil(5 x 0.2) = 1)
# and 40% allows 2 extra (ceil(5 x 0.4) = 2); percentages are rounded up.
   maxUnavailable <string>
     The maximum number of pods that can be unavailable during the update. Value
     can be an absolute number (ex: 5) or a percentage of desired pods (ex:
     10%). Absolute number is calculated from percentage by rounding down. This
     can not be 0 if MaxSurge is 0. By default, a fixed value of 1 is used.
     Example: when this is set to 30%, the old RC can be scaled down to 70% of
     desired pods immediately when the rolling update starts. Once new pods are
     ready, old RC can be scaled down further, followed by scaling up the new
     RC, ensuring that the total number of pods available at all times during
     the update is at least 70% of desired pods.
# The maximum number of pods allowed to be unavailable during the update

Demonstrating a Deployment's update strategies

Suppose there are 5 replicas with at most 1 unavailable: that means at least 4 must be available at all times. A Deployment is a three-level structure: the Deployment controls a ReplicaSet, and the ReplicaSet controls the pods; we create pods through a Deployment (the ownership chain can be inspected as shown below).
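To see this three-level ownership chain on a live cluster, you can list all three kinds at once and read a pod's ownerReferences (a sketch; <pod-name> is a placeholder for one of the listed pods):

kubectl get deploy,rs,pods -n test
kubectl get pod <pod-name> -n test -o jsonpath='{.metadata.ownerReferences[0].kind}'   # prints: ReplicaSet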
cd /root/demo-test

Create a deploy-demo.yaml file with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
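
The manifest targets the test namespace; if it does not exist yet, create it first (a sketch):

kubectl create namespace test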

kubectl apply -f deploy-demo.yaml
kubectl get deploy -n test

You can see the controller created is named myapp-deploy.
The desired pod count is 2, the current count is 2, 2 are ready, and 2 are available.

kubectl get rs -n test

This shows that creating the Deployment also created an rs (ReplicaSet); the random-looking suffix 7657db6c59 in its name is a hash of the pod template (template) it references.

kubectl get pods -n test

A Deployment can be updated by editing its manifest directly. For example, to change the replica count from 2 to 3, open deploy-demo.yaml, change the replicas value from 2 to 3, save and exit, then run:

kubectl apply -f deploy-demo.yaml

Note that apply differs from create: apply can be executed repeatedly, whereas create can only be executed once; running it again reports an error because the resource already exists.
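A quick illustration of the difference (a sketch; the exact error wording varies by Kubernetes version):

kubectl apply -f deploy-demo.yaml    # fine: idempotently applies the manifest
kubectl create -f deploy-demo.yaml   # fails: Error from server (AlreadyExists): deployments.apps "myapp-deploy" already exists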

kubectl get pods -n test

You can see the pod replica count has become 3. To view the details of the myapp-deploy controller:

kubectl describe deploy myapp-deploy -n test

Name:                   myapp-deploy
Namespace:              default
CreationTimestamp:      Thu, 27 Dec 2018 15:47:48 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration=
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"myapp-
deploy","namespace":"default"},"spec":{"replicas":3,"selector":{...
Selector:               app=myapp,release=canary
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
# The default update strategy: RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
# Up to 25% extra pods allowed; since 25% of 3 is less than one pod, it is rounded up to one
Pod Template:
  Labels:  app=myapp
           release=canary
  Containers:
   myapp:
    Image:        ikubernetes/myapp:v1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-deploy-69b47bc96d (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  26m   deployment-controller  Scaled up replica set myapp-deploy-69b47bc96d to 2
  Normal  ScalingReplicaSet  4m    deployment-controller  Scaled up replica set myapp-deploy-69b47bc96d to 3

Worked examples

Example 1: Canary release

Open a terminal tab to watch the update process:

kubectl get pods -l app=myapp -n test -w

(You can also use kubectl rollout status deployment myapp-deploy -n test, which prints something like: Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated.) After the command below runs, the watch shows the previous pods still present and one new pod created; nothing is deleted immediately.

In another tab, run:

kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v2 -n test && kubectl rollout pause deployment myapp-deploy -n test

Note: this updates the myapp container's image to the v2 version and, right after one new pod is created, pauses the rollout. This is the canary release. If there are no problems after pausing for a few hours, resume the rollout and the remaining steps run in turn, upgrading all the pods.

(1) Resume the rollout

Open a new tab:

kubectl get pods -l app=myapp -n test -w

Then in another tab:

kubectl rollout resume deployment myapp-deploy -n test

In the watch window you will see output like the following, as the containers in the remaining pods are updated to the new version:

myapp-deploy-6bdcd6755d-llrw8   0/1       Pending   0         0s
myapp-deploy-6bdcd6755d-llrw8   0/1       ContainerCreating   0         0s
myapp-deploy-67f6f6b4dc-7cs8v   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-7cs8v   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-7cs8v   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-7cs8v   0/1       Terminating   0         1h
myapp-deploy-6bdcd6755d-llrw8   1/1       Running   0         16s
myapp-deploy-67f6f6b4dc-nhcp2   1/1       Terminating   0         1h
myapp-deploy-6bdcd6755d-r4mrl   0/1       Pending   0         0s
myapp-deploy-6bdcd6755d-r4mrl   0/1       Pending   0         1s
myapp-deploy-6bdcd6755d-r4mrl   0/1       ContainerCreating   0         1s
myapp-deploy-67f6f6b4dc-nhcp2   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-nhcp2   0/1       Terminating   0         1h
myapp-deploy-6bdcd6755d-r4mrl   1/1       Running   0         5s
myapp-deploy-67f6f6b4dc-hwx7w   1/1       Terminating   0         1h
myapp-deploy-6bdcd6755d-j8nj8   0/1       Pending   0         0s
myapp-deploy-6bdcd6755d-j8nj8   0/1       Pending   0         0s
myapp-deploy-6bdcd6755d-j8nj8   0/1       ContainerCreating   0         0s
myapp-deploy-67f6f6b4dc-nhcp2   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-nhcp2   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-hwx7w   0/1       Terminating   0         1h
myapp-deploy-6bdcd6755d-j8nj8   1/1       Running   0         4s
myapp-deploy-67f6f6b4dc-dbcqh   1/1       Terminating   0         1h
myapp-deploy-6bdcd6755d-lpk5b   0/1       Pending   0         1s
myapp-deploy-6bdcd6755d-lpk5b   0/1       Pending   0         1s
myapp-deploy-6bdcd6755d-lpk5b   0/1       ContainerCreating   0         1s
myapp-deploy-67f6f6b4dc-dbcqh   0/1       Terminating   0         1h
myapp-deploy-6bdcd6755d-lpk5b   1/1       Running   0         4s
myapp-deploy-67f6f6b4dc-b4wfc   1/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-b4wfc   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-hwx7w   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-hwx7w   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-b4wfc   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-b4wfc   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-dbcqh   0/1       Terminating   0         1h
myapp-deploy-67f6f6b4dc-dbcqh   0/1       Terminating   0         1h

kubectl get rs -n test

You can see there are now two ReplicaSet controllers.

(2) Rollback

If the version you just rolled out turns out to have problems, you can roll back. First check which revisions exist:

kubectl rollout history deployment myapp-deploy -n test
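Then roll back (a sketch reconstructing the step implied by the history output below; --to-revision picks the revision to return to):

kubectl rollout undo deployment myapp-deploy --to-revision=1 -n test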

The history output (a screenshot in the original) shows that after the rollback, revision 1 is gone: it has been re-recorded as revision 3, and revision 3's predecessor is revision 2.

kubectl get rs -n test -o wide     

The output (a screenshot in the original) shows the Deployment is using the first ReplicaSet again; that is the ReplicaSet restored by the rollback.

Example 2: Rolling update

(1) kubectl get pods -l app=myapp -n test -w

Watch in one window, and in another window do the following:

cd /root/demo_test
cat deploy-demo.yaml

Change ikubernetes/myapp:v1 to ikubernetes/myapp:v3, save and exit, then run:

kubectl apply -f deploy-demo.yaml

Back in the watch window you will see output like the following (a screenshot in the original):

Pending means the pod is being scheduled, ContainerCreating means a pod is being created, and Running means a pod is up; once a new pod is Running, an old pod is Terminated (stopped), and so on until every pod has completed the rolling upgrade.

In another window run kubectl get rs -n test. The output (a screenshot in the original) shows two ReplicaSets: the upper one is the pre-upgrade ReplicaSet, already scaled down to zero but kept so that you can roll back at any time.

To view the rollout history of the myapp-deploy controller:

kubectl rollout history deployment myapp-deploy -n test

To roll back, run:

kubectl rollout undo deployment myapp-deploy -n test

(2) Scale up to 5 replicas

cat deploy-demo.yaml

Change the replicas value to 5, then apply:

kubectl apply -f deploy-demo.yaml

kubectl get pods -n test

The output (a screenshot in the original) shows the scale-up succeeded.

(3) Modify maxSurge and maxUnavailable to control the rolling update strategy

Change the update strategy to allow at most 0 unavailable pods and at most 1 extra pod; in other words, with 5 replicas the pod count may never drop below 5 and never exceed 6:

kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}' -n test

kubectl describe deployment myapp-deploy -n test

Viewing the controller's details now shows:

RollingUpdateStrategy:  0 max unavailable, 1 max surge

The rollingUpdate strategy has changed to what we just set: since the desired replica count is 5, the values 0 and 1 mean never fewer than 5 pods and never more than 6. This is how the RollingUpdateStrategy field controls the rolling update strategy.

Example 3: Blue-green deployment

The lan.yaml manifest ("lan" is pinyin for blue) is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
  namespace: blue-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: janakiramm/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

The lv.yaml manifest ("lv" is pinyin for green) is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
  namespace: blue-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: janakiramm/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

The service_lanlv.yaml manifest is as follows:

apiVersion: v1
kind: Service
metadata:
  name: myapp-lan
  namespace: blue-green
  labels:
    app: myapp
    version: v1
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30062
    name: http
  selector:
    app: myapp
    version: v1

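All three manifests target the blue-green namespace; if it does not exist yet, create it first (a sketch):

kubectl create namespace blue-green
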
cd /root/demo_test/lanlv
kubectl apply -f lan.yaml
kubectl apply -f lv.yaml
kubectl get pods -n blue-green

The output (a screenshot in the original) shows two sets of pods: myapp-v1 is the blue program (the pre-upgrade version) and myapp-v2 is the green program (the post-upgrade version), running side by side. Now expose the blue version:

kubectl apply -f service_lanlv.yaml

kubectl get svc -n blue-green

Visit http://<any-k8s-node-ip>:30062 in a browser; it serves the v1 (blue) application (a screenshot in the original).

Now modify the service_lanlv.yaml configuration, changing the label selector so that it matches the green program (the post-upgrade version).

The service_lv.yaml manifest is as follows:

apiVersion: v1
kind: Service
metadata:
  name: myapp-lan
  namespace: blue-green
  labels:
    app: myapp
    version: v2
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30062
    name: http
  selector:
    app: myapp
    version: v2
kubectl apply -f service_lv.yaml

kubectl get svc -n blue-green

The output (a screenshot in the original) shows the myapp-lan Service is unchanged in name but now selects version v2.

Visit http://<any-k8s-node-ip>:30062 in a browser again; this time it serves the v2 (green) application (a screenshot in the original).
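You can also verify the cutover from the command line (a sketch; NODE_IP stands for any node's address, and the exact page contents depend on the janakiramm/myapp images):

curl http://$NODE_IP:30062    # returns the v1 page before the switch, the v2 page after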

