Linux Ops and Architecture: Upgrading a Binary-Deployed Kubernetes Cluster


I. Upgrade Notes

For a patch-level upgrade of a binary-deployed Kubernetes cluster, replacing the binaries is usually all that is required. For an upgrade across minor releases, pay attention to changes in kubelet flags and to behavioral changes in the other components after the upgrade. Because Kubernetes releases move quickly and many surrounding dependencies take time to catch up, running the very latest version in production is not recommended.
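Before starting, it can be useful to record the exact version of every component on each host, for example (a minimal check; --short is still supported in these releases, and the binaries are assumed to be on the PATH, otherwise use their full paths):

kubectl version --short
kubelet --version
kube-proxy --version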

II. Software Preparation

1. Download URL

https://github.com/kubernetes/kubernetes/releases

2. Current cluster version

[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    <none>   336d   v1.16.0
k8s-node2   Ready    <none>   336d   v1.16.0
k8s-node3   Ready    <none>   336d   v1.16.0
[root@k8s-node1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

3. Target version: v1.18.15

wget https://dl.k8s.io/v1.18.15/kubernetes-server-linux-amd64.tar.gz
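Before touching the cluster, you can sanity-check the downloaded archive, for example by comparing its SHA-512 hash with the one published in the official v1.18.15 CHANGELOG and listing the server binaries it ships:

sha512sum kubernetes-server-linux-amd64.tar.gz                        # compare with the checksum in the CHANGELOG
tar tzf kubernetes-server-linux-amd64.tar.gz | grep 'server/bin/kube'   # list the shipped control-plane and node binaries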

III. Performing the Upgrade

1. Upgrading the master nodes

① Upgrade the kubectl tool (run on all master nodes)

Back up:

cd /usr/bin && mv kubectl{,.bak_2021-03-02}

Upgrade:

tar xf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kubectl /usr/bin/
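If there is more than one master, the new kubectl can be pushed from this machine to the remaining masters with a small loop (a sketch; the hostnames below are placeholders for your own master hosts):

for m in master2 master3; do
    scp kubernetes/server/bin/kubectl $m:/usr/bin/kubectl
done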

Before the upgrade:

[root@k8s-node1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

After the upgrade:

[root@k8s-node1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

The Server Version still shows v1.16.0 because the apiserver has not been upgraded yet.

② Upgrade the control-plane components (in production, the masters can be upgraded and replaced one at a time)

# Run on all master nodes (for a production upgrade, stopping keepalived is optional)
systemctl stop keepalived   # stop keepalived on this node first so the HA VIP fails over to another master
systemctl stop kube-apiserver
systemctl stop kube-scheduler
systemctl stop kube-controller-manager
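After keepalived stops, you can confirm that the VIP has actually failed over to another master (a sketch; replace 192.168.1.100 with your cluster's VIP):

ip addr show | grep 192.168.1.100    # should print nothing on this node once the VIP has moved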

Back up:

cd /app/kubernetes/bin
mv kube-apiserver{,.bak_2021-03-02}
mv kube-controller-manager{,.bak_2021-03-02}
mv kube-scheduler{,.bak_2021-03-02}

Upgrade:

cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler} /app/kubernetes/bin/
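Before starting the services, it is worth confirming that the binaries now in place really are the new version:

/app/kubernetes/bin/kube-apiserver --version
/app/kubernetes/bin/kube-controller-manager --version
/app/kubernetes/bin/kube-scheduler --version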

Start keepalived and the apiserver:

systemctl start keepalived
systemctl start kube-apiserver

Check the startup log for errors:

journalctl -fu kube-apiserver

Check the component status:

[root@k8s-node1 ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}                                                                           
etcd-1               Healthy     {"health":"true"}                                                                           
etcd-2               Healthy     {"health":"true"} 

scheduler and controller-manager show Unhealthy here only because they have not been restarted yet; the fact that the etcd members can still be read means kube-apiserver itself is working.

[root@k8s-node1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:14:05Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Both the client and the server now report v1.18.15, which confirms the apiserver upgrade succeeded.

Start the remaining components:

systemctl start kube-controller-manager && systemctl start kube-scheduler
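As with the apiserver, you can tail the logs of these two components to confirm they start cleanly:

journalctl -fu kube-controller-manager
journalctl -fu kube-scheduler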

Check the status; at this point the Kubernetes control plane is fully back up:

[root@k8s-node1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}

2. Upgrading the node components

① Stop the services and back up the binaries (on all node hosts)
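Optionally, before stopping the services you can drain the node so its Pods are evicted and rescheduled elsewhere, and uncordon it once the upgrade is done (a sketch; k8s-node2 is just an example node name, and in v1.16–v1.18 the emptyDir eviction flag is still --delete-local-data):

kubectl drain k8s-node2 --ignore-daemonsets --delete-local-data
# ...upgrade kubelet/kube-proxy on that node, then:
kubectl uncordon k8s-node2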

systemctl stop kubelet
systemctl stop kube-proxy

Back up:

cd /app/kubernetes/bin
mv kubelet{,.bak_2021-03-02}
mv kube-proxy{,.bak_2021-03-02}

Upgrade:

scp kubernetes/server/bin/{kubelet,kube-proxy} /app/kubernetes/bin/        # distribute to all node hosts
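If the extracted archive only exists on one machine, a small loop can distribute the binaries to the remaining nodes (a sketch; the node names are placeholders, and kubelet/kube-proxy must already be stopped and backed up on each target):

for n in k8s-node2 k8s-node3; do
    scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy $n:/app/kubernetes/bin/
done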

Start the kubelet service:

systemctl daemon-reload && systemctl start kubelet

Watch the kubelet logs and check for errors:

journalctl -fu kubelet

After a few minutes, check that the nodes are back to normal:

[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    <none>   336d   v1.18.15
k8s-node2   Ready    <none>   336d   v1.18.15
k8s-node3   Ready    <none>   336d   v1.18.15

Start the kube-proxy service:

systemctl start kube-proxy
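As with kubelet, watch the kube-proxy logs for errors:

journalctl -fu kube-proxy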

Verify the cluster status:

[root@k8s-node1 ~]# systemctl status kube-proxy|grep Active
   Active: active (running) since Tue 2021-03-02 17:31:32 CST; 33s ago
[root@k8s-node1 ~]# kubectl cluster-info 
Kubernetes master is running at http://localhost:8080
CoreDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    <none>   337d   v1.18.15
k8s-node2   Ready    <none>   337d   v1.18.15
k8s-node3   Ready    <none>   337d   v1.18.15

IV. Create a Test Application

1. Test YAML manifest

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
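Save the manifest to a file (the filename busybox.yaml below is arbitrary) and create the Pod:

kubectl apply -f busybox.yaml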

2. Check the newly created Pod

[root@k8s-node1 ~]# kubectl get pod
NAME                                READY   STATUS      RESTARTS   AGE
busybox                             1/1     Running     0          14s
cronjob-demo-1614661200-mth6n       0/1     Completed   0          4h37m
cronjob-demo-1614664800-2s9w5       0/1     Completed   0          3h37m
cronjob-demo-1614668400-jt7ld       0/1     Completed   0          157m
cronjob-demo-1614672000-8zx6j       0/1     Completed   0          82m
cronjob-demo-1614675600-whjvf       0/1     Completed   0          37m
job-demo-ss6mf                      0/1     Completed   0          12d
kubia-deployment-7b5fb95f85-fzn4d   1/1     Running     0          33d
static-web-k8s-node2                1/1     Running     0          64m
tomcat1-79c47dc8f-75mr4             1/1     Running     3          41d
tomcat2-8fd94cc4-z9mmq              1/1     Running     4          41d
tomcat3-6488575ff7-csmsh            1/1     Running     4          41d

3. Verify that CoreDNS works correctly

[root@k8s-node1 ~]# kubectl exec -it busybox -- nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

This completes the binary-deployment Kubernetes version upgrade.

