k8s Cluster Upgrade


Cluster Upgrade

The cluster used in this course is v1.10.0, which is now fairly old: the latest releases are already v1.14.x. To keep the course content current, we need to upgrade the cluster. Our cluster was built with kubeadm, and upgrading a kubeadm cluster is very convenient, but the version gap here is too large to jump directly from 1.10.x to 1.14.x: kubeadm does not support skipping minor versions during an upgrade. So from 1.10 we can only upgrade to 1.11 first, then from 1.11 to 1.12, and so on. The procedure is essentially the same for each step, so the later upgrades are straightforward. Below we upgrade the cluster to v1.11.0 first.

Upgrading the Cluster

First, save a copy of the kubeadm config:

$ kubeadm config view
api:
  advertiseAddress: 10.151.30.11
  bindPort: 6443
  controlPlaneEndpoint: "" auditPolicy: logDir: /var/log/kubernetes/audit logMaxAge: 2 path: "" authorizationModes: - Node - RBAC certificatesDir: /etc/kubernetes/pki cloudProvider: "" criSocket: /var/run/dockershim.sock etcd: caFile: "" certFile: "" dataDir: /var/lib/etcd endpoints: null image: "" keyFile: "" imageRepository: k8s.gcr.io kubeProxy: config: bindAddress: 0.0.0.0 clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /var/lib/kube-proxy/kubeconfig.conf qps: 5 clusterCIDR: 10.244.0.0/16 configSyncPeriod: 15m0s conntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0s enableProfiling: false healthzBindAddress: 0.0.0.0:10256 hostnameOverride: "" iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30s ipvs: minSyncPeriod: 0s scheduler: "" syncPeriod: 30s metricsBindAddress: 127.0.0.1:10249 mode: "" nodePortAddresses: null oomScoreAdj: -999 portRange: "" resourceContainer: /kube-proxy udpIdleTimeout: 250ms kubeletConfiguration: {} kubernetesVersion: v1.10.0 networking: dnsDomain: cluster.local podSubnet: 10.244.0.0/16 serviceSubnet: 10.96.0.0/12 nodeName: ydzs-master privilegedPods: false token: "" tokenGroups: - system:bootstrappers:kubeadm:default-node-token tokenTTL: 24h0m0s tokenUsages: - signing - authentication unifiedControlPlaneImage: "" 

Change the imageRepository value above to gcr.azk8s.cn/google_containers, then save the content to a file named kubeadm-config.yaml (if your cluster can pull images from gcr.io, you can skip this change).
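A minimal shell sketch of both steps (the sed pattern assumes the imageRepository line looks exactly as in the output above):

$ kubeadm config view > kubeadm-config.yaml
# point imageRepository at the mirror; skip this if gcr.io is reachable from your nodes
$ sed -i 's#imageRepository: k8s.gcr.io#imageRepository: gcr.azk8s.cn/google_containers#' kubeadm-config.yaml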

Then upgrade kubeadm (along with kubectl):

$ yum makecache fast && yum install -y kubeadm-1.11.0-0 kubectl-1.11.0-0
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:14:41Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

The kubeadm upgrade plan command fetches release information from dl.k8s.io while it runs, and that address is not reachable without a proxy from mainland China. So we upgrade kubeadm to the target version first; the plan command then falls back to the installed kubeadm version and can still show the upgrade information for the target release.

Run the upgrade plan command to check whether the cluster can be upgraded:

$ kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0518 18:50:12.844665    9676 feature_gate.go:230] feature gates: &{map[]}
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.10.0
[upgrade/versions] kubeadm version: v1.11.0
[upgrade/versions] WARNING: Couldn't fetch latest stable version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: dial tcp 35.201.71.162:443: i/o timeout
[upgrade/versions] WARNING: Falling back to current kubeadm version as latest stable version
[upgrade/versions] WARNING: Couldn't fetch latest version in the v1.10 series from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.10.txt": Get https://dl.k8s.io/release/stable-1.10.txt: dial tcp 35.201.71.162:443: i/o timeout

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.10.0   v1.11.0

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.10.0   v1.11.0
Controller Manager   v1.10.0   v1.11.0
Scheduler            v1.10.0   v1.11.0
Kube Proxy           v1.10.0   v1.11.0
CoreDNS                        1.1.3
Kube DNS             1.14.8
Etcd                 3.1.12    3.2.18

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.11.0

_____________________________________________________________________

We can first preview the upgrade with the --dry-run flag:

$ kubeadm upgrade apply v1.11.0 --config kubeadm-config.yaml --dry-run

Note that you must pass the configuration file saved above via --config; it contains the previous cluster configuration along with the modified image repository address.

After reviewing the upgrade information above and confirming it is correct, perform the actual upgrade:

$ kubeadm upgrade apply v1.11.0 --config kubeadm-config.yaml
[preflight] Running pre-flight checks.
I0518 18:57:29.134722   12284 feature_gate.go:230] feature gates: &{map[]}
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration options from a file: kubeadm-config.yaml
I0518 18:57:29.179231   12284 feature_gate.go:230] feature gates: &{map[]}
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.11.0"
[upgrade/versions] Cluster version: v1.10.0
[upgrade/versions] kubeadm version: v1.11.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.11.0"...
Static pod: kube-apiserver-ydzs-master hash: 3abd7df4382a9b60f60819f84de40e11
Static pod: kube-controller-manager-ydzs-master hash: 1a0f3ccde96238d31012390b61109573
Static pod: kube-scheduler-ydzs-master hash: 2acb197d598c4730e3f5b159b241a81b

After a while, output like the following shows that the cluster was upgraded successfully:

......
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Since we already upgraded kubectl above, let's use it to check the version information:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

Both the server and the client are now on v1.11.0. Next, check the Pods in kube-system:

$ kubectl get pods -n kube-system
NAME                                             READY     STATUS    RESTARTS   AGE
authproxy-oauth2-proxy-798cff85fc-pc8x5          1/1       Running   0          34d
cert-manager-796fb45d79-wcrfp                    1/1       Running   2          34d
coredns-7f6746b7f-2cs2x                          1/1       Running   0          5m
coredns-7f6746b7f-clphf                          1/1       Running   0          5m
etcd-ydzs-master                                 1/1       Running   0          10m
kube-apiserver-ydzs-master                       1/1       Running   0          7m
kube-controller-manager-ydzs-master              1/1       Running   0          7m
kube-flannel-ds-amd64-jxzq9                      1/1       Running   8          64d
kube-flannel-ds-amd64-r56r9                      1/1       Running   3          64d
kube-flannel-ds-amd64-xw9fx                      1/1       Running   2          64d
kube-proxy-gqvdg                                 1/1       Running   0          3m
kube-proxy-sn7xb                                 1/1       Running   0          3m
kube-proxy-vbrr7                                 1/1       Running   0          2m
kube-scheduler-ydzs-master                       1/1       Running   0          6m
nginx-ingress-controller-587b4c68bf-vsqgm        1/1       Running   2          34d
nginx-ingress-default-backend-64fd9fd685-lmxhw   1/1       Running   1          34d
tiller-deploy-847cfb9744-5cvh8                   1/1       Running   0          4d

Upgrading the kubelet

Notice that the old kube-dns service has been replaced by coredns: CoreDNS became the default DNS addon in v1.11.0. It's worth verifying that services in the cluster are still reachable after the switch.
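A quick way to sanity-check cluster DNS (a sketch; the busybox:1.28 image and the Pod name dns-test are just examples) is to run a lookup from a throwaway Pod:

$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

If the lookup returns the cluster IP of the kubernetes service, CoreDNS is answering correctly. Next, check the node information: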

$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
ydzs-master   Ready     master    64d       v1.10.0
ydzs-node1    Ready     <none>    64d       v1.10.0
ydzs-node2    Ready     <none>    64d       v1.10.0

The node versions have not changed yet because the kubelet on each node has not been upgraded. We can check the kubelet version directly:

$ kubelet --version
Kubernetes v1.10.0

Now let's manually upgrade the kubelet, starting with the master:

$ yum install -y kubelet-1.11.0-0
# check the version after installation
$ kubelet --version
Kubernetes v1.11.0
# then restart the kubelet service
$ systemctl daemon-reload
$ systemctl restart kubelet
$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
ydzs-master   Ready     master    64d       v1.11.0
ydzs-node1    Ready     <none>    64d       v1.10.0
ydzs-node2    Ready     <none>    64d       v1.10.0

Notes:

  • If swap has not been disabled on a node, restarting the kubelet service will fail, so it's best to turn swap off first by running: swapoff -a.
  • The default pod-infra-container-image for the 1.11.0 kubelet is named k8s.gcr.io/pause:3.1, so check in advance whether this image exists on the cluster nodes. Our previous 1.10.0 cluster used the name k8s.gcr.io/pause-amd64:3.1 by default, so if a node only has the old pause image, re-tag it first:
$ docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1

If the image is not present, you can pull it onto the nodes in advance, or specify a different image via a kubelet flag by adding the following parameter to /var/lib/kubelet/kubeadm-flags.env:

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --pod-infra-container-image=cnych/pause-amd64:3.1 
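After editing this file, restart the kubelet so the new flag takes effect (on kubeadm-installed nodes this env file is loaded by the kubelet systemd drop-in):

$ systemctl daemon-reload
$ systemctl restart kubelet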

The node whose kubelet we upgraded now reports the updated version; upgrade the kubelet on the other two nodes in the same way.

Also note that it's best to mark a node unschedulable (and drain it) before upgrading its kubelet, and to make it schedulable again once the upgrade is done; this avoids unnecessary errors. A sketch follows.
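A hedged per-node sketch combining the two points above (the node name and SSH access are assumptions for illustration; run the yum/systemctl commands on the node itself):

# on the master: evict workloads and mark the node unschedulable
$ kubectl drain ydzs-node1 --ignore-daemonsets
# on ydzs-node1: upgrade and restart the kubelet
$ yum install -y kubelet-1.11.0-0
$ systemctl daemon-reload && systemctl restart kubelet
# back on the master: make the node schedulable again
$ kubectl uncordon ydzs-node1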

Finally, check the upgraded cluster:

$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
ydzs-master   Ready     master    64d       v1.11.0
ydzs-node1    Ready     <none>    64d       v1.11.0
ydzs-node2    Ready     <none>    64d       v1.11.0

At this point the cluster upgrade is complete. The same method can take the cluster to v1.12.x, v1.13.x, and v1.14.x, one minor version at a time, and the upgrade does not disrupt running workloads.
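For reference, the next hop would look roughly like this (a sketch; the exact 1.12 patch version is an assumption, use whichever 1.12.x packages your yum mirror provides):

# re-export the in-cluster config and adjust imageRepository again if needed
$ kubeadm config view > kubeadm-config.yaml
$ yum install -y kubeadm-1.12.0-0 kubectl-1.12.0-0
$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.12.0 --config kubeadm-config.yaml
# then upgrade kubelet to the matching 1.12 package on every node as above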

