1. RC (Replication Controller): what a Replication Controller does.
Once an application is hosted on Kubernetes, Kubernetes needs to keep it running continuously, and that is the RC's job: it ensures that the specified number of Pods is running in the cluster at all times. On top of that, the RC provides some higher-level features, such as rolling upgrades and upgrade rollbacks.
Put plainly: once an application is running on k8s, k8s must keep its containers up and highly available, and the RC is what guarantees that. The way it works is that the RC continuously monitors the running state of the Pod resources; as soon as a Pod goes abnormal, the RC has k8s start a replacement Pod on another Node, keeping the service highly available.
2. First, check that each of your k8s nodes is running normally, then create an rc directory to hold the RC (Replication Controller) YAML config files.
[root@k8s-master ~]# kubectl get nods
the server doesn't have a resource type "nods"
[root@k8s-master ~]# kubectl get node
NAME         STATUS    AGE
k8s-master   Ready     6d
k8s-node2    Ready     6d
k8s-node3    Ready     6d
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-master   Ready     6d
k8s-node2    Ready     6d
k8s-node3    Ready     6d
[root@k8s-master ~]# clear
[root@k8s-master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# clear
[root@k8s-master ~]# cd k8s/
[root@k8s-master k8s]# ls
pod
[root@k8s-master k8s]# mkdir rc
[root@k8s-master k8s]# cd rc/
[root@k8s-master rc]# ls
[root@k8s-master rc]# vim nginx_rc_yaml
[root@k8s-master rc]# kubectl create -f nginx_rc_yaml
replicationcontroller "myweb" created
[root@k8s-master rc]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
myweb     2         2         1         13s
[root@k8s-master rc]#
Create the nginx_rc_yaml config file with the following content.
# Declare the API version.
apiVersion: v1
# kind is the resource type; here the resource is a ReplicationController.
kind: ReplicationController
# The resource's name lives in its metadata.
metadata:
  # The name attribute is myweb, i.e. this ReplicationController is called myweb.
  name: myweb
# spec is the detailed specification; it defines the containers.
spec:
  # Declare a replica count of 2, meaning the RC starts two identical Pods.
  replicas: 2
  # Label selector.
  selector:
    app: myweb
  # Pod startup template: almost identical to a standalone Pod's YAML, except
  # there is no name, because two Pods cannot share exactly the same name.
  # With no name given, the RC generates random names.
  template:
    # The resource's name would live in metadata, but here the RC generates
    # names for the specified number of Pods.
    metadata:
      # Attach a label (app: myweb) to each Pod; the label has a real purpose.
      labels:
        app: myweb
    # spec is the detailed specification; it defines a container.
    spec:
      # Define a container; more than one can be declared.
      containers:
      # The container is called myweb.
      - name: myweb
        # Which image to use: an official public one, or a private one as here.
        image: 192.168.110.133:5000/nginx:1.13
        # ports defines the container's ports.
        ports:
        # The container port is 80; if the container has several ports, just
        # add another line for each.
        - containerPort: 80
If you have trouble getting the YAML format right, you can use Notepad++'s YAML language mode, an online YAML validator, or IDEA's YAML editing support; IDEA's YAML support is also well worth recommending.
After creating the RC (Replication Controller), verify it. You can see that the RC created two Pods; check the Pod count and status.
[root@k8s-master rc]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
myweb     2         2         1         7m
[root@k8s-master rc]# kubectl get pods
NAME          READY     STATUS             RESTARTS   AGE
myweb-0hqc5   0/1       ImagePullBackOff   0          8m
myweb-2np4k   1/1       Running            0          8m
nginx         1/1       Running            1          3d
test1         0/1       ImagePullBackOff   0          2d
test2         2/2       Running            1          2d
test4         1/1       Running            0          2d
[root@k8s-master rc]#
Clearly, one of the two Pods I created failed to start. I wanted to delete the failed Pod, but every time I deleted one, the RC started another for me; high availability indeed. Then I deleted the RC itself, and both Pods were deleted along with it.
[root@k8s-master ~]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
myweb     2         2         1         17m
[root@k8s-master ~]# kubectl get pod -o wide
NAME          READY     STATUS             RESTARTS   AGE       IP            NODE
myweb-8cp7h   0/1       ImagePullBackOff   0          5m        172.16.85.3   k8s-master
myweb-qcgjl   1/1       Running            1          14m       172.16.5.2    k8s-node2
nginx         1/1       Running            2          3d        172.16.38.3   k8s-node3
test1         0/1       ImagePullBackOff   0          2d        172.16.85.2   k8s-master
test2         2/2       Running            3          2d        172.16.38.2   k8s-node3
test4         1/1       Running            1          2d        172.16.5.3    k8s-node2
[root@k8s-master ~]# kubectl delete rc myweb
replicationcontroller "myweb" deleted
[root@k8s-master ~]# kubectl get rc
No resources found.
[root@k8s-master ~]#
Here I deleted the unused test Pods, because my laptop has only 8 GB of RAM and I suspected it was running out of memory. After an hour of digging, it turned out not to be a memory problem at all: back when I first deployed k8s and tested nginx, I had misspelled nginx. Embarrassing.
[root@k8s-node2 ~]# docker images
REPOSITORY                                TAG      IMAGE ID       CREATED       SIZE
docker.io/busybox                         latest   1c35c4412082   8 days ago    1.22 MB
192.168.110.133:5000/nginx                1.13     ae513a47849c   2 years ago   109 MB
docker.io/nginx                           1.13     ae513a47849c   2 years ago   109 MB
192.168.110.133:5000/pod-infrastructure   latest   34d3450d733b   3 years ago   205 MB
docker.io/tianyebj/pod-infrastructure     latest   34d3450d733b   3 years ago   205 MB
[root@k8s-node2 ~]#
My RC YAML config file looks like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.110.133:5000/nginx:1.13
        # imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
But the Docker image on the master node was 192.168.110.133:5000/ngnix (misspelled), so every time I created the RC, the Pod scheduled onto the master failed to start. Embarrassing, and I spent a long time troubleshooting it. A real slap in the face.
[root@k8s-master ~]# docker images
REPOSITORY                                            TAG      IMAGE ID       CREATED        SIZE
docker.io/busybox                                     latest   1c35c4412082   8 days ago     1.22 MB
docker.io/registry                                    latest   708bc6af7e5e   4 months ago   25.8 MB
192.168.110.133:5000/ngnix                            1.13     ae513a47849c   2 years ago    109 MB
registry.access.redhat.com/rhel7/pod-infrastructure   latest   99965fb98423   2 years ago    209 MB
192.168.110.133:5000/pod-infrastructure               latest   34d3450d733b   3 years ago    205 MB
[root@k8s-master ~]#
I'll paste the error output as well, for my own future reference:
[root@k8s-master ~]# kubectl describe pod myweb-qwgsf
Name:           myweb-qwgsf
Namespace:      default
Node:           k8s-master/192.168.110.133
Start Time:     Thu, 11 Jun 2020 17:21:45 +0800
Labels:         app=myweb
Status:         Pending
IP:             172.16.85.2
Controllers:    ReplicationController/myweb
Containers:
  myweb:
    Container ID:
    Image:          192.168.110.133:5000/nginx:1.13
    Image ID:
    Port:           80/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Volume Mounts:  <none>
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
No volumes.
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen  LastSeen  Count  From                   SubObjectPath           Type     Reason             Message
  ---------  --------  -----  ----                   -------------           ----     ------             -------
  12m        12m       1      {kubelet k8s-master}                           Warning  MissingClusterDNS  kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  12m        12m       1      {default-scheduler }                           Normal   Scheduled          Successfully assigned myweb-qwgsf to k8s-master
  12m        6m        6      {kubelet k8s-master}   spec.containers{myweb}  Normal   Pulling            pulling image "192.168.110.133:5000/nginx:1.13"
  12m        6m        6      {kubelet k8s-master}   spec.containers{myweb}  Warning  Failed             Failed to pull image "192.168.110.133:5000/nginx:1.13": Error while pulling image: Get http://192.168.110.133:5000/v1/repositories/nginx/images: dial tcp 192.168.110.133:5000: connect: connection refused
  12m        6m        6      {kubelet k8s-master}                           Warning  FailedSync         Error syncing pod, skipping: failed to "StartContainer" for "myweb" with ErrImagePull: "Error while pulling image: Get http://192.168.110.133:5000/v1/repositories/nginx/images: dial tcp 192.168.110.133:5000: connect: connection refused"
  12m        4m        31     {kubelet k8s-master}   spec.containers{myweb}  Normal   BackOff            Back-off pulling image "192.168.110.133:5000/nginx:1.13"
  12m        4m        31     {kubelet k8s-master}                           Warning  FailedSync         Error syncing pod, skipping: failed to "StartContainer" for "myweb" with ImagePullBackOff: "Back-off pulling image \"192.168.110.133:5000/nginx:1.13\""
  32s        32s       1      {kubelet k8s-master}                           Warning  MissingClusterDNS  kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  31s        20s       2      {kubelet k8s-master}   spec.containers{myweb}  Normal   Pulling            pulling image "192.168.110.133:5000/nginx:1.13"
  31s        20s       2      {kubelet k8s-master}   spec.containers{myweb}  Warning  Failed             Failed to pull image "192.168.110.133:5000/nginx:1.13": Error while pulling image: Get http://192.168.110.133:5000/v1/repositories/nginx/images: dial tcp 192.168.110.133:5000: connect: connection refused
  31s        20s       2      {kubelet k8s-master}                           Warning  FailedSync         Error syncing pod, skipping: failed to "StartContainer" for "myweb" with ErrImagePull: "Error while pulling image: Get http://192.168.110.133:5000/v1/repositories/nginx/images: dial tcp 192.168.110.133:5000: connect: connection refused"
  30s        8s        2      {kubelet k8s-master}   spec.containers{myweb}  Normal   BackOff            Back-off pulling image "192.168.110.133:5000/nginx:1.13"
  30s        8s        2      {kubelet k8s-master}                           Warning  FailedSync         Error syncing pod, skipping: failed to "StartContainer" for "myweb" with ImagePullBackOff: "Back-off pulling image \"192.168.110.133:5000/nginx:1.13\""
At this point, delete the mistagged Docker image from the master node.
[root@k8s-master ~]# docker images
REPOSITORY                                            TAG      IMAGE ID       CREATED        SIZE
docker.io/busybox                                     latest   1c35c4412082   8 days ago     1.22 MB
docker.io/registry                                    latest   708bc6af7e5e   4 months ago   25.8 MB
192.168.110.133:5000/ngnix                            1.13     ae513a47849c   2 years ago    109 MB
registry.access.redhat.com/rhel7/pod-infrastructure   latest   99965fb98423   2 years ago    209 MB
192.168.110.133:5000/pod-infrastructure               latest   34d3450d733b   3 years ago    205 MB
[root@k8s-master ~]# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
[root@k8s-master ~]# docker ps -a
CONTAINER ID   IMAGE      COMMAND                  CREATED      STATUS                     PORTS   NAMES
a27987d97039   registry   "/entrypoint.sh /e..."   5 days ago   Exited (2) 3 days ago              registry
ee95778bd5d9   busybox    "sh"                     6 days ago   Exited (127) 6 days ago            friendly_payne
6d459781a3e5   busybox    "sh"                     6 days ago   Exited (137) 5 days ago            gracious_nightingale
[root@k8s-master ~]# docker rmi -f ae513a47849c
Untagged: 192.168.110.133:5000/ngnix:1.13
Untagged: 192.168.110.133:5000/ngnix@sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90
Deleted: sha256:ae513a47849c895a155ddfb868d6ba247f60240ec8495482eca74c4a2c13a881
Deleted: sha256:160a8bd939a9421818f499ba4fbfaca3dd5c86ad7a6b97b6889149fd39bd91dd
Deleted: sha256:f246685cc80c2faa655ba1ec9f0a35d44e52b6f83863dc16f46c5bca149bfefc
Deleted: sha256:d626a8ad97a1f9c1f2c4db3814751ada64f60aed927764a3f994fcd88363b659
[root@k8s-master ~]# docker images
REPOSITORY                                            TAG      IMAGE ID       CREATED        SIZE
docker.io/busybox                                     latest   1c35c4412082   8 days ago     1.22 MB
docker.io/registry                                    latest   708bc6af7e5e   4 months ago   25.8 MB
registry.access.redhat.com/rhel7/pod-infrastructure   latest   99965fb98423   2 years ago    209 MB
192.168.110.133:5000/pod-infrastructure               latest   34d3450d733b   3 years ago    205 MB
[root@k8s-master ~]#
Then restart the three nodes, or restart the services; I simply rebooted all three machines, so every service restarted.
[root@k8s-master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           2.2G        880M        453M         12M        953M        1.1G
Swap:          2.0G          0B        2.0G
[root@k8s-master ~]# kubectl get node
NAME         STATUS    AGE
k8s-master   Ready     6d
k8s-node2    Ready     6d
k8s-node3    Ready     6d
[root@k8s-master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
myweb     2         2         2         16m
[root@k8s-master ~]# docker images
REPOSITORY                                            TAG      IMAGE ID       CREATED        SIZE
docker.io/nginx                                       latest   2622e6cca7eb   41 hours ago   132 MB
docker.io/busybox                                     latest   1c35c4412082   8 days ago     1.22 MB
docker.io/registry                                    latest   708bc6af7e5e   4 months ago   25.8 MB
192.168.110.133:5000/nginx                            1.13     ae513a47849c   2 years ago    109 MB
docker.io/nginx                                       1.13     ae513a47849c   2 years ago    109 MB
registry.access.redhat.com/rhel7/pod-infrastructure   latest   99965fb98423   2 years ago    209 MB
192.168.110.133:5000/pod-infrastructure               latest   34d3450d733b   3 years ago    205 MB
[root@k8s-master ~]# kubectl get node -o wide
NAME         STATUS    AGE       EXTERNAL-IP
k8s-master   Ready     6d        <none>
k8s-node2    Ready     6d        <none>
k8s-node3    Ready     6d        <none>
[root@k8s-master ~]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
myweb-2h8b1   1/1       Running   1          17m       172.16.85.2   k8s-master
myweb-lfkmp   1/1       Running   1          17m       172.16.5.2    k8s-node2
test4         1/1       Running   1          13m       172.16.38.2   k8s-node3
[root@k8s-master ~]#
The RC (Replication Controller) always keeps the Pod count at 2. Delete a Pod yourself and the RC will start a new one for you: it constantly monitors Pod state, starting Pods when there are too few and deleting them when there are too many, so the count always matches the replicas declared in the YAML config file.
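The logic the RC applies can be pictured as a simple reconcile loop: compare the desired replica count with what is actually running, then start or delete Pods to close the gap. Here is a toy sketch in Python (purely illustrative, not real Kubernetes code; the helper functions and pod names are made up):

```python
# Toy sketch of an RC-style reconcile loop: compare desired vs. actual,
# then fix the difference. Not real Kubernetes code.

def reconcile(desired_replicas, running_pods, start_pod, delete_pod):
    """Bring the number of running pods to desired_replicas.

    running_pods is a list of pod names, oldest first.
    start_pod() returns a new pod name; delete_pod(name) removes one.
    """
    pods = list(running_pods)
    while len(pods) < desired_replicas:   # too few: start new pods
        pods.append(start_pod())
    while len(pods) > desired_replicas:   # too many: delete the youngest
        delete_pod(pods.pop())
    return pods

# Simulate: the RC wants 2 replicas, but one pod just died on a node.
counter = [0]
def start_pod():
    counter[0] += 1
    return f"myweb-new{counter[0]}"

deleted = []
pods = reconcile(2, ["myweb-qcgjl"], start_pod, deleted.append)
print(pods)   # ['myweb-qcgjl', 'myweb-new1']
```

A real RC does this continuously inside the controller manager, reacting to every Pod state change rather than being invoked once.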
3. How does the RC (Replication Controller) get associated with Pods?
Answer: through labels (the label selector). In the nginx_rc.yaml config file, the RC's selector picks Pods by the label app: myweb, and every Pod it launches automatically carries the label app: myweb, so the RC selects its Pods purely by label.
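Conceptually, selector matching just means: every key/value pair in the RC's selector must appear among the Pod's labels. A small illustrative Python sketch (the pod names come from the transcripts in this post; this is not how Kubernetes is actually implemented):

```python
# Illustrative sketch of label-selector matching: an RC "owns" every pod
# whose labels contain all of the selector's key/value pairs.

def matches(selector, pod_labels):
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"app": "myweb"}
pods = {
    "myweb-2h8b1": {"app": "myweb"},
    "myweb-lfkmp": {"app": "myweb"},
    "test4":       {"app": "test4"},
}

owned = [name for name, labels in pods.items() if matches(selector, labels)]
print(owned)   # ['myweb-2h8b1', 'myweb-lfkmp']
```

This is also why the template's labels must match the selector: if they didn't, the RC would never recognize the Pods it starts as its own.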
You can inspect the labels with the command kubectl describe pod myweb-2h8b1.
[root@k8s-master rc]# kubectl describe pod myweb-2h8b1
Name:           myweb-2h8b1
Namespace:      default
Node:           k8s-master/192.168.110.133
Start Time:     Thu, 11 Jun 2020 17:51:06 +0800
Labels:         app=myweb
Status:         Running
IP:             172.16.85.2
Controllers:    ReplicationController/myweb
Containers:
  myweb:
    Container ID:   docker://27a9e6dfb65be540bb50c98d820a5b773c0ed01d09d2350baf6027cdf9e22257
    Image:          192.168.110.133:5000/nginx:1.13
    Image ID:       docker-pullable://docker.io/nginx@sha256:b1d09e9718890e6ebbbd2bc319ef1611559e30ce1b6f56b2e3b479d9da51dc35
    Port:           80/TCP
    State:          Running
      Started:      Thu, 11 Jun 2020 18:06:34 +0800
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 11 Jun 2020 18:01:41 +0800
      Finished:     Thu, 11 Jun 2020 18:05:42 +0800
    Ready:          True
    Restart Count:  1
    Volume Mounts:  <none>
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
No volumes.
QoS Class:      BestEffort
Tolerations:    <none>
No events.
[root@k8s-master rc]#
You can also view the RC's label selector:
[root@k8s-master rc]# kubectl get rc -o wide
NAME      DESIRED   CURRENT   READY     AGE       CONTAINER(S)   IMAGE(S)                          SELECTOR
myweb     2         2         2         3h        myweb          192.168.110.133:5000/nginx:1.13   app=myweb
[root@k8s-master rc]#
The RC (Replication Controller) selects Pods via labels (the label selector); the labels decide which Pods the RC considers its own to manage.
You can test this by modifying an already-created Pod: if more Pods than the desired count end up carrying the selector's label, one gets deleted, and note that it is the youngest Pod that is removed. The command kubectl edit pod test4 lets you modify a running Pod.
[root@k8s-master rc]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/myweb   2         2         2         3h

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.254.0.1   <none>        443/TCP   6d

NAME             READY     STATUS    RESTARTS   AGE
po/myweb-2h8b1   1/1       Running   1          3h
po/myweb-lfkmp   1/1       Running   1          3h
po/test4         1/1       Running   1          3h
[root@k8s-master rc]# kubectl edit pod test4
pod "test4" edited
[root@k8s-master rc]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/myweb   2         2         2         3h

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.254.0.1   <none>        443/TCP   6d

NAME             READY     STATUS    RESTARTS   AGE
po/myweb-lfkmp   1/1       Running   1          3h
po/test4         1/1       Running   1          3h
[root@k8s-master rc]#
4. Rolling upgrades with the RC (Replication Controller).
Answer: a rolling upgrade is a smooth, gradual way to upgrade: Pods are replaced step by step so the system as a whole stays stable, and problems can be spotted and corrected early in the upgrade, before their impact spreads. The Kubernetes rolling-upgrade commands are shown below.
First, copy the config file nginx_rc.yaml to nginx_rc2.yaml, then edit nginx_rc2.yaml and replace myweb with myweb2.
[root@k8s-master rc]# cp nginx_rc.yaml nginx_rc2.yaml
[root@k8s-master rc]# ls
nginx_rc2.yaml  nginx_rc.yaml
[root@k8s-master rc]# vim nginx_rc2.yaml
[root@k8s-master rc]#
Replace myweb with myweb2 throughout, and then update the image tag in the file to the newer version.
Pull the newer Nginx image down (e.g. docker pull docker.io/nginx:1.15, as used below), then push it to the private registry so it is easy for the nodes to download.
One thing to note: when I originally set up the Docker image accelerator, I put the configuration in /etc/sysconfig/docker on all three machines.
[root@k8s-node3 ~]# vim /etc/sysconfig/docker
The image accelerator and private-registry address I added in that file did not seem to take effect.
So instead, on all three machines, configure the Docker image accelerator and private registry as shown below.
[root@k8s-node3 ~]# docker pull docker.io/nginx:1.15
Trying to pull repository docker.io/library/nginx ...
Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Modify the config file; on all three machines, edit /etc/docker/daemon.json (vim /etc/docker/daemon.json) with the following content:
{
  "insecure-registries": ["192.168.110.133:5000"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
Then restart the Docker service on all three machines with systemctl restart docker. If an image really will not download, find one yourself on https://hub.docker.com/.
[root@k8s-master ~]# docker pull docker.io/nginx:1.15
Trying to pull repository docker.io/library/nginx ...
sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68: Pulling from docker.io/library/nginx
743f2d6c1f65: Pull complete
6bfc4ec4420a: Pull complete
688a776db95f: Pull complete
Digest: sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68
Status: Downloaded newer image for docker.io/nginx:1.15
[root@k8s-master ~]#
Then push nginx:1.15 to the private registry. You can run docker images to confirm the private-registry tag has been created.
[root@k8s-master rc]# docker images
REPOSITORY                                            TAG      IMAGE ID       CREATED         SIZE
docker.io/busybox                                     latest   1c35c4412082   8 days ago      1.22 MB
docker.io/registry                                    latest   708bc6af7e5e   4 months ago    25.8 MB
docker.io/nginx                                       1.15     53f3fd8007f7   13 months ago   109 MB
192.168.110.133:5000/nginx                            1.13     ae513a47849c   2 years ago     109 MB
docker.io/nginx                                       1.13     ae513a47849c   2 years ago     109 MB
registry.access.redhat.com/rhel7/pod-infrastructure   latest   99965fb98423   2 years ago     209 MB
192.168.110.133:5000/pod-infrastructure               latest   34d3450d733b   3 years ago     205 MB
[root@k8s-master rc]# docker tag docker.io/nginx:1.15 192.168.110.133:5000/nginx:1.15
[root@k8s-master rc]# docker push 192.168.110.133:5000/nginx:1.15
The push refers to a repository [192.168.110.133:5000/nginx]
Put http://192.168.110.133:5000/v1/repositories/nginx/: dial tcp 192.168.110.133:5000: connect: connection refused
[root@k8s-master rc]# docker images
REPOSITORY                                            TAG      IMAGE ID       CREATED         SIZE
docker.io/busybox                                     latest   1c35c4412082   8 days ago      1.22 MB
docker.io/registry                                    latest   708bc6af7e5e   4 months ago    25.8 MB
docker.io/nginx                                       1.15     53f3fd8007f7   13 months ago   109 MB
192.168.110.133:5000/nginx                            1.15     53f3fd8007f7   13 months ago   109 MB
docker.io/nginx                                       1.13     ae513a47849c   2 years ago     109 MB
192.168.110.133:5000/nginx                            1.13     ae513a47849c   2 years ago     109 MB
registry.access.redhat.com/rhel7/pod-infrastructure   latest   99965fb98423   2 years ago     209 MB
192.168.110.133:5000/pod-infrastructure               latest   34d3450d733b   3 years ago     205 MB
[root@k8s-master rc]#
Note that if you want to check whether a node already has the private-registry image you need, you can look at its local images like this:
[root@k8s-node3 docker]# docker images
REPOSITORY                                TAG      IMAGE ID       CREATED         SIZE
docker.io/busybox                         latest   1c35c4412082   9 days ago      1.22 MB
192.168.110.133:5000/nginx                1.15     53f3fd8007f7   13 months ago   109 MB
docker.io/nginx                           1.15     53f3fd8007f7   13 months ago   109 MB
192.168.110.133:5000/nginx                1.13     ae513a47849c   2 years ago     109 MB
docker.io/tianyebj/pod-infrastructure     latest   34d3450d733b   3 years ago     205 MB
192.168.110.133:5000/pod-infrastructure   latest   34d3450d733b   3 years ago     205 MB
[root@k8s-node3 docker]#
One more note on the image accelerator and private-registry configuration: my Docker version is 1.13.1.
[root@k8s-node3 ~]# docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-161.git64e9980.el7_8.x86_64
 Go version:      go1.10.3
 Git commit:      64e9980/1.13.1
 Built:           Tue Apr 28 14:43:01 2020
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-161.git64e9980.el7_8.x86_64
 Go version:      go1.10.3
 Git commit:      64e9980/1.13.1
 Built:           Tue Apr 28 14:43:01 2020
 OS/Arch:         linux/amd64
 Experimental:    false
[root@k8s-node3 ~]#
As for the accelerator and private-registry configuration: since I no longer needed to download anything from the internet, I renamed the /etc/docker/daemon.json files on all three machines with a _bak suffix; they are no longer in use.
[root@k8s-node3 ~]# cat /etc/docker/daemon.json_bak
{ "insecure-registries":["192.168.110.133:5000"]}
[root@k8s-node3 ~]# cd /etc/docker/
[root@k8s-node3 docker]# ls
certs.d  daemon.json_20200612  daemon.json_bak  key.json  seccomp.json
[root@k8s-node3 docker]#
If the accelerator and private-registry configuration is wrong, it can really torture you, which is why I keep bringing it up. Note that at this point the three machines also still carry accelerator and private-registry settings in the file below.
[root@k8s-master docker]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
# OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
# Trust the private registry; image accelerator
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.110.133:5000'

if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

# Do not add registries in this file anymore. Use /etc/containers/registries.conf
# instead. For more information reference the registries.conf(5) man page.

# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp

# Controls the /etc/cron.daily/docker-logrotate cron job status.
# To disable, uncomment the line below.
# LOGROTATE=false

# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below lines
#DOCKERBINARY=/usr/bin/docker-latest
#DOCKERDBINARY=/usr/bin/dockerd-latest
#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest
[root@k8s-master docker]#
After that long detour, it is finally time for the rolling upgrade. The Kubernetes command is as follows:
[root@k8s-master rc]# kubectl rolling-update myweb -f nginx_rc2.yaml --update-period=10s
Created myweb2
Scaling up myweb2 from 0 to 2, scaling down myweb from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling myweb2 up to 1
Scaling myweb down to 1
Scaling myweb2 up to 2
Scaling myweb down to 0
Update succeeded. Deleting myweb
replicationcontroller "myweb" rolling updated to "myweb2"
[root@k8s-master rc]#
This creates a myweb2 RC, scales myweb2's Pod count from 0 up to 2, and scales myweb's from 2 down to 0. Each time a new myweb2 Pod has been up for the update period, one myweb Pod is deleted; since the myweb container runs nginx, the deletion itself also takes a little while.
When the upgrade starts, it first creates the V2 RC from the provided definition file, then every 10s (--update-period=10s) it gradually increases the number of V2 Pod replicas and decreases the number of V1 Pods. Once the upgrade completes, it deletes the V1 RC and keeps the V2 RC, completing the rolling upgrade.
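The scaling schedule itself is easy to picture: scale the new RC up by one Pod, then the old RC down by one, repeating until the new RC owns all the replicas. A toy Python sketch that reproduces the progress messages from the transcript (illustrative only, not kubectl's actual implementation):

```python
# Toy sketch of the rolling-update scaling schedule: scale the new RC up
# by one, then the old RC down by one, until the new RC owns all replicas.
# It merely mirrors the "Scaling ... up to / down to" messages kubectl prints.

def rolling_update(old_name, new_name, replicas):
    old, new = replicas, 0
    steps = []
    while new < replicas:
        new += 1
        steps.append(f"Scaling {new_name} up to {new}")
        old -= 1
        steps.append(f"Scaling {old_name} down to {old}")
    steps.append(f"Update succeeded. Deleting {old_name}")
    return steps

for line in rolling_update("myweb", "myweb2", 2):
    print(line)
```

In the real command, kubectl additionally waits --update-period between steps and checks that the new Pods are healthy before scaling the old RC down further.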
[root@k8s-master ~]# kubectl get pod -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP            NODE
myweb2-f5400   1/1       Running   0          15s       172.16.38.3   k8s-node3
myweb2-mg9sk   1/1       Running   0          26s       172.16.85.2   k8s-master
[root@k8s-master ~]#
After upgrading, you can also roll back, as shown below:
[root@k8s-master rc]# kubectl rolling-update myweb2 -f nginx_rc.yaml --update-period=10s
Created myweb
Scaling up myweb from 0 to 2, scaling down myweb2 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling myweb up to 1
Scaling myweb2 down to 1
Scaling myweb up to 2
Scaling myweb2 down to 0
Update succeeded. Deleting myweb2
replicationcontroller "myweb2" rolling updated to "myweb"
[root@k8s-master rc]#
If the upgrade hits an error and exits partway through, you can choose to resume it: Kubernetes can intelligently work out the state the upgrade was in when it was interrupted and continue from there. You can also abort and roll back, with the following command:
[root@k8s-master rc]# kubectl rolling-update myweb myweb2 --update-period=10s --rollback
Setting "myweb" replicas to 2
Continuing update with existing controller myweb.
Scaling up myweb from 2 to 2, scaling down myweb2 from 1 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling myweb2 down to 0
Update succeeded. Deleting myweb2
replicationcontroller "myweb" rolling updated to "myweb2"
[root@k8s-master rc]#