K8s: image acceleration, pulling deployment images, removing/adding cluster nodes, image export/import
Screenshots are attached at the end.
Using the image downloads needed for an initial K8s deployment as the example, this post walks through solutions to the problems listed in the title. First, the server information:
IP             | host              | role
192.168.101.86 | k8s1.dev.lczy.com | master
192.168.100.17 | k8s2.dev.lczy.com | node
I have been meaning to write up the image-pull problems hit during k8s deployment for a while. Since I happen to be redeploying a K8s environment now, here is the write-up to share.
Images that need to be downloaded for version 1.18:
API server and other core components:
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
Network plugin (optional):
quay.io/coreos/flannel:v0.12.0-s390x
quay.io/coreos/flannel:v0.12.0-ppc64le
quay.io/coreos/flannel:v0.12.0-arm64
quay.io/coreos/flannel:v0.12.0-arm
quay.io/coreos/flannel:v0.12.0-amd64
Downloading these images from registries outside China is throttled and very slow.
Solutions:
1. Push them to a public Aliyun image repository so anyone can pull them, for example my shared image:
registry.cn-beijing.aliyuncs.com/yunweijia/etcd:3.4.3-0
2. Download them locally, keep them on removable storage, copy them to the server when needed, and load them into Docker.
3. Run a local container registry, push the images to it, and pull them over the intranet when needed.
Pros and cons:
Method 1: the images are stored permanently and the cloud is reliable, but the first push is slow, later pulls are only average speed, and you still have to re-tag after downloading.
Method 2: copies kept locally are easy to lose, but transfers are fast and no re-tagging is needed.
Method 3: two options, either point the image source at the private Harbor registry, or pull manually and re-tag by hand; pulls are faster than from Aliyun, but you maintain the registry yourself.
Method 1 first:
Aliyun Container Registry:
https://www.aliyun.com/product/acr
You can use the link above to set up your own repository and push your own images. Here are the Aliyun images I have already prepared:
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/etcd:3.4.3-0
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/coredns:1.6.7
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/pause:3.2
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/kube-controller-manager:v1.18.0
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/kube-scheduler:v1.18.0
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/kube-apiserver:v1.18.0
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/kube-proxy:v1.18.0
Re-tag them:
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/kube-proxy:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/kube-apiserver:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/kube-scheduler:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
And of course the network plugin images:
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-s390x
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-ppc64le
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm64
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-amd64
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-s390x quay.io/coreos/flannel:v0.12.0-s390x
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-ppc64le quay.io/coreos/flannel:v0.12.0-ppc64le
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm64 quay.io/coreos/flannel:v0.12.0-arm64
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm quay.io/coreos/flannel:v0.12.0-arm
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
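If typing each pull/tag pair by hand gets tedious, a short loop can do both in one pass. This is only a convenience sketch; it assumes the same registry.cn-beijing.aliyuncs.com/yunweijia namespace and the tags listed above.
# sketch: pull from the Aliyun mirror and retag to the names kubeadm/flannel expect
for img in kube-proxy:v1.18.0 kube-apiserver:v1.18.0 kube-scheduler:v1.18.0 \
           kube-controller-manager:v1.18.0 pause:3.2 coredns:1.6.7 etcd:3.4.3-0; do
  docker pull registry.cn-beijing.aliyuncs.com/yunweijia/${img}
  docker tag  registry.cn-beijing.aliyuncs.com/yunweijia/${img} k8s.gcr.io/${img}
done
for arch in amd64 arm arm64 ppc64le s390x; do
  docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-${arch}
  docker tag  registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-${arch} quay.io/coreos/flannel:v0.12.0-${arch}
done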
Then just run the initialization.
The screenshots are a bit blurry; bear with them. (o( ̄┰ ̄*)ゞ)


Method 2:
If you only want to back up images, save and load are enough.
If a container's contents have changed after it was started and that state needs to be backed up, use export and import instead.
save
docker save -o nginx.tar nginx:latest
or
docker save > nginx.tar nginx:latest
Here -o and > both mean "write to a file"; nginx.tar is the output file and nginx:latest is the source image (name:tag).
load
docker load -i nginx.tar
or
docker load < nginx.tar
Here -i and < both mean "read from a file". The image is imported together with its metadata, including the tag.
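For the container-level backup mentioned above, here is a minimal export/import sketch (assuming a hypothetical running container named web). Note that export captures only the container filesystem and drops image history and tags.
docker export -o web.tar web           # dump the filesystem of container "web" to a tar
docker import web.tar mynginx:backup   # re-import it as a new single-layer image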
------------------------------------------------------
Step 1: get the list of images
[root@k8s1 ~]# docker images |grep k8s.gcr.io|awk '{print $1":"$2}'
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
Step 2: derive each image name and the corresponding export file name
[root@k8s1 ~]# for i in `docker images |grep k8s.gcr.io|awk '{print $1":"$2}'`;do File_Name=`echo $i|awk -F '/' '{print $2}'|awk -F ':' '{print $1}'`;echo "$i $File_Name";done
k8s.gcr.io/kube-proxy:v1.18.0 kube-proxy
k8s.gcr.io/kube-apiserver:v1.18.0 kube-apiserver
k8s.gcr.io/kube-controller-manager:v1.18.0 kube-controller-manager
k8s.gcr.io/kube-scheduler:v1.18.0 kube-scheduler
k8s.gcr.io/pause:3.2 pause
k8s.gcr.io/coredns:1.6.7 coredns
k8s.gcr.io/etcd:3.4.3-0 etcd
Step 3: export the images
[root@k8s1 k8s]# for i in `docker images |grep k8s.gcr.io|awk '{print $1":"$2}'`;do File_Name=`echo $i|awk -F '/' '{print $2}'|awk -F ':' '{print $1}'`;docker save > ${File_Name}.tar $i ;done
[root@k8s1 k8s]# ll
total 867676
-rw-r--r-- 1 root root 43932160 Apr 26 14:54 coredns.tar
-rw-r--r-- 1 root root 290010624 Apr 26 14:54 etcd.tar
-rw-r--r-- 1 root root 174525440 Apr 26 14:53 kube-apiserver.tar
-rw-r--r-- 1 root root 163929088 Apr 26 14:54 kube-controller-manager.tar
-rw-r--r-- 1 root root 118543360 Apr 26 14:53 kube-proxy.tar
-rw-r--r-- 1 root root 96836608 Apr 26 14:53 kube-scheduler.tar
-rw-r--r-- 1 root root 692736 Apr 26 14:54 pause.tar
Step 4: import the images
for i in `ls /opt/k8s/*.tar`;do docker load < $i;done
Be patient while it runs. Pick a directory and run the export there, for example: mkdir k8s_images; cd k8s_images
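As an alternative to one tar per image, docker save also accepts several images at once, so everything can travel as a single archive (a sketch using the same grep filter as above):
docker save -o k8s-v1.18.0-images.tar $(docker images | grep k8s.gcr.io | awk '{print $1":"$2}')
docker load -i k8s-v1.18.0-images.tar   # restores all images with their original tags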
Method 3:
1. Environment preparation
1.1 Docker: brief setup
Install the Docker service:
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://******.mirror.aliyuncs.com"],
  "insecure-registries": [ "hub1.lczy.com" ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
Explanation:
https://******.mirror.aliyuncs.com — the Aliyun registry mirror (accelerator)
"hub1.lczy.com" — the private registry, reached over HTTP; "https://hub1.lczy.com" can be used as well
Mind the Docker version: for a 1.18 K8s cluster the latest validated Docker version is 19.03.
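After restarting Docker, the mirror and cgroup settings can be verified with docker info; a quick check (output field names may vary slightly by Docker version):
docker info | grep -i 'cgroup driver'        # should report: systemd
docker info | grep -A 1 'Registry Mirrors'   # should list the Aliyun mirror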
1.2 Download the images
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/etcd:3.4.3-0
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/coredns:1.6.7
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/pause:3.2
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/kube-controller-manager:v1.18.0
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/kube-scheduler:v1.18.0
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/kube-apiserver:v1.18.0
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/kube-proxy:v1.18.0
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-s390x
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-ppc64le
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm64
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm
docker pull registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-amd64
2. Building the Harbor private registry
2.1 Log in to the private registry
docker login hub1.lczy.com
admin
Harbor12345
If the registry is served over HTTPS and its certificate is not trusted, one workaround is:
echo -n | openssl s_client -showcerts -connect hub1.lczy.com:443 2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' >> /etc/ssl/certs/ca-certificates.crt
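Another option, if you have the Harbor CA certificate file on hand (the source path below is hypothetical), is Docker's per-registry trust directory instead of appending to the system bundle:
mkdir -p /etc/docker/certs.d/hub1.lczy.com
cp /path/to/harbor-ca.crt /etc/docker/certs.d/hub1.lczy.com/ca.crt   # hypothetical path to the Harbor CA
# no daemon restart is normally needed; docker login hub1.lczy.com should now succeed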
2.2 Push to the private registry
Re-tag the images locally and push them. First, list the downloaded images:
[root@k8s1 ~]# docker images |grep 'registry.cn-beijing.aliyuncs.com'|awk '{print $1":"$2}'
registry.cn-beijing.aliyuncs.com/yunweijia/kube-proxy:v1.18.0
registry.cn-beijing.aliyuncs.com/yunweijia/kube-scheduler:v1.18.0
registry.cn-beijing.aliyuncs.com/yunweijia/kube-apiserver:v1.18.0
registry.cn-beijing.aliyuncs.com/yunweijia/kube-controller-manager:v1.18.0
registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-s390x
registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-ppc64le
registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm64
registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm
registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-amd64
registry.cn-beijing.aliyuncs.com/yunweijia/pause:3.2
registry.cn-beijing.aliyuncs.com/yunweijia/coredns:1.6.7
registry.cn-beijing.aliyuncs.com/yunweijia/etcd:3.4.3-0
Print the tag commands:
[root@k8s1 ~]# for i in `docker images |grep 'beijing'|awk '{print $1":"$2}'`;do kb=`echo $i|awk -F '/' '{print $3}'`;echo "docker tag $i hub1.lczy.com/k8s/$kb";done
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/kube-proxy:v1.18.0 hub1.lczy.com/k8s/kube-proxy:v1.18.0
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/kube-controller-manager:v1.18.0 hub1.lczy.com/k8s/kube-controller-manager:v1.18.0
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/kube-scheduler:v1.18.0 hub1.lczy.com/k8s/kube-scheduler:v1.18.0
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/kube-apiserver:v1.18.0 hub1.lczy.com/k8s/kube-apiserver:v1.18.0
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-s390x hub1.lczy.com/k8s/flannel:v0.12.0-s390x
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-ppc64le hub1.lczy.com/k8s/flannel:v0.12.0-ppc64le
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm64 hub1.lczy.com/k8s/flannel:v0.12.0-arm64
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-arm hub1.lczy.com/k8s/flannel:v0.12.0-arm
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/flannel:v0.12.0-amd64 hub1.lczy.com/k8s/flannel:v0.12.0-amd64
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/pause:3.2 hub1.lczy.com/k8s/pause:3.2
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/coredns:1.6.7 hub1.lczy.com/k8s/coredns:1.6.7
docker tag registry.cn-beijing.aliyuncs.com/yunweijia/etcd:3.4.3-0 hub1.lczy.com/k8s/etcd:3.4.3-0
Print the push commands:
[root@k8s1 ~]# for i in `docker images |grep 'hub1'|awk '{print $1":"$2}'`;do echo "docker push $i ";done;
docker push hub1.lczy.com/k8s/kube-proxy:v1.18.0
docker push hub1.lczy.com/k8s/kube-controller-manager:v1.18.0
docker push hub1.lczy.com/k8s/kube-scheduler:v1.18.0
docker push hub1.lczy.com/k8s/kube-apiserver:v1.18.0
docker push hub1.lczy.com/k8s/flannel:v0.12.0-s390x
docker push hub1.lczy.com/k8s/flannel:v0.12.0-ppc64le
docker push hub1.lczy.com/k8s/flannel:v0.12.0-arm64
docker push hub1.lczy.com/k8s/flannel:v0.12.0-arm
docker push hub1.lczy.com/k8s/flannel:v0.12.0-amd64
docker push hub1.lczy.com/k8s/pause:3.2
docker push hub1.lczy.com/k8s/coredns:1.6.7
docker push hub1.lczy.com/k8s/etcd:3.4.3-0
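The two loops above only print the commands for review. Once they look right, the same logic can tag and push directly, for example (a sketch using the same filters as above):
docker images | grep 'beijing' | awk '{print $1":"$2}' | while read i; do
  kb=$(echo "$i" | awk -F '/' '{print $3}')
  docker tag "$i" hub1.lczy.com/k8s/${kb} && docker push hub1.lczy.com/k8s/${kb}
done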
2.3 Brief notes on building the private registry
1. Install docker-compose and download the Harbor offline installer:
curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
Harbor official releases: https://github.com/vmware/harbor/releases
https://github.com/vmware/harbor/releases/download/v1.2.0/harbor-offline-installer-v1.2.0.tgz
tar xvf harbor-offline-installer-v1.2.0.tgz
2. Configure harbor.cfg
a. Required parameters
hostname: the target hostname or fully qualified domain name
ui_url_protocol: http or https; defaults to http
db_password: the root password of the MySQL database used for db_auth; change it before any production use
max_job_workers: (default 3) the maximum number of replication workers in the job service. For each image replication job, a worker syncs all tags of a repository to the remote target. Raising this number allows more concurrent replication jobs, but each worker consumes network/CPU/IO resources, so choose the value based on the host's hardware.
customize_crt: (on or off, default on) when on, the prepare script generates a private key and root certificate for creating and verifying the registry token
ssl_cert: path to the SSL certificate; only used when the protocol is set to https
ssl_cert_key: path to the SSL key; only used when the protocol is set to https
secretkey_path: path of the key used to encrypt or decrypt the remote registry password in a replication policy
3. Create the HTTPS certificate and set directory permissions:
openssl genrsa -des3 -out server.key 2048
openssl req -new -key server.key -out server.csr
cp server.key server.key.org
openssl rsa -in server.key.org -out server.key
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
mkdir /data/cert
chmod -R 777 /data/cert
4. Run the install script:
./install.sh
5. Test access at https://hub1.lczy.com
The default admin username/password is admin / Harbor12345
6. HTTPS vs HTTP
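In short: if Harbor is served over plain HTTP, every Docker client that pulls from it must list the registry under "insecure-registries" in /etc/docker/daemon.json and restart Docker, exactly as configured in section 1.1; with HTTPS, distribute the CA certificate to the clients instead (see 2.1). A minimal check after either change:
docker login hub1.lczy.com   # should succeed without TLS/certificate errors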
3. Using the private registry
3.1 Point kubeadm directly at a mirror repository
kubeadm init --image-repository=registry.aliyuncs.com/google_containers
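With this approach the images can also be pre-fetched before init, which makes pull failures easier to spot (a sketch; adjust the version to match your cluster):
kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.18.0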
3.2 Pull manually and re-tag
——————————pull————————————
docker pull hub1.lczy.com/k8s/kube-proxy:v1.18.0
docker pull hub1.lczy.com/k8s/kube-controller-manager:v1.18.0
docker pull hub1.lczy.com/k8s/kube-scheduler:v1.18.0
docker pull hub1.lczy.com/k8s/kube-apiserver:v1.18.0
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-s390x
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-ppc64le
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-arm64
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-arm
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-amd64
docker pull hub1.lczy.com/k8s/pause:3.2
docker pull hub1.lczy.com/k8s/coredns:1.6.7
docker pull hub1.lczy.com/k8s/etcd:3.4.3-0
——————————tag————————————
[root@k8s1 ~]# for i in `docker images |grep 'hub1'|awk '{print $1":"$2}'`;do kb=`echo $i|awk -F '/' '{print $3}'`;echo "docker tag $i k8s.gcr.io/$kb";done
docker tag hub1.lczy.com/k8s/kube-proxy:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0
docker tag hub1.lczy.com/k8s/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0
docker tag hub1.lczy.com/k8s/kube-apiserver:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0
docker tag hub1.lczy.com/k8s/kube-scheduler:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0
docker tag hub1.lczy.com/k8s/flannel:v0.12.0-s390x k8s.gcr.io/flannel:v0.12.0-s390x
docker tag hub1.lczy.com/k8s/flannel:v0.12.0-ppc64le k8s.gcr.io/flannel:v0.12.0-ppc64le
docker tag hub1.lczy.com/k8s/flannel:v0.12.0-arm64 k8s.gcr.io/flannel:v0.12.0-arm64
docker tag hub1.lczy.com/k8s/flannel:v0.12.0-arm k8s.gcr.io/flannel:v0.12.0-arm
docker tag hub1.lczy.com/k8s/flannel:v0.12.0-amd64 k8s.gcr.io/flannel:v0.12.0-amd64
docker tag hub1.lczy.com/k8s/pause:3.2 k8s.gcr.io/pause:3.2
docker tag hub1.lczy.com/k8s/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag hub1.lczy.com/k8s/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
3.3 Initialization test
[root@k8s1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hub1.lczy.com/k8s/kube-proxy v1.18.0 43940c34f24f 13 months ago 117MB
hub1.lczy.com/k8s/kube-apiserver v1.18.0 74060cea7f70 13 months ago 173MB
hub1.lczy.com/k8s/kube-scheduler v1.18.0 a31f78c7c8ce 13 months ago 95.3MB
hub1.lczy.com/k8s/kube-controller-manager v1.18.0 d3e55153f52f 13 months ago 162MB
hub1.lczy.com/k8s/flannel v0.12.0-s390x 57eade024bfb 13 months ago 56.9MB
hub1.lczy.com/k8s/flannel v0.12.0-ppc64le 9225b871924d 13 months ago 70.3MB
hub1.lczy.com/k8s/flannel v0.12.0-arm64 7cf4a417daaa 13 months ago 53.6MB
hub1.lczy.com/k8s/flannel v0.12.0-arm 767c3d1f8cba 13 months ago 47.8MB
hub1.lczy.com/k8s/flannel v0.12.0-amd64 4e9f801d2217 13 months ago 52.8MB
hub1.lczy.com/k8s/pause 3.2 80d28bedfe5d 14 months ago 683kB
hub1.lczy.com/k8s/coredns 1.6.7 67da37a9a360 15 months ago 43.8MB
hub1.lczy.com/k8s/etcd 3.4.3-0 303ce5db0e90 18 months ago 288MB
(The master does not necessarily need the flannel plugin images.)
[root@k8s1 ~]# kubeadm init --image-repository=hub1.lczy.com/k8s --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
can not mix '--config' with arguments [image-repository]
To see the stack trace of this error execute with --v=5 or higher
It errors out: --config cannot be mixed with --image-repository. Edit the config file directly instead.
cat kubeadm-config.yaml
Original value:
imageRepository: k8s.gcr.io
Change it to:
imageRepository: hub1.lczy.com/k8s
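The edit can also be scripted, which is handy when regenerating the config (a one-liner sketch):
sed -i 's#^imageRepository: .*#imageRepository: hub1.lczy.com/k8s#' kubeadm-config.yaml
grep imageRepository kubeadm-config.yaml   # verify the change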
Try again:
[root@k8s1 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0426 15:07:06.869261 13847 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "SupportIPVSProxyMode"
W0426 15:07:06.870176 13847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Hostname]: hostname "k8s1.dev.lczy.com" could not be reached
[WARNING Hostname]: hostname "k8s1.dev.lczy.com": lookup k8s1.dev.lczy.com on 192.168.168.169:53: no such host
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
Another error. Time to troubleshoot:
[root@k8s1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Mon 2021-04-26 15:07:19 CST; 7min ago
Docs: https://kubernetes.io/docs/
[root@k8s1 ~]# docker ps -a | grep kube | grep -v pause
f9fd62134e6c 303ce5db0e90 "etcd --advertise-cl…" About a minute ago Exited (1) About a minute ago k8s_etcd_etcd-k8s1.dev.lczy.com_kube-system_7916363d00f9b5fe860b436551739261_6
d65cd6aac4e9 74060cea7f70 "kube-apiserver --ad…" 2 minutes ago Exited (2) 2 minutes ago k8s_kube-apiserver_kube-apiserver-k8s1.dev.lczy.com_kube-system_ed56c54b98d34e8731772586f127f86f_5
b56803458bff a31f78c7c8ce "kube-scheduler --au…" 7 minutes ago Up 7 minutes k8s_kube-scheduler_kube-scheduler-k8s1.dev.lczy.com_kube-system_58cabb9b5f97f8700654c0ffc7ec0696_0
a25255200f79 d3e55153f52f "kube-controller-man…" 7 minutes ago Up 7 minutes k8s_kube-controller-manager_kube-controller-manager-k8s1.dev.lczy.com_kube-system_9152e2708b669ca9a567d1606811fca2_0
[root@k8s1 ~]# docker logs f9fd62134e6c
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-04-26 07:13:23.557913 I | etcdmain: etcd Version: 3.4.3
2021-04-26 07:13:23.557970 I | etcdmain: Git SHA: 3cf2f69b5
2021-04-26 07:13:23.557978 I | etcdmain: Go Version: go1.12.12
2021-04-26 07:13:23.557984 I | etcdmain: Go OS/Arch: linux/amd64
2021-04-26 07:13:23.557990 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-04-26 07:13:23.558112 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2021-04-26 07:13:23.558289 C | etcdmain: listen tcp 1.2.3.4:2380: bind: cannot assign requested address
The init config file still had the default advertise address (1.2.3.4); I forgot to change the IP...
Generate the config file:
kubeadm config print init-defaults > kubeadm-config.yaml
Make three changes:
1.
advertiseAddress: 192.168.101.86
2.
imageRepository: hub1.lczy.com/k8s
3. Add the podSubnet line under networking and append the KubeProxyConfiguration block (these are the additions; the remaining lines already exist in the generated defaults):
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
After editing the config, run kubeadm reset. While at it, add the hostnames to /etc/hosts:
[root@k8s1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.101.86 k8s1.dev.lczy.com
192.168.100.17 k8s2.dev.lczy.com
[root@k8s1 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0426 15:23:53.087893 22435 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://1.2.3.4:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0426 15:24:40.494905 22435 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
The reset also cleaned up the containers automatically:
[root@k8s1 ~]# docker ps -a | grep kube | grep -v pause
[root@k8s1 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Run the initialization once more:
[root@k8s1 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0426 15:27:33.169180 23145 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "SupportIPVSProxyMode"
W0426 15:27:33.170594 23145 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
(Watch the Docker version; only certain versions are validated for this K8s release.)
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s1.dev.lczy.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.101.86]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s1.dev.lczy.com localhost] and IPs [192.168.101.86 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s1.dev.lczy.com localhost] and IPs [192.168.101.86 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0426 15:27:40.648222 23145 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0426 15:27:40.649970 23145 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005147 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
1fe7e1c89deec70c5c859fddec371694c381c1986485d6850cc8daec506de3cd
[mark-control-plane] Marking the node k8s1.dev.lczy.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s1.dev.lczy.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.101.86:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:cb9b5cf902055258941e9f5a794ab3e401166663d3482f5f80d6183cf3710ca1
Initialization succeeded. Follow the printed instructions:
[root@k8s1 ~]# mkdir -p $HOME/.kube
[root@k8s1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join command for worker nodes:
kubeadm join 192.168.101.86:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:cb9b5cf902055258941e9f5a794ab3e401166663d3482f5f80d6183cf3710ca1
(The token expires. The next post will cover how to refresh it; it is also easy to find with a quick search.)
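For reference, when the token has expired a fresh join command can usually be printed on the master with:
kubeadm token create --print-join-command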
[root@k8s1 ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-c45695cd8-98r7k 0/1 Pending 0 4m37s
kube-system coredns-c45695cd8-d6knz 0/1 Pending 0 4m37s
kube-system etcd-k8s1.dev.lczy.com 1/1 Running 0 4m46s
kube-system kube-apiserver-k8s1.dev.lczy.com 1/1 Running 0 4m46s
kube-system kube-controller-manager-k8s1.dev.lczy.com 1/1 Running 0 4m47s
kube-system kube-proxy-fgckc 1/1 Running 0 4m37s
kube-system kube-scheduler-k8s1.dev.lczy.com 1/1 Running 0 4m46s
Side note: taking a server out of one K8s cluster and adding it to another. To use kubectl from a worker node, just copy /root/.kube/config to it, as in the sketch below.
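For example, copying the kubeconfig from the master to a node could look like this (a sketch; assumes root SSH access from the master to k8s-node1):
ssh k8s-node1 'mkdir -p /root/.kube'
scp /root/.kube/config k8s-node1:/root/.kube/config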
[root@k8s-node1 ~]# kubectl delete nodes k8s-node1
[root@k8s-node1 ~]# kubeadm reset
·······································
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-node1 ~]# rm -rf /etc/cni/net.d
[root@k8s-node1 ~]# ipvsadm --list
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@k8s-node1 ~]# ipvsadm --clear
[root@k8s-node1 ~]# rm -rf .kube/*
[root@k8s2 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@k8s2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.168.168 hub.lczy.com
192.168.168.169 hub1.lczy.com
192.168.101.86 k8s1.dev.lczy.com
192.168.100.17 k8s2.dev.lczy.com
Join the cluster (note: when copying the command, the leading ">" continuation prompt is not part of it!):
[root@k8s2 ~]# kubeadm join 192.168.101.86:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:cb9b5cf902055258941e9f5a794ab3e401166663d3482f5f80d6183cf3710ca1
W0426 15:52:57.501721 19344 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.0. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s1.dev.lczy.com NotReady master 25m v1.18.0
k8s2.dev.lczy.com NotReady <none> 11s v1.18.0
The flannel manifest:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s1 ~]# cat kube-flannel.yml |grep image
image: quay.io/coreos/flannel:v0.12.0-amd64
image: quay.io/coreos/flannel:v0.12.0-amd64
image: quay.io/coreos/flannel:v0.12.0-arm64
image: quay.io/coreos/flannel:v0.12.0-arm64
image: quay.io/coreos/flannel:v0.12.0-arm
image: quay.io/coreos/flannel:v0.12.0-arm
image: quay.io/coreos/flannel:v0.12.0-ppc64le
image: quay.io/coreos/flannel:v0.12.0-ppc64le
image: quay.io/coreos/flannel:v0.12.0-s390x
image: quay.io/coreos/flannel:v0.12.0-s390x
Batch replace in vim:
vim kube-flannel.yml
:%s/quay.io\/coreos/hub1.lczy.com\/k8s/g
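The same replacement can be done non-interactively, which is convenient in scripts (a sed sketch equivalent to the vim substitution above):
sed -i 's#quay.io/coreos#hub1.lczy.com/k8s#g' kube-flannel.yml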
[root@k8s1 ~]# cat kube-flannel.yml |grep image
image: hub1.lczy.com/k8s/flannel:v0.12.0-amd64
image: hub1.lczy.com/k8s/flannel:v0.12.0-amd64
image: hub1.lczy.com/k8s/flannel:v0.12.0-arm64
image: hub1.lczy.com/k8s/flannel:v0.12.0-arm64
image: hub1.lczy.com/k8s/flannel:v0.12.0-arm
image: hub1.lczy.com/k8s/flannel:v0.12.0-arm
image: hub1.lczy.com/k8s/flannel:v0.12.0-ppc64le
image: hub1.lczy.com/k8s/flannel:v0.12.0-ppc64le
image: hub1.lczy.com/k8s/flannel:v0.12.0-s390x
image: hub1.lczy.com/k8s/flannel:v0.12.0-s390x
[root@k8s1 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-c45695cd8-98r7k 1/1 Running 0 41m 10.244.0.3 k8s1.dev.lczy.com <none> <none>
kube-system coredns-c45695cd8-d6knz 1/1 Running 0 41m 10.244.0.2 k8s1.dev.lczy.com <none> <none>
kube-system etcd-k8s1.dev.lczy.com 1/1 Running 0 42m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-apiserver-k8s1.dev.lczy.com 1/1 Running 0 42m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-controller-manager-k8s1.dev.lczy.com 1/1 Running 0 42m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-flannel-ds-amd64-n4g6d 1/1 Running 0 2m31s 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-flannel-ds-amd64-xv4ms 0/1 Init:0/1 0 2m31s 192.168.100.17 k8s2.dev.lczy.com <none> <none>
kube-system kube-proxy-62v94 0/1 ContainerCreating 0 16m 192.168.100.17 k8s2.dev.lczy.com <none> <none>
kube-system kube-proxy-fgckc 1/1 Running 0 41m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-scheduler-k8s1.dev.lczy.com 1/1 Running 0 42m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
The pods on k8s2.dev.lczy.com have not started, and a check shows the images were never pulled there. They have to be pulled by hand.
The worker node will not pull these images on its own!!! (Worth repeating: it will not pull them on its own.)
[root@k8s2 ~]# docker pull hub1.lczy.com/k8s/coredns:1.6.7
Error response from daemon: pull access denied for hub1.lczy.com/k8s/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
[root@k8s2 ~]# cat .docker/config.json
{
  "auths": {
    "hub1.lczy.com": {
      "auth": "cGFuaGFueGluOjEyM1FXRWFzZA=="
    }
  }
}
[root@k8s2 ~]# rm -rf .docker/config.json
The cached credentials were stale (I had set permissions on that directory myself and the account password had since changed); after removing the old config and logging in again, the pull works:
[root@k8s2 ~]# docker pull hub1.lczy.com/k8s/coredns:1.6.7
1.6.7: Pulling from k8s/coredns
Digest: sha256:695a5e109604331f843d2c435f488bf3f239a88aec49112d452c1cbf87e88405
Status: Downloaded newer image for hub1.lczy.com/k8s/coredns:1.6.7
hub1.lczy.com/k8s/coredns:1.6.7
Run the remaining pulls again:
docker pull hub1.lczy.com/k8s/kube-proxy:v1.18.0
docker pull hub1.lczy.com/k8s/kube-controller-manager:v1.18.0
docker pull hub1.lczy.com/k8s/kube-scheduler:v1.18.0
docker pull hub1.lczy.com/k8s/kube-apiserver:v1.18.0
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-s390x
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-ppc64le
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-arm64
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-arm
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-amd64
docker pull hub1.lczy.com/k8s/pause:3.2
docker pull hub1.lczy.com/k8s/coredns:1.6.7
docker pull hub1.lczy.com/k8s/etcd:3.4.3-0
[root@k8s1 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-c45695cd8-98r7k 1/1 Running 0 49m 10.244.0.3 k8s1.dev.lczy.com <none> <none>
kube-system coredns-c45695cd8-d6knz 1/1 Running 0 49m 10.244.0.2 k8s1.dev.lczy.com <none> <none>
kube-system etcd-k8s1.dev.lczy.com 1/1 Running 0 50m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-apiserver-k8s1.dev.lczy.com 1/1 Running 0 50m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-controller-manager-k8s1.dev.lczy.com 1/1 Running 0 50m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-flannel-ds-amd64-n4g6d 1/1 Running 0 10m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-flannel-ds-amd64-xv4ms 1/1 Running 2 10m 192.168.100.17 k8s2.dev.lczy.com <none> <none>
kube-system kube-proxy-62v94 1/1 Running 0 24m 192.168.100.17 k8s2.dev.lczy.com <none> <none>
kube-system kube-proxy-fgckc 1/1 Running 0 49m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
kube-system kube-scheduler-k8s1.dev.lczy.com 1/1 Running 0 50m 192.168.101.86 k8s1.dev.lczy.com <none> <none>
[root@k8s1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s1.dev.lczy.com Ready master 110m v1.18.0
k8s2.dev.lczy.com Ready <none> 84m v1.18.0
------END--------------
Note 1:
The file
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
changes over time; each download may reference different image names. Consider keeping a backup copy, or edit it yourself to point at image names you can pull conveniently.
For example, the copy downloaded on 2021-04-27:
[root@k8s1 ~]# cat kube-flannel.yml |grep image
image: hub1.lczy.com/k8s/flannel:v0.14.0-rc1
image: hub1.lczy.com/k8s/flannel:v0.14.0-rc1
The copy downloaded in 2020:
[root@k8s1 ~]# cat kube-flannel.yml |grep image
image: quay.io/coreos/flannel:v0.12.0-amd64
image: quay.io/coreos/flannel:v0.12.0-amd64
image: quay.io/coreos/flannel:v0.12.0-arm64
image: quay.io/coreos/flannel:v0.12.0-arm64
image: quay.io/coreos/flannel:v0.12.0-arm
image: quay.io/coreos/flannel:v0.12.0-arm
image: quay.io/coreos/flannel:v0.12.0-ppc64le
image: quay.io/coreos/flannel:v0.12.0-ppc64le
image: quay.io/coreos/flannel:v0.12.0-s390x
image: quay.io/coreos/flannel:v0.12.0-s390x
Only the image reference differs: flannel:v0.14.0-rc1 versus flannel:v0.12.0-amd64.
Note 2: a worker node does not download the images on its own, but the master does (at least in my environment).
docker pull hub1.lczy.com/k8s/kube-proxy:v1.18.0
docker pull hub1.lczy.com/k8s/flannel:v0.12.0-amd64
docker pull hub1.lczy.com/k8s/pause:3.2
Only the three images above are needed on the worker node.

