failed to pull image k8s.gcr.io/kube-controller-manager


 

root@ubuntu:~# kubeadm init --kubernetes-version=v1.18.1  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.10.16.82  --cri-socket /run/containerd/containerd.sock 
W1014 12:00:18.348953   26276 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.1: output: time="2020-10-14T12:02:48+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-apiserver:v1.18.1\": failed to resolve reference \"k8s.gcr.io/kube-apiserver:v1.18.1\": failed to do request: Head https://k8s.gcr.io/v2/kube-apiserver/manifests/v1.18.1: dial tcp 74.125.204.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.18.1: output: time="2020-10-14T12:05:18+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-controller-manager:v1.18.1\": failed to resolve reference \"k8s.gcr.io/kube-controller-manager:v1.18.1\": failed to do request: Head https://k8s.gcr.io/v2/kube-controller-manager/manifests/v1.18.1: dial tcp 108.177.97.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.18.1: output: time="2020-10-14T12:07:48+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-scheduler:v1.18.1\": failed to resolve reference \"k8s.gcr.io/kube-scheduler:v1.18.1\": failed to do request: Head https://k8s.gcr.io/v2/kube-scheduler/manifests/v1.18.1: dial tcp 64.233.188.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.18.1: output: time="2020-10-14T12:10:18+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-proxy:v1.18.1\": failed to resolve reference \"k8s.gcr.io/kube-proxy:v1.18.1\": failed to do request: Head https://k8s.gcr.io/v2/kube-proxy/manifests/v1.18.1: dial tcp 74.125.204.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: time="2020-10-14T12:12:48+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.2: dial tcp 74.125.204.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: time="2020-10-14T12:15:18+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/etcd:3.4.3-0\": failed to resolve reference \"k8s.gcr.io/etcd:3.4.3-0\": failed to do request: Head https://k8s.gcr.io/v2/etcd/manifests/3.4.3-0: dial tcp 108.177.125.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.7: output: time="2020-10-14T12:17:49+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/coredns:1.6.7\": failed to resolve reference \"k8s.gcr.io/coredns:1.6.7\": failed to do request: Head https://k8s.gcr.io/v2/coredns/manifests/1.6.7: dial tcp 74.125.204.82:443: i/o timeout"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
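Note: in the failed run above, kubeadm was pointed at containerd (--cri-socket /run/containerd/containerd.sock), so the preflight image pull goes through containerd rather than the Docker daemon, and images fetched with docker pull will not be visible to it. On a containerd-only node the same pull-and-retag trick can be done with ctr instead; a minimal sketch, assuming the Aliyun google_containers mirror carries the v1.18.1 images:

# pull one required image from a reachable mirror into the k8s.io namespace used by kubelet/kubeadm
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.1
# retag it to the name kubeadm expects, then repeat for the other images listed in the error
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.1 k8s.gcr.io/kube-apiserver:v1.18.1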

 

 

Downloading the k8s images from inside China

 

As is well known, it is not easy to pull images from the k8s.gcr.io registry from inside mainland China.

Option 1: the Azure China mirror site

http://mirror.azure.cn/help/gcr-proxy-cache.html

Global Proxy in China (Azure China mirror):

Original registry           Azure China mirror
docker.io (Docker Hub)      dockerhub.azk8s.cn
gcr.io                      gcr.azk8s.cn
k8s.gcr.io                  gcr.azk8s.cn/google-containers
quay.io                     quay.azk8s.cn
# These two commands are equivalent
docker pull k8s.gcr.io/kube-apiserver:v1.15.2
docker pull gcr.azk8s.cn/google-containers/kube-apiserver:v1.15.2

# These two are also equivalent
docker pull quay.io/xxx/yyy:zzz
docker pull quay.azk8s.cn/xxx/yyy:zzz

Option 2: directly pull the images mirrored by the Docker Hub user mirrorgooglecontainers

At the time of writing, the Docker Hub user mirrorgooglecontainers keeps all of the latest Kubernetes images in sync. Pull from there first, then change the tag.

https://hub.docker.com/u/mirrorgooglecontainers

# These two are also equivalent
docker pull mirrorgooglecontainers/kube-scheduler:v1.15.2
docker pull k8s.gcr.io/kube-scheduler:v1.15.2
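To make a single image available under the name kubeadm expects, pull it from the mirror and retag it; this is the same pattern the batch scripts below automate:

docker pull mirrorgooglecontainers/kube-scheduler:v1.15.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.15.2 k8s.gcr.io/kube-scheduler:v1.15.2
# optionally remove the mirror-named tag afterwards
docker rmi mirrorgooglecontainers/kube-scheduler:v1.15.2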

Option 3: batch downloading with a script

The names of the images to download can be obtained with the kubeadm config images list command:

[root@node-1 yum.repos.d]# kubeadm config images list --kubernetes-version=v1.15.2
k8s.gcr.io/kube-apiserver:v1.15.2
k8s.gcr.io/kube-controller-manager:v1.15.2
k8s.gcr.io/kube-scheduler:v1.15.2
k8s.gcr.io/kube-proxy:v1.15.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Script 1: download via the Azure China mirror

#!/bin/bash
# download k8s 1.15.2 images
# get image list with 'kubeadm config images list --kubernetes-version=v1.15.2'
# gcr.azk8s.cn/google-containers == k8s.gcr.io
images=(
  kube-apiserver:v1.15.2
  kube-controller-manager:v1.15.2
  kube-scheduler:v1.15.2
  kube-proxy:v1.15.2
  pause:3.1
  etcd:3.3.10
  coredns:1.3.1
)
for imageName in ${images[@]}; do
  docker pull gcr.azk8s.cn/google-containers/$imageName
  docker tag gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName
  docker rmi gcr.azk8s.cn/google-containers/$imageName
done

Script 2: download via the Azure China mirror, with the Kubernetes version passed as an argument when running the script

#!/bin/bash
# download k8s images via the Azure China mirror
# get image list with 'kubeadm config images list --kubernetes-version=<version>'
# gcr.azk8s.cn/google-containers == k8s.gcr.io
if [ $# -ne 1 ]; then
  echo "usage: ./`basename $0` KUBERNETES-VERSION"
  exit 1
fi
version=$1
images=`kubeadm config images list --kubernetes-version=${version} | awk -F'/' '{print $2}'`
for imageName in ${images[@]}; do
  docker pull gcr.azk8s.cn/google-containers/$imageName
  docker tag gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName
  docker rmi gcr.azk8s.cn/google-containers/$imageName
done

Script 3: download via the mirrorgooglecontainers images on Docker Hub

#!/bin/bash
# download k8s 1.15.2 images
# get image list with 'kubeadm config images list --kubernetes-version=v1.15.2'
images=(
  kube-apiserver:v1.15.2
  kube-controller-manager:v1.15.2
  kube-scheduler:v1.15.2
  kube-proxy:v1.15.2
  pause:3.1
  etcd:3.3.10
)
for imageName in ${images[@]}; do
  docker pull mirrorgooglecontainers/$imageName
  docker tag mirrorgooglecontainers/$imageName k8s.gcr.io/$imageName
  docker rmi mirrorgooglecontainers/$imageName
done
# coredns is published under its own Docker Hub organization
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1

 

For the v1.18.1 images required by the failed init above, the same pull-and-retag pattern works on ARM64 hosts using the mirrorgcrio images on Docker Hub:

docker pull mirrorgcrio/pause-arm64:3.2
docker pull mirrorgcrio/kube-proxy-arm64:v1.18.1
docker pull mirrorgcrio/kube-controller-manager-arm64:v1.18.1
docker pull mirrorgcrio/kube-apiserver-arm64:v1.18.1
docker pull mirrorgcrio/kube-scheduler-arm64:v1.18.1
docker pull mirrorgcrio/etcd-arm64:3.4.3-0
docker pull coredns/coredns:coredns-arm64


docker tag mirrorgcrio/kube-apiserver-arm64:v1.18.1 k8s.gcr.io/kube-apiserver:v1.18.1
docker tag mirrorgcrio/kube-scheduler-arm64:v1.18.1 k8s.gcr.io/kube-scheduler:v1.18.1
docker tag mirrorgcrio/kube-controller-manager-arm64:v1.18.1 k8s.gcr.io/kube-controller-manager:v1.18.1
docker tag mirrorgcrio/kube-proxy-arm64:v1.18.1 k8s.gcr.io/kube-proxy:v1.18.1
docker tag mirrorgcrio/pause-arm64:3.2 k8s.gcr.io/pause:3.2
docker tag mirrorgcrio/etcd-arm64:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag coredns/coredns:coredns-arm64 k8s.gcr.io/coredns:1.6.7

apt-get install kubeadm=1.18.1-00 kubectl=1.18.1-00 kubelet=1.18.1-00
kubeadm init --kubernetes-version=v1.18.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=14.14.18.6
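If the init now succeeds, kubeadm prints the usual follow-up steps for pointing kubectl at the new cluster; they are typically along these lines:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config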

------------------------------------------------------------------------
flannel images:
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-amd64
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-arm64
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-arm
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-ppc64le
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-s390x


docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-arm64 quay.io/coreos/flannel:v0.12.0-arm64
docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-arm quay.io/coreos/flannel:v0.12.0-arm
docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-ppc64le quay.io/coreos/flannel:v0.12.0-ppc64le
docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-s390x quay.io/coreos/flannel:v0.12.0-s390x

kubectl create -f kube-flannel.yml
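This assumes kube-flannel.yml is already on the node; if not, it can be fetched from the flannel repository first (a hedged example, the exact path for the v0.12.0 tag may differ):

wget https://raw.githubusercontent.com/coreos/flannel/v0.12.0/Documentation/kube-flannel.yml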

 

 

Solution 1

Query the image list:
kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.17.9
k8s.gcr.io/kube-controller-manager:v1.17.9
k8s.gcr.io/kube-scheduler:v1.17.9
k8s.gcr.io/kube-proxy:v1.17.9
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:v3.3.12
k8s.gcr.io/coredns:1.6.9
Download the images:
images=(
  kube-apiserver:v1.17.9
  kube-controller-manager:v1.17.9
  kube-scheduler:v1.17.9
  kube-proxy:v1.17.9
  pause:3.1
  etcd:v3.3.12
  coredns:1.6.9
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
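If the node can reach the Aliyun registry directly, an alternative that skips the manual retagging is to point kubeadm at the mirror with --image-repository; a hedged sketch, assuming the mirror carries the target version:

kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.17.9
# the same flag can also be passed to kubeadm init, e.g.
# kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.17.9 ...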

 

 

 

Solution 2: Docker Hub (docker.io) hosts mirrors of the Google containers, which can be pulled with the following commands:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
 

The version numbers need to be adjusted to match your actual environment. Then use the docker tag command to retag the images:

docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.3 k8s.gcr.io/kube-proxy-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3 k8s.gcr.io/kube-scheduler-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3 k8s.gcr.io/kube-apiserver-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3 k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.18  k8s.gcr.io/etcd-amd64:3.2.18
docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.1.3  k8s.gcr.io/coredns:1.1.3
 
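Once the retagged images are present locally, a quick check and a re-run of the original kubeadm init command should get past the preflight image-pull errors:

# confirm the images now exist under the names kubeadm expects
docker images | grep k8s.gcr.io
# then re-run the kubeadm init command that failed earlier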

