How to fix "gpg: no valid OpenPGP data found."
The installation gets stuck at this step:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Because the server cannot reach Google, it hangs on this curl step. The fix, of course, is to route the request through a proxy.
I have Shadowsocks (SS) running on a server outside China; if you do not have an SS server, a plain ssh tunnel should also work. With SS you need to run ss-local on the local machine, so I suggest running it in Docker: you can throw the container away as soon as you are done, which is very convenient.
The image I use is https://hub.docker.com/r/mritd/shadowsocks/
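If you only have SSH access to the overseas server, the ssh tunnel mentioned above gives you the same kind of local SOCKS5 proxy; a minimal sketch (the host name is a placeholder):
# Open a local SOCKS5 proxy on port 1080 that forwards traffic through the overseas server.
# -N means "do not run a remote command", only keep the tunnel open.
ssh -N -D 1080 user@overseas-host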
Once the overseas server is ready, install tsocks on the domestic server:
apt install -y tsocks
Edit the tsocks configuration: vi /etc/tsocks.conf
server = 127.0.0.1
server_type = 5
server_port = 1080
server is the local IP that ss-local listens on (127.0.0.1 here).
server_type = 5 means SOCKS5.
server_port is the local proxy port; make it the same port ss-local uses.
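Before moving on, it is worth a quick check that requests really go out through the proxy, for example:
# If tsocks and ss-local are working, this prints an HTTP status line instead of hanging.
tsocks curl -sI https://packages.cloud.google.com/apt/doc/apt-key.gpg | head -n 1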
With the proxy ready, start installing the packages:
apt-get update && apt-get install -y apt-transport-https
tsocks curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
tsocks apt-get update
tsocks apt-get install -y kubelet kubeadm kubectl
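If you want the installed packages to match the control-plane version used later (v1.9.3), you can install explicit versions; the -00 revision suffix here is an assumption about how the repository names its packages, so check what apt-cache madison actually lists:
# Show the package versions the Kubernetes apt repository provides.
apt-cache madison kubeadm
# Install a matching set (version strings assumed; use the ones madison printed).
tsocks apt-get install -y kubelet=1.9.3-00 kubeadm=1.9.3-00 kubectl=1.9.3-00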
One thing to watch out for: if the domestic machine is an Aliyun (Alibaba Cloud) instance, its apt sources may point at the internal-network mirror, and running apt update through tsocks will then fail with errors such as
W: The repository 'http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates Release' does not have a Release file.
or
Err:9 http://mirrors.cloud.aliyuncs.com/ubuntu xenial Release
Connection failed
In that case, edit the Aliyun source list, e.g. vi /etc/apt/sources.list.d/source.aliyun.list (the exact file name may differ on your image), and change every http://mirrors.cloud.aliyuncs.com to http://mirrors.aliyun.com; apt update then works through the proxy. Back the file up before editing, and switch back to the internal mirror once you are done, for example with the sed sketch below.
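A minimal sketch of that swap, assuming the file name above; sed -i.bak keeps a backup that can be restored afterwards:
# Point the sources at the public mirror (a .bak copy of the original file is kept).
sed -i.bak 's|mirrors.cloud.aliyuncs.com|mirrors.aliyun.com|g' /etc/apt/sources.list.d/source.aliyun.list
# ... run the tsocks apt-get commands ...
# Restore the internal-network mirror when finished.
mv /etc/apt/sources.list.d/source.aliyun.list.bak /etc/apt/sources.list.d/source.aliyun.list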
After the packages are installed, the next hang is at kubeadm init, and running tsocks kubeadm init does not help:
unable to get URL "https://dl.k8s.io/release/stable-1.9.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.9.txt: dial tcp 172.217.160.112:443: i/o timeout
Here we pin the Kubernetes version explicitly so kubeadm skips that lookup:
kubeadm init --kubernetes-version=1.9.3
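The pinned version should match the kubeadm and kubelet packages installed earlier; a quick way to check:
# Both should report v1.9.3 if the packages match the pinned control-plane version.
kubeadm version
kubelet --version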
If the images have not been prepared in advance, init will usually stall here:
[init] This might take a minute or longer if the control plane images have to be pulled.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- There is no internet connection, so the kubelet cannot pull the following control plane images:
  - gcr.io/google_containers/kube-apiserver-amd64:v1.9.3
  - gcr.io/google_containers/kube-controller-manager-amd64:v1.9.3
  - gcr.io/google_containers/kube-scheduler-amd64:v1.9.3
So we need to prepare the images in advance. My approach is to pull them on the overseas server, push them to hub.docker.com, and then pull them from hub.docker.com onto the domestic server.
#!/bin/bash
# Pull the control-plane images from k8s.gcr.io and push them to your own
# Docker Hub account so they can be fetched from inside China.
ARCH=amd64
version=v1.9.3
username=<username>
#https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
images=(kube-apiserver-${ARCH}:${version} \
        kube-controller-manager-${ARCH}:${version} \
        kube-scheduler-${ARCH}:${version} \
        kube-proxy-${ARCH}:${version} \
        etcd-${ARCH}:3.1.11 \
        pause-${ARCH}:3.0 \
        k8s-dns-sidecar-${ARCH}:1.14.7 \
        k8s-dns-kube-dns-${ARCH}:1.14.7 \
        k8s-dns-dnsmasq-nanny-${ARCH}:1.14.7 \
        )
docker login -u ${username} -p <password>
for image in "${images[@]}"
do
    docker pull k8s.gcr.io/${image}
    docker tag k8s.gcr.io/${image} ${username}/${image}
    docker push ${username}/${image}
    # Clean up the local copies once they have been pushed.
    docker rmi k8s.gcr.io/${image}
    docker rmi ${username}/${image}
done
unset ARCH version images username
This is the ready-made script; run it on the overseas server after replacing <username> and <password> with your hub.docker.com account name and password.
Then run the following script on the domestic server:
#!/bin/bash
# Pull the images back from your Docker Hub account and retag them with the
# gcr.io/google_containers prefix that kubeadm 1.9 expects.
ARCH=amd64
version=v1.9.3
username=<username>
#https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
images=(kube-apiserver-${ARCH}:${version} \
        kube-controller-manager-${ARCH}:${version} \
        kube-scheduler-${ARCH}:${version} \
        kube-proxy-${ARCH}:${version} \
        etcd-${ARCH}:3.1.11 \
        pause-${ARCH}:3.0 \
        k8s-dns-sidecar-${ARCH}:1.14.7 \
        k8s-dns-kube-dns-${ARCH}:1.14.7 \
        k8s-dns-dnsmasq-nanny-${ARCH}:1.14.7 \
        )
for image in "${images[@]}"
do
    docker pull ${username}/${image}
    #docker tag ${username}/${image} k8s.gcr.io/${image}
    docker tag ${username}/${image} gcr.io/google_containers/${image}
    # Drop the Docker Hub tag; only the gcr.io/google_containers tag is kept.
    docker rmi ${username}/${image}
done
unset ARCH version images username
With that, all the images kubernetes needs are in place, and running kubeadm init again goes through without problems.
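Before rerunning init, you can confirm the retagged images are present:
# The control-plane images should now be listed under the gcr.io/google_containers prefix.
docker images | grep gcr.io/google_containers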
One more small trick: while init is running, open another terminal and run
journalctl -f -u kubelet.service
to see exactly what it is stuck on.