CentOS 7 version
- 7.9
- kernel 5.4.138-1.el7.elrepo.x86_64
Kubernetes version
- v1.21.3
- 1. k8s fails to pull the coredns image -> aliyun registry has no coredns:v1.8.0
- 2. k8s initialization fails -> no aliyun coredns image
- 3. Worker node fails to join the cluster
- 4. kubectl commands report "The connection to the server localhost:8080 was refused"
- 5. After master init, coredns is stuck in Pending
- 6. After master init, kubectl get cs reports Unhealthy
- 7. Installing the flannel network plugin - k8s version v1.21.x
- 8. After master init, flannel is in Init:ImagePullBackOff
- 9. Worker node stuck in NotReady
- More issues
1. k8s fails to pull the coredns image -> aliyun registry has no coredns:v1.8.0
After installing kubeadm/kubectl/kubelet, list the images the release needs with kubeadm config images list --kubernetes-version=v1.21.3.
Pulling each of them via docker pull registry.aliyuncs.com/google_containers/${kube_image}:v1.21.3 goes smoothly, except for coredns. Some searching on Docker Hub turned up a coredns/coredns:1.8.0 image; pull it with docker pull coredns/coredns:1.8.0, then retag it with docker tag as k8s.gcr.io/coredns/coredns:v1.8.0.
Problem - solution
The aliyun mirror of the Google registry has no coredns/coredns:v1.8.0 image, which is a real trap: newer coredns releases renamed the image (it now lives under coredns/coredns), so the pull script failed run after run until the matching version turned up on Docker Hub, as shown below.
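A minimal sketch of that workaround (k8s.gcr.io/coredns/coredns:v1.8.0 is the image name kubeadm expects for v1.21.x):
# pull the image from Docker Hub, then retag it under the name kubeadm looks for
docker pull coredns/coredns:1.8.0
docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0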
2. k8s initialization fails -> no aliyun coredns image
kubeadm init kept failing, complaining that the corresponding aliyun coredns image could not be found. The initialization command:
version=v1.21.3
master_ip=192.168.181.xxx
POD_NETWORK=10.244.0.0
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version ${version} --apiserver-advertise-address ${master_ip} --pod-network-cidr=${POD_NETWORK}/16 --token-ttl 0
The failure message points at this image name: registry.aliyuncs.com/google_containers/coredns:v1.8.0
The pull step never tagged a coredns image under the aliyun name, so create that tag from the image we already have:
docker tag k8s.gcr.io/coredns/coredns:v1.8.0 registry.aliyuncs.com/google_containers/coredns:v1.8.0
Run the init command again; this time initialization succeeds and you can move on to the next step.
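On success, kubeadm prints follow-up instructions for configuring kubectl on the master; a minimal version of those standard steps:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config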
3. Worker node fails to join the cluster
To generate the join command, run on the master node:
kubeadm token create --print-join-command
> kubeadm join 192.168.181.135:6443 --token 2ihfvx.pzoqbw5fwf7ioxwb --discovery-token-ca-cert-hash sha256:4ab7fad27a2ca3d5fcca52209b887cc8b761fb8e1ff6fca1937c8a9360504d19
With Docker installed on the worker machine, the k8s environment prepared (firewall, SELinux, swap, kernel parameters), kubeadm/kubectl/kubelet installed, and the k8s images loaded via docker load, the join for node 136 was run as below, on the master node (it should actually be run on the worker, and target the master's address 192.168.181.135:6443 rather than the worker's own IP):
kubeadm join 192.168.181.136:6443 --token 2ihfvx.pzoqbw5fwf7ioxwb --discovery-token-ca-cert-hash sha256:4ab7fad27a2ca3d5fcca52209b887cc8b761fb8e1ff6fca1937c8a9360504d19
# The errors reported:
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Running kubeadm join on the worker node can fail for the reasons below.
Cause 1: cluster clocks are out of sync
date # verified on every node; the clocks really were out of sync. Run the commands below on each node to sync time, ideally with a cron job to resync periodically
yum install -y ntpdate
ntpdate cn.pool.ntp.org
crontab -e
---
*/20 * * * * /usr/bin/ntpdate -u cn.pool.ntp.org
Cause 2: the token has expired
# on the master, list the existing tokens
kubeadm token list
# on the master, create a new token
kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' # the hash is identical across the whole cluster
# on the worker, join again; 192.168.181.135:6443 must be the master's API server address on port 6443
kubeadm join 192.168.181.135:6443 --token mct6a0.vzwbybzryu70tb17 --discovery-token-ca-cert-hash sha256:4ab7fad27a2ca3d5fcca52209b887cc8b761fb8e1ff6fca1937c8a9360504d19
Cause 3: the firewall was not disabled
# Rarely the culprit, because the firewall is normally disabled while preparing the k8s environment
# on the worker, check that the master's port is reachable
nc -vz 192.168.181.135 6443
# on master and workers, if the connection is refused, check the firewall state
firewall-cmd --state
# on master and workers, stop the firewall
systemctl stop firewalld.service
# on master and workers, disable it at boot
systemctl disable firewalld.service
# on the worker, join the cluster again
kubeadm join xxx
4. kubectl commands report "The connection to the server localhost:8080 was refused"
kubectl get pods
Without a kubeconfig, kubectl falls back to localhost:8080, where no API server is listening; copy the master's admin.conf to the worker nodes.
On the master:
# copy admin.conf; run these commands on the master node
scp /etc/kubernetes/admin.conf node1:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf node2:/etc/kubernetes/admin.conf
On the worker nodes:
# point kubectl at the kubeconfig file
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
5. After master init, coredns is stuck in Pending
kubectl get pods --all-namespaces
The output shows both coredns pods stuck in Pending.
Cause 1: the flannel image pull failed on the master, so the pod network is unavailable and coredns cannot get an IP
Fix: see the flannel image-pull issue (issue 8) below
Cause 2: local name resolution is missing, which also leaves coredns Pending
Fix: edit /etc/hosts on every node and add an "ip hostname" entry for each cluster node, as in the example below
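Example /etc/hosts entries matching this article's hostnames (node2's address is an assumption; substitute your real IPs):
192.168.181.135 master
192.168.181.136 node1
192.168.181.137 node2 # assumed IP, replace with your node2's address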
6. After master init, kubectl get cs reports Unhealthy
[root@master kubernetes]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
Edit kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests, find the line containing port=0 in each file, and comment it out. The kubelet notices the manifest change and restarts the static pods, after which the status flips to Healthy by itself:
[root@master manifests]# vi kube-controller-manager.yaml
[root@master manifests]# vi kube-scheduler.yaml
[root@master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
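The same edit as a scripted sketch, assuming the v1.21 manifests contain a literal "- --port=0" line (back them up first and give the static pods a few seconds to restart):
cd /etc/kubernetes/manifests
cp kube-scheduler.yaml kube-controller-manager.yaml /tmp/ # backup copies
sed -i 's|\(- --port=0\)|# \1|' kube-scheduler.yaml kube-controller-manager.yaml
kubectl get cs # re-check once the pods have restarted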
7. Installing the flannel network plugin - k8s version v1.21.x
export POD_SUBNET=10.244.0.0/16 # must match the --pod-network-cidr passed to kubeadm init (issue 2)
# calico alternative, only if you chose calico instead of flannel:
# kubectl apply -f https://kuboard.cn/install-script/v1.21.x/calico-operator.yaml
wget https://kuboard.cn/install-script/flannel/flannel-v0.14.0.yaml
sed -i "s#10.244.0.0/16#${POD_SUBNET}#" flannel-v0.14.0.yaml
kubectl apply -f ./flannel-v0.14.0.yaml
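To watch the plugin come up (assuming the manifest keeps the upstream app=flannel pod label):
kubectl -n kube-system get pods -l app=flannel -w # Ctrl-C once the pods are Running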
8. After master init, flannel is in Init:ImagePullBackOff
- https://blog.csdn.net/qq_43442524/article/details/105298366
- https://hub.docker.com/u/xwjh ---> complete set of k8s v1.21.3 images
After applying the flannel manifest from the previous issue:
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-59d64cd4d4-6ggnb 0/1 Pending 0 3h49m
coredns-59d64cd4d4-cn27q 0/1 Pending 0 3h49m
etcd-master 1/1 Running 0 3h49m
kube-apiserver-master 1/1 Running 0 3h49m
kube-controller-manager-master 1/1 Running 0 40m
kube-flannel-ds-68xtw 0/1 Init:ImagePullBackOff 0 21m
kube-proxy-dffm5 1/1 Running 0 3h49m
kube-scheduler-master 1/1 Running 0 39m
flannel is in Init:ImagePullBackOff.
Cause
Inspecting flannel-v0.14.0.yaml shows the image quay.io/coreos/flannel:v0.14.0 (lines 169 and 183); the quay.io registry is currently unreachable from mainland China.
Download flannel:v0.14.0 and import it into Docker.
On the master node:
docker pull xwjh/flannel:v0.14.0 # thanks to this Docker Hub user for mirroring flannel where quay.io is unreachable: https://hub.docker.com/u/xwjh
docker tag xwjh/flannel:v0.14.0 quay.io/coreos/flannel:v0.14.0
docker rmi xwjh/flannel:v0.14.0
kubectl get pod -n kube-system
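The image must exist on every node that runs the flannel DaemonSet, not just the master; issue 9 below shows a worker stuck for exactly this reason. A sketch of shipping it to a worker, assuming ssh access to node2:
docker save quay.io/coreos/flannel:v0.14.0 -o flannel-v0.14.0.tar
scp flannel-v0.14.0.tar node2:/tmp/
ssh node2 docker load -i /tmp/flannel-v0.14.0.tar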
9. Worker node stuck in NotReady
Reference:
After all worker nodes have joined, check the cluster status:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 4h46m v1.21.3
node1 Ready <none> 26m v1.21.3
node2 NotReady <none> 14s v1.21.3
# node2 is NotReady; inspect the pod list
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-59d64cd4d4-6ggnb 1/1 Running 0 4h26m
coredns-59d64cd4d4-cn27q 1/1 Running 0 4h26m
etcd-master 1/1 Running 0 4h27m
kube-apiserver-master 1/1 Running 0 4h27m
kube-controller-manager-master 1/1 Running 0 77m
kube-flannel-ds-68xtw 1/1 Running 0 59m
kube-flannel-ds-mlbg5 1/1 Running 0 6m52s
kube-flannel-ds-pd2tr 0/1 Init:0/1 0 5m9s
kube-proxy-52lwk 1/1 Running 0 6m52s
kube-proxy-dffm5 1/1 Running 0 4h26m
kube-proxy-f29m9 1/1 Running 0 5m9s
kube-scheduler-master 1/1 Running 0 77m
# As shown above, one worker's flannel pod is stuck in the Init state
Resolution
- restart kubelet/docker on node2,
- remove the leftover containers,
- rejoin the cluster,
as sketched below. See the reference link for details; the whole procedure can be done from the worker side, the master's Docker does not need restarting, and it has been verified to work.
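A minimal sketch of that recovery sequence on node2 (kubeadm reset wipes the failed join state; generate a fresh join line on the master with kubeadm token create --print-join-command):
kubeadm reset -f # tear down the failed join state
docker ps -aq | xargs -r docker rm -f # remove leftover containers
systemctl restart docker kubelet # restart the runtime and the kubelet
kubeadm join 192.168.181.135:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>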
More issues
Reference: