The biggest lesson learned: when something goes wrong, first re-watch the video and check whether your steps differ from it; only if no discrepancy turns up, go search the web.
1. On the master node: system initialization, including setting the hostname, configuring the yum repos, installing dependency packages, setting up the firewall, disabling SELinux, tuning kernel parameters, and upgrading the kernel.
2. On the master node: deploying K8s, including configuring kube-proxy, installing docker, configuring a docker image mirror, installing kubeadm, and assigning each VM a static IP.
3. Clone the master node into node1 and node2.
Then initialize the master node, join the remaining nodes, and deploy the pod network.
The problem I ran into was with initializing kubeadm:
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
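The kubeadm-config.yaml referenced by the command above is not shown in this post; as a sketch, a minimal one for this setup would look roughly like the following (the Kubernetes version comes from the install command later in the post; the podSubnet is flannel's default pod CIDR — both are my assumptions, not taken from the original file):

```yaml
# Hypothetical minimal kubeadm-config.yaml for this setup
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  podSubnet: "10.244.0.0/16"   # flannel's default pod CIDR
```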
1. The kubeadm images are hosted on k8s.gcr.io, which is blocked by the Great Firewall.
2. So I configured a proxy for docker. Since pulls are actually performed by the docker daemon, the proxy has to be set globally for the daemon (not just in the shell), followed by a docker restart:
#!/bin/bash
mkdir -p /etc/systemd/system/docker.service.d
cat >> /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://192.168.1.8:1082"
Environment="HTTPS_PROXY=http://192.168.1.8:1082"
EOF
systemctl daemon-reload
systemctl restart docker
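One addition I would make here (my own suggestion, not part of the original steps): with a global daemon proxy, addresses that should not be routed through it, such as localhost and the local network segment, can be excluded with NO_PROXY; the exact list depends on your network:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf — extended sketch
[Service]
Environment="HTTP_PROXY=http://192.168.1.8:1082"
Environment="HTTPS_PROXY=http://192.168.1.8:1082"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.1.0/24"
```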
3. Even with the proxy working, the download speed was too slow, so in the end I copied the image tarballs from another machine and loaded them into docker. My script:
#!/bin/bash
cd /root/kubeadm-basic.images || exit 1
for i in *
do
    docker load -i "$i"
done
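An alternative to copying tarballs around (my own sketch, not what the original did): pull the images from a publicly reachable mirror and retag them to the k8s.gcr.io names kubeadm expects. registry.aliyuncs.com/google_containers is a commonly used mirror; the image tags below are what I believe kubeadm 1.15.1 needs — verify with `kubeadm config images list` on your machine:

```shell
#!/bin/bash
# Pull the kubeadm images from a mirror registry and retag them to the
# k8s.gcr.io names that kubeadm looks for.
pull_from_mirror() {
  local mirror="registry.aliyuncs.com/google_containers"
  local img
  for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
             kube-scheduler:v1.15.1 kube-proxy:v1.15.1 \
             pause:3.1 etcd:3.3.10 coredns:1.3.1; do
    docker pull "$mirror/$img"
    docker tag "$mirror/$img" "k8s.gcr.io/$img"
  done
}
# Usage: pull_from_mirror
```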
4. Initialization failed.
At this point the old state needs to be cleared out; reference: https://blog.csdn.net/lansye/article/details/79984077
# Reset kubeadm
kubeadm reset
# Stop related services
systemctl stop docker kubelet etcd
# Remove all containers
docker rm -f $(sudo docker ps -qa)
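For a fuller teardown (a sketch of my own, not from the original post): `kubeadm reset` warns in its own output that it does not clean up iptables rules or CNI configuration, and the admin kubeconfig also stays behind; the paths below are the kubeadm/CNI defaults:

```shell
#!/bin/bash
# Fuller cleanup after a failed init: reset kubeadm state, then clear the
# iptables rules, CNI configs, and kubeconfig that `kubeadm reset` leaves behind.
full_reset() {
  kubeadm reset -f
  systemctl stop kubelet docker
  iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
  rm -rf /etc/cni/net.d "$HOME/.kube/config"
  systemctl start docker
}
# Usage: full_reset
```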
5. Resolving the init failure: check the logs.
[root@k8s-master01 ~]# journalctl -xeu kubelet
6月 26 10:47:00 k8s-master01 kubelet[29773]: E0626 10:47:00.237357 29773 kubelet.go:2169] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:d
6月 26 10:47:02 k8s-master01 kubelet[29773]: W0626 10:47:02.862214 29773 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
According to a Google search, flannel was not set up properly.
References:
https://github.com/kubernetes/kubeadm/issues/292
https://blog.csdn.net/qq_34857250/article/details/82562514
The fixes suggested online did not work, and I then hit the following:
[root@k8s-master01 ~]# kubectl apply -f kube-flannel.yaml
unable to recognize "kube-flannel.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "kube-flannel.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
I then dug through these threads:
https://github.com/kubernetes/kubeadm/issues/1031
https://github.com/kubernetes/website/pull/16575/files
Still couldn't pin down the problem.
Finally I tried upgrading to 1.18.4, but couldn't get past the firewall, so that install failed. While downgrading back to 1.15.1, I noticed the kubernetes-cni-0.8.6 dependency. Suspecting this dependency's version was the problem, I went back to the original video and saw that it had pulled in kubernetes-cni-0.7.5, so I changed the install command to:
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1 kubernetes-cni-0.7.5
This time the init succeeded, printing:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:494d4e8d714a490d2dd160686b023fed4766b44289e30f1c5b2b20dde21a85bb
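As an aside, if this printout is ever lost, the --discovery-token-ca-cert-hash value can be recomputed from the control plane's CA certificate — it is the SHA-256 digest of the DER-encoded public key. A sketch (the path is the kubeadm default location):

```shell
#!/bin/bash
# Recompute the discovery-token CA cert hash from a CA certificate:
# extract the public key, convert it to DER, and take its SHA-256 digest.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print $NF}'
}
# Usage: ca_cert_hash /etc/kubernetes/pki/ca.crt
```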
Following the prompt, I ran:
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# export KUBECONFIG=$HOME/.kube/config
Applying the network add-on then failed with:
[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
Worked around it by downloading the manifest separately and applying it from the local file:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml
Verified as follows:
[root@k8s-master01 ~]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-czm6h               1/1     Running   0          5h19m
coredns-5c98db65d4-wnwx5               1/1     Running   0          5h19m
etcd-k8s-master01                      1/1     Running   0          5h18m
kube-apiserver-k8s-master01            1/1     Running   0          5h18m
kube-controller-manager-k8s-master01   1/1     Running   0          5h18m
kube-flannel-ds-amd64-b5xmn            1/1     Running   0          9m26s
kube-proxy-xg8ch                       1/1     Running   0          5h19m
kube-scheduler-k8s-master01            1/1     Running   0          5h18m
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   5h20m   v1.15.1
Then run the join command on each of the two worker nodes:
kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:494d4e8d714a490d2dd160686b023fed4766b44289e30f1c5b2b20dde21a85bb
[root@k8s-node01 ~]# kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:494d4e8d714a490d2dd160686b023fed4766b44289e30f1c5b2b20dde21a85bb
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.12. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   8h     v1.15.1
k8s-node01     Ready    <none>   120m   v1.15.1
k8s-node02     Ready    <none>   120m   v1.15.1
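The manual check above can also be scripted; a sketch (function name is mine; assumes kubectl is configured on the machine running it) that polls until the expected number of nodes reports Ready:

```shell
#!/bin/bash
# Poll `kubectl get nodes` until at least $1 nodes are Ready,
# retrying up to $2 times (default 60) with a 5-second pause.
wait_nodes_ready() {
  local expected="$1" tries="${2:-60}" ready
  while [ "$tries" -gt 0 ]; do
    ready=$(kubectl get nodes --no-headers 2>/dev/null | awk '$2 == "Ready"' | wc -l)
    [ "$ready" -ge "$expected" ] && return 0
    sleep 5
    tries=$((tries - 1))
  done
  return 1
}
# Usage: wait_nodes_ready 3   # master + 2 workers
```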