Deleting a k8s Node and Re-adding It (Pitfalls)


While bringing up my local cluster, I found that one node had gone bad. I wanted to delete it and swap in a replacement, and promptly hit a pitfall. Fortunately I keep several environments locally.

Run the following commands on the master node:

[root@k8s-master ~]# kubectl drain k8s-node01 --delete-local-data --force --ignore-daemonsets
node/k8s-node01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-jz65p, kube-system/kube-proxy-92x28
evicting pod "coredns-58cc8c89f4-88ns2"
evicting pod "nginx-document1-854875f78d-8cbxw"
pod/coredns-58cc8c89f4-88ns2 evicted
pod/nginx-document1-854875f78d-8cbxw evicted
node/k8s-node01 evicted
[root@k8s-master ~]# kubectl delete node  k8s-node01
node "k8s-node01" deleted
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   3d4h   v1.16.1
k8s-node02   Ready    <none>   3d3h   v1.16.1
[root@k8s-master ~]#
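
For context: drain first cordons the node and then evicts its pods. --ignore-daemonsets is required because DaemonSet pods such as kube-flannel and kube-proxy cannot be evicted, and --delete-local-data allows evicting pods that use emptyDir volumes. On newer Kubernetes releases (v1.20+) that flag was renamed, so the equivalent commands would look like this (a sketch for newer versions, not run against this v1.16.1 cluster):

# On v1.20+ the flag was renamed to --delete-emptydir-data:
kubectl drain k8s-node01 --delete-emptydir-data --force --ignore-daemonsets
# Then remove the node object from the cluster as before:
kubectl delete node k8s-node01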

Run the following on the node itself. (The posts I found online said to run kubeadm reset but never specified on which machine; I ran it directly on the master and broke one whole cluster.)

[root@k8s-node01 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1219 20:47:15.318403   89075 reset.go:96] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get corresponding node: nodes "k8s-node01" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1219 20:47:19.276564   89075 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
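
As the reset output says, iptables/IPVS rules and kubeconfig files are not cleaned up automatically. A minimal cleanup sketch on the drained node, following those hints (the iptables flush assumes there are no other rules on the host you need to keep):

# Flush the iptables rules left behind by kube-proxy:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# If kube-proxy ran in IPVS mode, clear the IPVS tables too:
ipvsadm --clear
# Remove the stale kubeconfig that reset does not touch:
rm -rf $HOME/.kube/config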

Note: generate a fresh join token on the master:

[root@k8s-master pki]# kubeadm token create --print-join-command
kubeadm join 192.168.180.121:6443 --token p08kmc.nci5h0xfmlcw92vg     --discovery-token-ca-cert-hash sha256:44315d59e08f4d94bc75d20730b861818dfeda6517c1b228399f061f4256329b
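
Join tokens created this way are valid for 24 hours by default. To check which tokens already exist, or to mint one with a different lifetime, for example:

# List tokens that are still valid:
kubeadm token list
# Create a token with a custom TTL (the 2h value here is just an example):
kubeadm token create --ttl 2h --print-join-command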

Rejoin the worker node:

[root@k8s-node01 lib]# kubeadm join 192.168.180.121:6443 --token p08kmc.nci5h0xfmlcw92vg     --discovery-token-ca-cert-hash sha256:44315d59e08f4d94bc75d20730b861818dfeda6517c1b228399f061f4256329b
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-master pki]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3d4h    v1.16.1
k8s-node01   Ready    <none>   2m35s   v1.16.1
k8s-node02   Ready    <none>   3d3h    v1.16.1
[root@k8s-master pki]#
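
Beyond the Ready status, it is worth confirming that the DaemonSet pods drain skipped earlier (kube-flannel and kube-proxy) are running again on the rejoined node, for example:

kubectl get pods -n kube-system -o wide | grep k8s-node01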

