After a k8s reset, completely clean up the previous initialization
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/*
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/*
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
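The manual steps above can be collected into a single script. A sketch follows: it is only written to /tmp and syntax-checked here, not executed, so review it before running as root on a real node. `cni0` and `flannel.1` are flannel's device names; adjust them for your CNI plugin.

```shell
# Write the cleanup steps to a reviewable script instead of typing them one
# by one. "|| true" keeps the script going when a device does not exist.
cat > /tmp/k8s-cleanup.sh <<'EOF'
#!/usr/bin/env bash
kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
systemctl stop kubelet docker
rm -rf /var/lib/cni/* /var/lib/kubelet/* /etc/cni/*
# bring the CNI/docker interfaces down, then delete the CNI-created ones
for dev in cni0 flannel.1 docker0; do
    ip link set "$dev" down 2>/dev/null || true
done
ip link delete cni0 2>/dev/null || true
ip link delete flannel.1 2>/dev/null || true
systemctl start docker
EOF
chmod +x /tmp/k8s-cleanup.sh
bash -n /tmp/k8s-cleanup.sh && echo "cleanup script saved"
```

`ip link` replaces the deprecated `ifconfig` calls; `kubeadm reset -f` skips the interactive confirmation.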
Then run kubeadm init again.
3. journalctl -u kubelet shows the following error in the kubelet log
Kubernetes fails to start with:
kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
Cause:
docker and k8s are using different cgroup drivers.
Fix:
Make the two consistent, using either systemd or cgroupfs for both. The Kubernetes documentation notes that nodes configured to use cgroupfs for the kubelet and Docker, while systemd manages the rest of the processes on the node, become unstable under resource pressure. The recommendation is therefore to switch both docker and k8s to systemd.
Cgroup drivers
When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (cgroup) and acts as a cgroup manager. Systemd has a tight integration with cgroups and will allocate cgroups per process. It’s possible to configure your container runtime and the kubelet to use cgroupfs. Using cgroupfs alongside systemd means that there will then be two different cgroup managers.
Control groups are used to constrain resources that are allocated to processes. A single cgroup manager will simplify the view of what resources are being allocated and will by default have a more consistent view of the available and in-use resources. When we have two managers we end up with two views of those resources. We have seen cases in the field where nodes that are configured to use cgroupfs for the kubelet and Docker, and systemd for the rest of the processes running on the node becomes unstable under resource pressure.
How to change docker:
Modify or create /etc/docker/daemon.json and add the following:
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
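Note that the heredoc above replaces daemon.json wholesale; if the file already holds other options (registry mirrors, log settings, ...), they are lost. A sketch that merges just the one key, using python3; DAEMON_JSON defaults to a scratch path here so the example is safe to run, point it at /etc/docker/daemon.json on a real node:

```shell
DAEMON_JSON=${DAEMON_JSON:-/tmp/daemon.json.demo}
# seed a pretend pre-existing config for the demo path only
[ "$DAEMON_JSON" = /tmp/daemon.json.demo ] && \
    echo '{"registry-mirrors": ["https://mirror.example.com"]}' > "$DAEMON_JSON"
python3 - "$DAEMON_JSON" <<'EOF'
import json, os, sys

path = sys.argv[1]
cfg = {}
if os.path.exists(path):
    with open(path) as f:
        cfg = json.load(f)
cfg["exec-opts"] = ["native.cgroupdriver=systemd"]   # add/overwrite only this key
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
print("keys now in daemon.json:", sorted(cfg))
EOF
```

After the merge the demo file keeps its pre-existing "registry-mirrors" key alongside the new "exec-opts".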
Restart docker:
systemctl restart docker
How to change k8s:
(For docker itself, adding "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json as shown above is sufficient.)
Modify the kubelet:
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
vim /var/lib/kubelet/kubeadm-flags.env
Add --cgroup-driver=systemd to KUBELET_KUBEADM_ARGS, e.g.:
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --hostname-override=10.249.176.86 --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1"
Then restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet
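After both restarts it is worth confirming that the two drivers now agree. A small check script, saved and syntax-checked here; run it on the node itself, where docker and the kubelet config are present:

```shell
cat > /tmp/check-cgroup-driver.sh <<'EOF'
#!/usr/bin/env bash
# Both lines should report systemd once docker and the kubelet agree.
echo -n "docker:  "; docker info --format '{{.CgroupDriver}}'
echo -n "kubelet: "; grep -i 'cgroupDriver' /var/lib/kubelet/config.yaml
EOF
chmod +x /tmp/check-cgroup-driver.sh
bash -n /tmp/check-cgroup-driver.sh && echo "check script ready"
```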
4. Changing the kubeconfig's default namespace, so that operating on the default namespace becomes operating on a custom namespace yujia-k8s
Example: kubectl get pod -n yujia-k8s <<--- equivalent to --->> kubectl get pod
vim /root/.kube/config
Under contexts:, add the namespace you want:
contexts:
#- context:
#    cluster: kubernetes
#    user: kubernetes-admin
#  name: kubernetes-admin@kubernetes
#current-context: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    namespace: yujia-k8s
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
This way, kubectl commands that would target the default namespace operate on resources in the yujia-k8s namespace instead, which saves typing -n every time.
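Hand-editing works, but kubectl can make the same change itself: the kubectl config subcommands only rewrite the kubeconfig file and need no running cluster. A sketch against a scratch kubeconfig so it never touches ~/.kube/config (drop the KUBECONFIG line to act on the real file); the guard makes it a no-op where kubectl is absent:

```shell
export KUBECONFIG=/tmp/demo-kubeconfig
if command -v kubectl >/dev/null 2>&1; then
  # build a minimal kubeconfig matching the article's context names
  kubectl config set-cluster kubernetes --server=https://127.0.0.1:6443
  kubectl config set-credentials kubernetes-admin
  kubectl config set-context kubernetes-admin@kubernetes \
      --cluster=kubernetes --user=kubernetes-admin
  kubectl config use-context kubernetes-admin@kubernetes
  # the actual fix: pin the namespace on the current context
  kubectl config set-context --current --namespace=yujia-k8s
  kubectl config view --minify -o jsonpath='{..namespace}' && echo
else
  echo "kubectl not installed here; run the set-context line on the node"
fi
```

On a node with kubectl, the final view command should print yujia-k8s.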
5. A node cannot join the cluster
'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused
The kubelet is missing this file:
open /var/lib/kubelet/pki/kubelet.crt: no such file or directory
Fix: copy this certificate over from another node.
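The fix can be scripted roughly as below; the script is only saved and syntax-checked here. "node1" is a placeholder for the hostname of a node whose kubelet is healthy, and the kubelet normally needs both the .crt and its matching .key:

```shell
cat > /tmp/copy-kubelet-cert.sh <<'EOF'
#!/usr/bin/env bash
set -e
SRC=node1   # placeholder: a node whose kubelet is healthy
mkdir -p /var/lib/kubelet/pki
scp "root@${SRC}:/var/lib/kubelet/pki/kubelet.crt" /var/lib/kubelet/pki/
scp "root@${SRC}:/var/lib/kubelet/pki/kubelet.key" /var/lib/kubelet/pki/
systemctl restart kubelet
EOF
bash -n /tmp/copy-kubelet-cert.sh && echo "saved"
```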
————————————————
Copyright notice: this is an original article by CSDN blogger 「翟雨佳blogs」, licensed under CC 4.0 BY-SA; include the original source link and this notice when reposting.
原文鏈接:https://blog.csdn.net/yujia_666/article/details/107719919