Kubernetes kubeadm install notes
Note: these are rough notes on the errors encountered along the way
- kubernetes yum repo
cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
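With the repo in place, the tools can be installed and pinned. The exact versions below are an assumption chosen to match the v1.13.0 init used later; adjust them to your cluster:

```shell
# Illustrative install pinned to the version used by kubeadm init below.
yum install -y kubelet-1.13.0 kubeadm-1.13.0 kubectl-1.13.0
systemctl enable kubelet
```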
- If you hit the error below, ignore it; do not start kubelet yet, the file is created by kubeadm init
unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
kubeadm init \
--kubernetes-version=v1.13.0 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.233.140 \
--ignore-preflight-errors=Swap
- etcd fails to start with the following error
etcdmain: open /etc/kubernetes/pki/etcd/peer.crt: permission denied
Disable SELinux:
[root@localhost ~]# setenforce 0    # temporary
[root@localhost ~]# getenforce
Permissive
[root@localhost ~]# vim /etc/sysconfig/selinux    # permanent
# change SELINUX=enforcing to SELINUX=disabled
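The permanent change can also be scripted. This sketch runs the substitution against a scratch copy first so you can verify the effect; against the real host you would target /etc/sysconfig/selinux as root:

```shell
# Demonstrate the edit on a scratch copy of the selinux config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
# Same substitution you would run against /etc/sysconfig/selinux (as root):
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # SELINUX=disabled
rm -f "$cfg"
```

A reboot is still needed for the permanent setting to take effect; `setenforce 0` covers the current session.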
- If kubeadm init reports the following
[ERROR KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster
#---------
Align the versions: upgrade the control plane (or downgrade kubelet) so the two match
- Reset after a failed kubeadm init
$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
- After init, if kubectl get cs returns the following error
The connection to the server localhost:8080 was refused - did you specify the right host or port?
# or this error
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
#--------
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Save the join command printed at the end of kubeadm init
kubeadm join 192.168.233.140:6443 --token u80nom.6cqe1vomk37a2use --discovery-token-ca-cert-hash sha256:ed1e75a3aacfa74d0afcdb0b0035227cf7b06f93e31292cac0121426f291c9e4
# -----------
# forgot the token?
kubeadm token create
# forgot the sha256 hash?
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
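The pipeline above just prints the sha256 of the CA's public key. You can see the output format by running it against any throwaway self-signed cert (the temp files here are purely illustrative):

```shell
# Generate a scratch key and self-signed cert, then hash its public key
# exactly as done for /etc/kubernetes/pki/ca.crt above.
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -subj "/CN=demo-ca" -days 1 2>/dev/null
openssl x509 -pubkey -in "$crt" | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
rm -f "$key" "$crt"
```

Also note that `kubeadm token create --print-join-command` prints a complete, ready-to-paste join command in one step.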
- Node NotReady; kubectl describe shows the error below
cni config uninitialized
# ----
# two places to check/change:
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# edit /var/lib/kubelet/kubeadm-flags.env and remove the network settings from this line:
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
# then restart kubelet:
systemctl daemon-reload && systemctl restart kubelet
# load the ipvs kernel modules
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
# load the ipvs modules at boot
cat <<EOF >> /etc/rc.local
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
EOF
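A quick way to confirm the modules actually loaded (the names are the same five modprobed above):

```shell
# Each loaded module should appear in the kernel module list.
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  lsmod | grep -q "^$m " && echo "$m loaded" || echo "$m missing"
done
```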
# configure kubelet to pull the pause image from a China-local mirror
DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
echo $DOCKER_CGROUPS
cat <<EOF> /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
# start kubelet
systemctl daemon-reload
systemctl enable kubelet && systemctl restart kubelet
- Create nginx pods via a ReplicationController
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
- To delete the pods, delete the rc; its pods are removed automatically. Deleting a pod directly is pointless, the rc just recreates it
kubectl delete rc nginx-controller
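Assuming the manifest above is saved as nginx-rc.yaml (the filename is illustrative), the usual create/inspect/delete cycle is:

```shell
kubectl create -f nginx-rc.yaml     # create the rc and its 2 replicas
kubectl get rc,pods -l name=nginx   # watch the replicas come up
kubectl delete rc nginx-controller  # removes the rc and its pods together
```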
- node(s) had taints that the pod didn't tolerate
# for safety, kubernetes refuses to schedule pods on the master node by default
# remove the taint only if you want the master to run workloads:
kubectl taint nodes --all node-role.kubernetes.io/master-
- Make the dashboard reachable from outside
kubectl proxy --address=192.168.233.140 --disable-filter=true
# the address is the master's ip
- List pods in every namespace (including kube-system)
kubectl get pods --all-namespaces
- Describe / fetch logs of kube-system pods
kubectl describe pod -n kube-system etcd-master
kubectl --namespace kube-system logs kube-flannel-ds-amd64-c7rfz
- Grant the dashboard ServiceAccount cluster-admin rights
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-ddskx
- Change the dashboard service type to NodePort
kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system
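After patching, the allocated node port can be read back with a standard jsonpath query:

```shell
# Print the NodePort assigned to the dashboard service.
kubectl -n kube-system get svc kubernetes-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}'
```

The dashboard is then reachable at https://<node-ip>:<that-port>.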
https://www.cnblogs.com/klvchen/p/9963642.html
https://blog.csdn.net/u012375924/article/details/78987263
- [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
# edit /etc/sysctl.d/k8s.conf and add the two lines below
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# then apply:
sysctl -p /etc/sysctl.d/k8s.conf
# edit /usr/lib/sysctl.d/00-system.conf and add the line below
net.ipv4.ip_forward=1
# then restart networking:
systemctl restart network
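To confirm the settings took effect, the values can be read straight from /proc (on a properly configured node both should be 1):

```shell
# Both values should print 1 after the changes above.
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter not loaded"
```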
Note: in my case this routing error appeared after kubeadm reset, and both of the steps above were needed before it went away
