k8s Error Troubleshooting Notes


1.

etcd fails to start. Reference: https://www.cnblogs.com/ericnie/p/6886016.html

[root@lab3 k8s]# systemctl start etcd
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.

[root@lab3 k8s]# journalctl -xe

Jul 18 02:25:58 lab3 etcd[5649]: the server is already initialized as member before, starting as etcd member...



The key line:

raft save state and entries error: open /var/lib/etcd/default.etcd/member/wal/0.tmp: is a directory

Solution:

Go into that directory and delete 0.tmp; etcd then starts.
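A minimal sketch of the cleanup, using the WAL path from the error above:

rm -rf /var/lib/etcd/default.etcd/member/wal/0.tmp   # remove the stray temp directory
systemctl start etcd
systemctl status etcd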



After deleting it, I also wiped all of the configured etcd directories on node3 and reconfigured it from scratch.





2.

Problem: kube-scheduler fails to start; the log shows:

WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

[root@lab1 ~]# systemctl status kube-scheduler -l
● kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Thu 2018-07-19 01:49:06 EDT; 13min ago
     Docs: https://github.com/kubernetes/kubernetes
  Process: 13107 ExecStart=/usr/local/kubernetes/bin/kube-scheduler $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBECONFIG $KUBE_SCHEDULER_ARGS (code=exited, status=1/FAILURE)
 Main PID: 13107 (code=exited, status=1/FAILURE)

Jul 19 01:49:06 lab1 systemd[1]: kube-scheduler.service: main process exited, code=exited, status=1/FAILURE
Jul 19 01:49:06 lab1 kube-scheduler[13107]: W0719 01:49:06.562968   13107 options.go:148] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.



Cause: I did not follow the document carefully. The "generate kubeconfig" step is actually several small sub-steps, but I pasted the whole block in at once.

Solution: reinstall and run every step individually; do not cut corners.
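For reference, a hedged sketch of what those kubeconfig sub-steps usually look like, run one at a time; the CA path, cert file names, and <master-ip> are placeholders, not values from this document:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://<master-ip>:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig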





3.



Problem: kubelet fails to start on the node:


[root@lab1 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Thu 2018-07-19 21:38:57 EDT; 3s ago
     Docs: https://github.com/kubernetes/kubernetes
  Process: 3243 ExecStart=/usr/local/kubernetes/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_CONFIG $KUBELET_HOSTNAME $KUBELET_POD_INFRA_CONTAINER $KUBELET_ARGS (code=exited, status=255)
 Main PID: 3243 (code=exited, status=255)



Solution: install the Kubernetes binaries on the node as well. Following the document, they had not been installed on the node, hence the error:


cd /server/software/k8s
wget https://dl.k8s.io/v1.11.0/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
# install the binaries under a versioned directory and symlink it into place
mkdir -pv /usr/local/kubernetes-v1.11.0/bin
cp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubectl /usr/local/kubernetes-v1.11.0/bin
ln -sv /usr/local/kubernetes-v1.11.0 /usr/local/kubernetes
# put kubectl on the PATH and verify it runs
cp /usr/local/kubernetes/bin/kubectl /usr/local/bin/kubectl
kubectl version
cd $HOME
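With the binaries in place, kubelet should restart cleanly:

systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet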





4.

Problem:

[root@lab2 k8s]# kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?



Solution:

Method 1:

[root@lab2 kubernetes]# export KUBECONFIG=/etc/kubernetes/admin.conf        # point kubectl at the admin kubeconfig


Method 2:

Copy admin.conf from the master node to /etc/kubernetes/ on the node.
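A one-line sketch of that copy, assuming lab1 is the master:

scp root@lab1:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf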

Then run:

rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   # kubectl reads $HOME/.kube/config by default
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get no
 






5.

Error:

[root@lab1 flannel]# kubectl get pods -n kube-system
NAME                    READY     STATUS         RESTARTS   AGE
kube-flannel-ds-4hdsh   0/1       ErrImagePull   0          1m
kube-flannel-ds-7gmwt   0/1       ErrImagePull   0          1m
kube-flannel-ds-cbk5z   0/1       ErrImagePull   0          1m
[root@lab1 flannel]# 


Solution: wait a while; this startup is slow. The first time the pods did not come up right away, which was a scare, but after a few minutes they were Running:

[root@lab1 flannel]# kubectl get pods -n kube-system
NAME                    READY     STATUS    RESTARTS   AGE
kube-flannel-ds-4hdsh   1/1       Running   0          6m
kube-flannel-ds-7gmwt   1/1       Running   0          6m
kube-flannel-ds-cbk5z   1/1       Running   0          6m
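Instead of re-running kubectl get by hand, the standard flags let you watch the pods come up and inspect the pull:

kubectl get pods -n kube-system -w                            # stream status changes
kubectl describe pod kube-flannel-ds-4hdsh -n kube-system     # the Events section shows image-pull progress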




6.


coredns will not start:


[root@lab1 coredns]# kubectl get pods -n kube-system
NAME                       READY     STATUS              RESTARTS   AGE
coredns-6975654877-d6q9z   0/1       ContainerCreating   0          21s
coredns-6975654877-k48wq   0/1       ContainerCreating   0          21s
kube-flannel-ds-d2tff      1/1       Running             0          3m
kube-flannel-ds-qnnpg      1/1       Running             0          3m
kube-flannel-ds-t2pxx      1/1       Running             0          3m




Solution:

When configuring the flannel network with kube-flannel.yml, the NIC must match the host: change "- --iface=eth1" inside kube-flannel.yml to this machine's actual interface, as sketched below.
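A hedged sketch of finding and substituting the interface; ens33 is an example name, not from this setup:

ip addr show                                              # find the interface that carries this node's IP
sed -i 's/--iface=eth1/--iface=ens33/' kube-flannel.yml   # replace ens33 with your NIC
kubectl delete -f kube-flannel.yml
kubectl create -f kube-flannel.yml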



   





7.

[root@lab1 ~]# systemctl status etcd
Aug 16 17:01:07 lab1 etcd[9526]: failed to dial d35b4e3738b04cd7 on stream MsgApp v2 (dial tcp 10.1.1.111:2380: getsockopt:...efused)

Solution: the firewall on the master had not been turned off; turning it off fixes the refused connection.
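On CentOS 7 that means stopping firewalld; keeping the firewall and opening only the etcd ports is an alternative:

systemctl stop firewalld
systemctl disable firewalld
# or, keep firewalld running and open the etcd client/peer ports:
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --reload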




8.

[root@lab1 ~]# kubectl get no
Unable to connect to the server: Forbidden


Solution:

I never found the root cause; rebooting all three machines cleared it.



9.

Problem: after creating flannel, the pods flap: Running for a moment, then crashing.

Solution:

When installing kube-proxy, do not choose ipvs mode: on CentOS 7, ipvs mode does not work with 1.11.0; it is fine from 1.11.1 onward.
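--proxy-mode is a standard kube-proxy flag; where exactly it lives in this setup's unit or config file is an assumption:

# in the kube-proxy startup arguments:
#   --proxy-mode=iptables        # instead of --proxy-mode=ipvs
systemctl daemon-reload
systemctl restart kube-proxy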




10.

The errors below appeared, and at the same time the test database started having MySQL connection problems.


[root@lab2 ~]# kubectl get no
E0828 11:06:56.233812    2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.235504    2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.235505    2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.236281    2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.236765    2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.236772    2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.237298    2504 round_trippers.go:169] CancelRequest not implemented



Solution:

Nothing I tried helped. Either the cloud hosts have lost connectivity to each other, or the machines were compromised; the first is much more likely.









11.

A pod stays stuck in the ContainerCreating state:

[root@node2 coredns]# kubectl get po -n kube-system
NAME                       READY     STATUS              RESTARTS   AGE
coredns-55f86bf584-4rzwj   0/1       ContainerCreating   0          8s
coredns-55f86bf584-dp8gp   0/1       ContainerCreating   0          8s
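kubectl describe shows why a pod is stuck (pod name taken from the listing above); the Events section points at the failing image pull:

kubectl describe pod coredns-55f86bf584-4rzwj -n kube-system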


Solution:

http://www.mamicode.com/info-detail-2310522.html

Check /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt (the path described in the reference above): it is a symlink, but its target under /etc/rhsm does not actually exist, so install it with yum:
yum install *rhsm* -y

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

These two commands generate the /etc/rhsm/ca/redhat-uep.pem file.

Restart docker:
systemctl restart docker


[root@node2 coredns]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
latest: Pulling from rhel7/pod-infrastructure
26e5ed6899db: Pull complete 
66dbe984a319: Pull complete 
9138e7863e08: Pull complete 
Digest: sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931
Status: Image is up to date for registry.access.redhat.com/rhel7/pod-infrastructure:latest

[root@node2 coredns]# kubectl delete -f .
[root@node2 coredns]# kubectl create -f .
[root@node2 coredns]# kubectl get po -n kube-system
NAME                       READY     STATUS    RESTARTS   AGE
coredns-55f86bf584-4rzwj   1/1       Running   0          5m
coredns-55f86bf584-dp8gp   1/1       Running   0          5m






12.

A pod stays stuck in the Terminating state.

Force-delete it:

kubectl delete pods --all --grace-period=0 --force
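Note that --all force-deletes every pod in the current namespace; a narrower variant for a single stuck pod (name and namespace are placeholders):

kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force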


Restart kube-apiserver.

Restart kubelet and docker; if nothing else works, reboot the system.

If the pod is still there after the restart, force-delete it again.

 

