Installing a Kubernetes Cluster on CentOS 7 with kubeadm, Using IPVS for High Availability


1. Preparation

1.1 System Configuration

Before installing, complete the following preparation. Three CentOS hosts are used; they are listed in /etc/hosts below.
Configure the yum repository (this guide uses Tencent Cloud's mirrors).

Back up the old configuration before replacing it:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
Repository configuration for each CentOS version:
centos5
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos5_base.repo
centos6
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos6_base.repo
centos7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
Refresh the cache:
yum clean all
yum makecache

cat /etc/hosts

192.168.233.251 k8sMaster
192.168.233.170 k8sNode1
192.168.233.35  k8sNode2
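
If the hostnames have not been set yet, one way to set them is shown below (run the matching command on each machine). Note that Kubernetes node names must be lowercase, which is why the node names later in this guide appear as k8smaster rather than k8sMaster:

hostnamectl set-hostname k8smaster   # on the master
hostnamectl set-hostname k8snode1    # on node 1
hostnamectl set-hostname k8snode2    # on node 2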

Disable swap:
Temporarily (until reboot):
swapoff -a
Permanently (delete or comment out the swap line, then reboot):
vim /etc/fstab
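
If you prefer not to open an editor, a one-liner that comments out the swap entry (assuming GNU sed and that no other fstab line contains a whitespace-delimited swap field) is:

sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab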

Stop and disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux (setenforce takes effect immediately; the config file change persists across reboots):
setenforce 0

vi /etc/selinux/config
SELINUX=disabled
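
The config file change can also be made non-interactively; this sed expression assumes the stock CentOS 7 file, where the line reads SELINUX=enforcing:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config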

Pass bridged IPv4 traffic to iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the settings (the br_netfilter module must be loaded first, otherwise the bridge sysctls do not exist):
modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf
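
modprobe only loads br_netfilter for the current boot. To have systemd reload it automatically on every boot, one option is a modules-load.d entry:

cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF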

1.2 Prerequisites for Enabling IPVS in kube-proxy

IPVS is already part of the mainline kernel, so the only prerequisite for enabling IPVS in kube-proxy is loading the following kernel modules:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on every Kubernetes node (node1, node2, and the master, since kube-proxy runs on all of them):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are reloaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the modules loaded correctly.

Install the ipset package on all nodes:
yum install ipset -y
To make it easy to inspect IPVS rules, also install ipvsadm (optional):
yum install ipvsadm -y

1.3 Install Docker (All Nodes)

Kubernetes' default CRI (container runtime) is Docker, so install Docker first; kubeadm, kubelet, and kubectl follow later in this section.
Configure a domestic Docker repository (Aliyun):

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Note: if you need to install a specific version of docker-ce, refer to the commands below.

Installing a specific docker-ce version [optional]

List the available 18.09 releases of docker-ce, then install one:
yum list available docker-ce* --showduplicates | grep 18.09

docker-ce.x86_64                3:18.09.0-3.el7                 docker-ce-stable
docker-ce.x86_64                3:18.09.1-3.el7                 docker-ce-stable
docker-ce.x86_64                3:18.09.2-3.el7                 docker-ce-stable
docker-ce.x86_64                3:18.09.3-3.el7                 docker-ce-stable
docker-ce.x86_64                3:18.09.4-3.el7                 docker-ce-stable
docker-ce.x86_64                3:18.09.5-3.el7                 docker-ce-stable
docker-ce.x86_64                3:18.09.6-3.el7                 docker-ce-stable
docker-ce.x86_64                3:18.09.7-3.el7                 docker-ce-stable
docker-ce.x86_64                3:18.09.8-3.el7                 docker-ce-stable
docker-ce-cli.x86_64            1:18.09.0-3.el7                 docker-ce-stable
docker-ce-cli.x86_64            1:18.09.1-3.el7                 docker-ce-stable
docker-ce-cli.x86_64            1:18.09.2-3.el7                 docker-ce-stable
docker-ce-cli.x86_64            1:18.09.3-3.el7                 docker-ce-stable
docker-ce-cli.x86_64            1:18.09.4-3.el7                 docker-ce-stable
docker-ce-cli.x86_64            1:18.09.5-3.el7                 docker-ce-stable
docker-ce-cli.x86_64            1:18.09.6-3.el7                 docker-ce-stable
docker-ce-cli.x86_64            1:18.09.7-3.el7                 docker-ce-stable
docker-ce-cli.x86_64            1:18.09.8-3.el7                 docker-ce-stable

yum install -y docker-ce-18.09.8-3.el7
systemctl enable docker && systemctl start docker
docker --version

Installing the latest docker-ce:

yum -y install docker-ce
systemctl enable docker && systemctl start docker
docker --version
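
kubeadm's preflight checks warn when Docker's cgroup driver is cgroupfs rather than the recommended systemd. An optional daemon.json along the following lines silences that warning (a sketch, not required for the install to succeed); restart Docker after writing it:

mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker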

Kubernetes repository (Aliyun):

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Manually import the GPG keys (or set gpgcheck=0 to skip verification):
rpmkeys --import https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpmkeys --import https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Install kubeadm, kubelet, and kubectl:

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
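
The init step below pins --kubernetes-version v1.14.2, so you may want to install matching package versions rather than the latest (a sketch; these exact package versions are assumed to still be available in the Aliyun repo):

yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2
systemctl enable kubelet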

2. Deploying Kubernetes

Initialize the master:

kubeadm init \
--apiserver-advertise-address=192.168.233.251 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.14.2 \
--pod-network-cidr=10.244.0.0/16

The --pod-network-cidr=10.244.0.0/16 value matches flannel's default network and is required by the flannel add-on installed below.

Watch for this in the output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.233.251:6443 --token a9vg9z.dlboqvfuwwzauufq \
    --discovery-token-ca-cert-hash sha256:c2ade88a856f15de80240ff4994661a6daa668113cea0c4a4073f701f05192cb

Run the following to initialize the current user's kubeconfig (kubectl needs it):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the pod network add-on (flannel):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Run the join command below on each node to add it to the cluster:
kubeadm join 192.168.233.251:6443 --token a9vg9z.dlboqvfuwwzauufq --discovery-token-ca-cert-hash sha256:c2ade88a856f15de80240ff4994661a6daa668113cea0c4a4073f701f05192cb
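
The bootstrap token printed by kubeadm init expires after 24 hours. If it has expired by the time a node joins, generate a fresh join command on the master:

kubeadm token create --print-join-command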

Check the status of the cluster components:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

If cluster initialization runs into problems, clean up with the following commands:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
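
kubeadm reset does not clean up iptables or IPVS rules; its own output suggests clearing them manually, roughly as follows:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear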

Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state.

[root@k8smaster centos]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE     IP                NODE                  NOMINATED NODE   READINESS GATES
kube-system   coredns-8686dcc4fd-5h9xc                      1/1     Running   0          15m     10.244.0.3        k8smaster.novalocal   <none>           <none>
kube-system   coredns-8686dcc4fd-8w6l2                      1/1     Running   0          15m     10.244.0.2        k8smaster.novalocal   <none>           <none>
kube-system   etcd-k8smaster.novalocal                      1/1     Running   0          14m     192.168.233.251   k8smaster.novalocal   <none>           <none>
kube-system   kube-apiserver-k8smaster.novalocal            1/1     Running   0          14m     192.168.233.251   k8smaster.novalocal   <none>           <none>
kube-system   kube-controller-manager-k8smaster.novalocal   1/1     Running   0          14m     192.168.233.251   k8smaster.novalocal   <none>           <none>
kube-system   kube-flannel-ds-amd64-2mfgq                   1/1     Running   0          3m34s   192.168.233.35    k8snode2.novalocal    <none>           <none>
kube-system   kube-flannel-ds-amd64-8twxz                   1/1     Running   0          3m34s   192.168.233.251   k8smaster.novalocal   <none>           <none>
kube-system   kube-flannel-ds-amd64-sbd6n                   1/1     Running   0          3m34s   192.168.233.170   k8snode1.novalocal    <none>           <none>
kube-system   kube-proxy-2m5jh                              1/1     Running   0          15m     192.168.233.251   k8smaster.novalocal   <none>           <none>
kube-system   kube-proxy-nfzfl                              1/1     Running   0          10m     192.168.233.170   k8snode1.novalocal    <none>           <none>
kube-system   kube-proxy-shxdt                              1/1     Running   0          9m47s   192.168.233.35    k8snode2.novalocal    <none>           <none>
kube-system   kube-scheduler-k8smaster.novalocal            1/1     Running   0          14m     192.168.233.251   k8smaster.novalocal   <none>           <none>

2.1 Allowing the Master Node to Run Workloads

In a cluster initialized with kubeadm, Pods are not scheduled onto the master node for security reasons; that is, the master does not take part in the workload. This is because the master node carries the node-role.kubernetes.io/master:NoSchedule taint:
View the taint:

kubectl describe node k8smaster.novalocal |grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Run this command to remove the taint:

kubectl taint nodes k8smaster.novalocal node-role.kubernetes.io/master:NoSchedule-
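
Should you later want to keep ordinary workloads off the master again, the taint can be restored:

kubectl taint nodes k8smaster.novalocal node-role.kubernetes.io/master=:NoSchedule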

Test DNS:

[root@k8smaster centos]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-66bdcf564-4c42d:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
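
As the deprecation warning above notes, kubectl run created a deployment here, so clean up the test pod with:

kubectl delete deployment curl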

Enabling IPVS in kube-proxy

# Edit config.conf in the kube-system/kube-proxy ConfigMap: change mode: "" to mode: "ipvs", then save and exit
[root@k8smaster centos]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
### Delete the old kube-proxy pods (they will be recreated with the new config)
[root@k8smaster centos]# kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-2m5jh" deleted
pod "kube-proxy-nfzfl" deleted
pod "kube-proxy-shxdt" deleted
# Check that the recreated kube-proxy pods are running
[root@k8smaster centos]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-54qnw                              1/1     Running   0          24s
kube-proxy-bzssq                              1/1     Running   0          14s
kube-proxy-cvlcm                              1/1     Running   0          37s
# Check the logs; if you see `Using ipvs Proxier.` then kube-proxy's IPVS mode was enabled successfully!
[root@k8smaster centos]# kubectl logs kube-proxy-54qnw -n kube-system
I0518 20:24:09.319160       1 server_others.go:176] Using ipvs Proxier.
W0518 20:24:09.319751       1 proxier.go:386] IPVS scheduler not specified, use rr by default
I0518 20:24:09.320035       1 server.go:562] Version: v1.14.2
I0518 20:24:09.334372       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0518 20:24:09.334853       1 config.go:102] Starting endpoints config controller
I0518 20:24:09.334916       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0518 20:24:09.334945       1 config.go:202] Starting service config controller
I0518 20:24:09.334976       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0518 20:24:09.435153       1 controller_utils.go:1034] Caches are synced for service config controller
I0518 20:24:09.435271       1 controller_utils.go:1034] Caches are synced for endpoints config controller
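
Since ipvsadm was installed earlier, the generated rules can also be inspected directly; the kubernetes service's cluster IP (10.96.0.1 above) should appear as a virtual server using the rr scheduler:

ipvsadm -Ln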

At this point the installation is essentially complete.

References:
Official kubeadm docs:
https://k8smeetup.github.io/docs/admin/kubeadm/
On Taints and Tolerations:
https://blog.51cto.com/newfly/2067531
Kubernetes installation:
https://www.kubernetes.org.cn/4956.html

