首先准備5台centos7 ecs實例最低要求2c4G 開啟SLB(私網)
Here we build the high-availability cluster with the stacked-etcd topology. Because the cluster's etcd uses the Raft algorithm to guarantee consistency, high availability requires at least 3 master nodes (an odd number, so a quorum survives the loss of one) plus 2 worker nodes.
```
master01  172.26.0.1
master02  172.26.0.2
master03  172.26.0.3
worker01  172.26.0.4
worker02  172.26.0.5
slb       172.26.0.99
```
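The quorum arithmetic behind the 3-master choice can be sketched quickly (this is general Raft math, not a command from the original setup):

```shell
# Raft needs a majority of floor(n/2)+1 healthy etcd members;
# with n=3 stacked masters the cluster therefore tolerates 1 failure.
n=3
quorum=$(( n / 2 + 1 ))
tolerated=$(( n - quorum ))
echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
```

Note that going from 3 to 4 masters does not improve fault tolerance (quorum rises to 3, still only 1 tolerated failure), which is why odd member counts are used.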
First, run the following script on every machine; it installs Docker plus the Kubernetes trio (kubelet, kubeadm, kubectl):
```shell
yum remove docker docker-client docker-client-latest docker-common \
    docker-latest docker-latest-logrotate docker-logrotate docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker.service
systemctl enable docker.service
cat >/etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://<your-registry-mirror>"]
}
EOF
systemctl daemon-reload
systemctl restart docker.service
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
systemctl enable kubelet.service
```
Since Kubernetes officially announced that Docker (dockershim) would be gradually deprecated from 1.20 onward, a variant is also provided that builds the cluster on containerd as the standard OCI runtime:
```shell
yum install -y yum-utils libseccomp
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y containerd
containerd config default > /etc/containerd/config.toml
systemctl enable containerd
systemctl start containerd
sed -i 's:k8s.gcr.io/pause:registry.aliyuncs.com/google_containers/pause:g' /etc/containerd/config.toml
systemctl daemon-reload
systemctl restart containerd
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4
systemctl enable kubelet.service
setenforce 0
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=cri \
  --kubernetes-version=1.19.4
```
Optional: install crictl (a replacement for the docker CLI when running on containerd):

```shell
VERSION="v1.19.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz
echo "runtime-endpoint: unix:///run/containerd/containerd.sock" > /etc/crictl.yaml
```
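Once installed, crictl covers the day-to-day inspection tasks the docker CLI used to handle; a few commonly used subcommands (run on a node with containerd and the crictl.yaml above in place):

```shell
# List running containers (roughly "docker ps")
crictl ps
# List pod sandboxes on this node
crictl pods
# List images pulled by the container runtime
crictl images
# Tail logs of a container by its ID from "crictl ps"
# (the ID shown here is a placeholder)
crictl logs <container-id>
```

These all talk to containerd through the `runtime-endpoint` configured in /etc/crictl.yaml, so no Docker daemon is involved.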
Next, log in to master01, add the k8s API address to /etc/hosts (this is the key to the high-availability setup), and initialize the cluster:
```shell
cat >>/etc/hosts <<EOF
172.26.0.1 k8sapi
EOF
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=cri \
  --control-plane-endpoint "k8sapi:6443" --kubernetes-version=1.17.3
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
After initialization succeeds, map the SLB to master01's TCP port 6443 and copy the certificates generated on master01 to machines 02 and 03. Log in to 02 and 03 and run on each:
```shell
cat >>/etc/hosts <<EOF
172.26.0.99 k8sapi
EOF
mkdir -p /etc/kubernetes/pki/etcd
```
Log back in to master01 and run:
```shell
cd /etc/kubernetes/pki/
scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key root@172.26.0.2:/etc/kubernetes/pki/
scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key root@172.26.0.3:/etc/kubernetes/pki/
cd etcd
scp ca.crt ca.key root@172.26.0.2:/etc/kubernetes/pki/etcd/
scp ca.crt ca.key root@172.26.0.3:/etc/kubernetes/pki/etcd/
```
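The same copy can be expressed as a loop, which scales better if more control-plane nodes are added later (a sketch assuming root SSH access from master01 to the other masters):

```shell
# Distribute the shared control-plane certificates to each additional master.
# The file list matches the scp commands above.
for host in 172.26.0.2 172.26.0.3; do
  scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} \
      root@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} \
      root@$host:/etc/kubernetes/pki/etcd/
done
```

Only the CA and service-account key material is copied; each joining master generates its own serving certificates from these during `kubeadm join`.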
Log in to 02 and 03 again and run on each:
```shell
kubeadm join k8sapi:6443 --token xxx \
  --discovery-token-ca-cert-hash sha256:xxx --control-plane
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
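After each control-plane join it is worth confirming that the stacked etcd cluster actually has three members. One way is to run etcdctl inside the etcd static pod from any master (a sketch; the pod name is assumed to follow the usual `etcd-<hostname>` pattern):

```shell
# List etcd members through the etcd pod running on master01.
kubectl -n kube-system exec etcd-master01 -- etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  member list
```

Three started members in the output means the Raft quorum discussed earlier is in place.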
Log in to 04 and 05 and run:
```shell
cat >>/etc/hosts <<EOF
172.26.0.99 k8sapi
EOF
kubeadm join k8sapi:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx
```
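The bootstrap token printed by `kubeadm init` expires after 24 hours by default; if the workers are joined later than that, a fresh join command can be generated on any master:

```shell
# Create a new bootstrap token and print the full "kubeadm join" command,
# including the current discovery-token-ca-cert-hash.
kubeadm token create --print-join-command
```

Run the printed command on the worker instead of reusing the expired token.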
Finally, log back in to 02 and 03, edit /etc/hosts so that k8sapi points to each node's own internal IP (an SLB backend generally cannot reach itself through the SLB, so each master must resolve the API endpoint locally), and add port mappings for 02 and 03 to the SLB. Then, from any master node, install the network plugin:
```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
Note that it is best to raise flannel's CPU and memory limits; otherwise flannel is easily OOM-killed, its pods fail to restart, and the cluster network stalls:

```shell
kubectl edit daemonset.apps/kube-flannel-ds -n kube-system -o yaml
```
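For a non-interactive alternative to `kubectl edit`, the limits can be changed with a JSON patch. The sketch below assumes the flannel manifest's usual layout (the kube-flannel container is the first in the pod spec and already declares `resources.limits`); the 200m/200Mi values are illustrative, not a recommendation from the original text:

```shell
# Raise the CPU and memory limits of the kube-flannel container in place.
kubectl -n kube-system patch daemonset kube-flannel-ds --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/cpu", "value": "200m"},
  {"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "200Mi"}
]'
```

The DaemonSet controller rolls the flannel pods automatically after the patch.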
The final result looks like this:
```
$ kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   92m   v1.17.3
master02   Ready    master   50m   v1.17.3
master03   Ready    master   51m   v1.17.3
worker01   Ready    <none>   77m   v1.17.3
worker02   Ready    <none>   50m   v1.17.3
```