Kubernetes, as a container cluster system, provides application-layer high availability: health checks plus restart policies give Pods self-healing, the scheduler distributes Pods across nodes while maintaining the desired replica count, and when a Node fails its Pods are automatically brought up on other Nodes.
For the Kubernetes cluster itself, high availability must also be considered on two more levels: the Etcd database and the Kubernetes Master components. Etcd is already highly available because we built it as a three-node cluster, so this section explains and implements high availability for the Master nodes.
The Master node acts as the control center of the cluster: it continuously communicates with the kubelet on every worker node to keep the whole cluster healthy. If the Master node fails, no cluster management can be done through kubectl or the API.
The Master node runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager and kube-scheduler are already highly available through their built-in leader election, so Master high availability is mainly about kube-apiserver. Since kube-apiserver serves an HTTP API, making it highly available is much like making a web server highly available: put a load balancer in front of it, and it can also be scaled out horizontally.
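A quick way to see that leader election in action (in v1.18 the lock is typically recorded as an annotation on an Endpoints object in kube-system; newer releases use Lease objects instead):

# Show which master currently holds the kube-controller-manager leader lock
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
# And the kube-scheduler lock
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity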
Given the overall server plan and the limits of our test resources, this HA setup adds three servers to the existing cluster (k8s-master-2 plus the two load balancer nodes); the full layout is shown in the table below.
For the single-master binary deployment this builds on, see: https://www.cnblogs.com/huanglingfa/p/13773234.html
| Role | IP | Components |
| ---- | -- | ---------- |
| k8s-master-1 | 192.168.10.160 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| k8s-master-2 | 192.168.10.166 | kube-apiserver, kube-controller-manager, kube-scheduler |
| k8s-node-1 | 192.168.10.161 | kubelet, kube-proxy, docker, etcd |
| k8s-node-2 | 192.168.10.162 | kubelet, kube-proxy, docker, etcd |
| k8s-lb-master | 192.168.10.164, 192.168.10.168 (VIP) | Nginx (L4), keepalived |
| k8s-lb-backup | 192.168.10.165 | Nginx (L4), keepalived |
Multi-Master architecture diagram:
Basic optimization
1. Time synchronization

echo "#time sync by fage at 2020-7-22" >>/var/spool/cron/root
echo "*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1" >>/var/spool/cron/root
systemctl restart crond.service

2. Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

3. Set the hostnames (run the matching command on each new machine)

hostname k8s-master-2 && echo "k8s-master-2" >/etc/hostname
hostname k8s-lb-A && echo "k8s-lb-A" >/etc/hostname
hostname k8s-lb-B && echo "k8s-lb-B" >/etc/hostname

4. Update the hosts file

cat >/etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.160 k8s-master-1
192.168.10.161 k8s-node-1
192.168.10.162 k8s-node-2
192.168.10.163 k8s-node-3
192.168.10.164 k8s-lb-A
192.168.10.165 k8s-lb-B
192.168.10.166 k8s-master-2
EOF

5. Disable the swap device on the node machines (if you leave swap enabled, you must declare it in the kubelet configuration)

swapoff -a
sed -i "s@/dev/mapper/centos-swap swap@#/dev/mapper/centos-swap swap@g" /etc/fstab

6. Pass bridged IPv4 traffic to the iptables chains (note: skip this on the nginx nodes)

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Set the time zone
\cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
systemctl restart crond.service
Tip: if possible, set up passwordless SSH from the master to the node machines in the cluster; it makes the following operations much more convenient.
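A minimal sketch of that passwordless setup, run from k8s-master-1 (the target IPs are taken from the table above):

# Generate a key pair once (accept the defaults, empty passphrase)
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa
# Push the public key to every machine we will scp files to or manage remotely
for host in 192.168.10.161 192.168.10.162 192.168.10.164 192.168.10.165 192.168.10.166; do
  ssh-copy-id root@$host
done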
I. Install Docker
# Option 1: copy the docker files from master-1 directly to master-2
cd /usr/bin/
scp -r /usr/lib/systemd/system/docker.service root@192.168.10.166:/usr/lib/systemd/system/
scp -r containerd containerd-shim docker dockerd docker-init docker-proxy runc root@192.168.10.166:/usr/bin/
scp -r /etc/docker root@192.168.10.166:/etc/
systemctl daemon-reload && systemctl start docker && systemctl enable docker

# Option 2: or install from the yum mirror
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version

# Configure a Docker registry mirror (image accelerator)
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker.service
docker info
II. Deploy the Master-2 node (192.168.10.166)
Master-2 is configured exactly the same way as the already deployed Master-1, so we only need to copy all of Master-1's K8s files over, change the server IP and hostname in the configs, and start the services.
1. Create the etcd certificate directory
Create the etcd certificate directory on Master-2:
mkdir -p /opt/etcd/ssl
2. Copy files (run on Master-1)
Copy all K8s files and the etcd certificates from Master-1 to Master-2:
scp -r /opt/kubernetes root@192.168.10.166:/opt
scp -r /opt/cni/ root@192.168.10.166:/opt
scp -r /opt/etcd/ssl root@192.168.10.166:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.10.166:/usr/lib/systemd/system
scp /usr/bin/kubectl root@192.168.10.166:/usr/bin
3. Delete the copied certificate files (run on Master-2)
Delete the kubelet certificate and kubeconfig file (they were issued for Master-1 and will be regenerated when this node's kubelet bootstraps):
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
4. Change the IPs and hostname in the config files (run on Master-2)
Change the apiserver, kubelet and kube-proxy configuration files to use the local IP and hostname:
vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.10.166 \
--advertise-address=192.168.10.166 \
...

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master-2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master-2
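Before starting the services it can help to confirm nothing in the copied configs still points at Master-1 where it should not (hits in --etcd-servers are expected, since etcd really does run on 192.168.10.160):

# List remaining references to Master-1's IP; only --etcd-servers entries should show up
grep -rn '192.168.10.160' /opt/kubernetes/cfg/
# Confirm the hostname overrides now use this node's name
grep -n 'hostname' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml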
5. Start the services and enable them at boot (run on Master-2)
systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy
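A small convenience loop (not part of the original procedure) to confirm everything came up before checking the cluster:

for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
  echo -n "$svc: "
  systemctl is-active "$svc"
done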
6. Check the cluster status (run on k8s-master-2)
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
7. Approve the kubelet certificate request (run on Master-1)
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU   12m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU

kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    <none>   34h   v1.18.6
k8s-master2   Ready    <none>   13m   v1.18.6
k8s-node1     Ready    <none>   33h   v1.18.6
k8s-node2     Ready    <none>   33h   v1.18.6
III. Deploy the Nginx load balancer
kube-apiserver high-availability architecture diagram:
- Nginx is a mainstream web server and reverse proxy; here it performs layer-4 (TCP) load balancing across the apiserver instances.
- Keepalived is a mainstream high-availability tool that provides active/standby failover by binding a virtual IP (VIP). In the topology above, Keepalived decides whether to fail over (move the VIP) based on the state of Nginx: if Nginx on the primary node dies, the VIP is automatically bound on the backup node, so the VIP stays reachable and Nginx itself becomes highly available.
1. Install the packages (on both k8s-lb-master and k8s-lb-backup)
yum install epel-release -y
yum install nginx keepalived -y
2. Nginx configuration file (identical on k8s-lb-master and k8s-lb-backup)
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two Master apiservers
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.10.160:6443;   # Master1 APISERVER IP:PORT
        server 192.168.10.166:6443;   # Master2 APISERVER IP:PORT
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen 80 default_server;
        server_name _;

        location / {
        }
    }
}
EOF
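Before starting nginx it is worth syntax-checking the file. The stream block needs nginx's stream module, which the EPEL build loads through the modules include at the top but ships in a separate package (the package name below is an assumption for CentOS 7/EPEL):

# Syntax-check the configuration
nginx -t
# If the test complains about the "stream" directive, install the stream module and retest
yum install -y nginx-mod-stream && nginx -t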
3. Keepalived configuration file (k8s-lb-master)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51    # VRRP route ID, unique per instance
    priority 100            # priority; set 90 on the backup server
    advert_int 1            # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.10.168/24
    }
    track_script {
        check_nginx
    }
}
EOF
- vrrp_script: the script that checks whether nginx is working (its result decides whether to fail over)
- virtual_ipaddress: the virtual IP (VIP)
The nginx health-check script:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
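Once nginx is running (step 5 below), the script can be exercised by hand to confirm it returns the expected exit codes:

# With nginx running the script should exit 0; after stopping nginx it should exit 1
bash /etc/keepalived/check_nginx.sh; echo "exit code: $?"
systemctl stop nginx
bash /etc/keepalived/check_nginx.sh; echo "exit code: $?"
systemctl start nginx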
4. Keepalived configuration file (k8s-lb-backup)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51    # VRRP route ID, unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.168/24
    }
    track_script {
        check_nginx
    }
}
EOF
The same nginx health-check script referenced in the configuration above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived uses the script's exit code (0 = healthy, non-zero = unhealthy) to decide whether to fail over.
5. Start the services and enable them at boot (on both k8s-lb nodes)
systemctl daemon-reload
systemctl start nginx && systemctl enable nginx && systemctl status nginx
systemctl start keepalived && systemctl enable keepalived && systemctl status keepalived
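A quick listening-port check on both LB nodes can catch configuration problems early:

# nginx should be listening on 6443 (the stream proxy) and 80 (the default http server)
ss -lntp | grep -E ':6443|:80'
# keepalived's VRRP processes should be running
ps -ef | grep '[k]eepalived'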
6. Check that keepalived is working
ip a | grep 192
    inet 192.168.10.164/24 brd 192.168.10.255 scope global noprefixroute eth0
    inet 192.168.10.168/24 scope global secondary eth0
As you can see, the virtual IP 192.168.10.168 is bound on the eth0 interface, so keepalived is working correctly.
7. Nginx + Keepalived failover test
Stop Nginx on the primary node and verify that the VIP drifts to the backup server.
On the Nginx master, run: pkill nginx
On the Nginx backup, run ip addr and confirm the VIP is now bound there.
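Because the master LB has the higher priority (100 vs 90) and keepalived preempts by default, the VIP should move back once nginx is restored there; a rough check:

# On k8s-lb-master: bring nginx back and watch the VIP return after a few VRRP intervals
systemctl start nginx
sleep 5
ip addr show eth0 | grep 192.168.10.168 && echo "VIP is back on the master"

# On k8s-lb-backup: the VIP should have been released
ip addr show eth0 | grep 192.168.10.168 || echo "VIP released on the backup"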
8. Test access through the load balancer
From any node in the K8s cluster, curl the K8s version endpoint through the VIP:
curl -k https://192.168.10.168:6443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.6",
  "gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
  "gitTreeState": "clean",
  "buildDate": "2020-05-20T12:43:34Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}
The K8s version information is returned correctly, so the load balancer is working properly. The request path is: curl -> VIP (nginx) -> apiserver
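To see the balancing itself, send a few requests through the VIP and then watch the stream access log shown next; with the default round-robin upstream the forwarded addresses should alternate between the two masters:

# Fire several requests at the VIP from any cluster node
for i in 1 2 3 4; do
  curl -sk https://192.168.10.168:6443/version >/dev/null
done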
The Nginx log also shows which apiserver IP each request was forwarded to:
tail -f /var/log/nginx/k8s-access.log
192.168.10.164 192.168.10.160:6443 - [30/May/2020:11:15:10 +0800] 200 422
192.168.10.164 192.168.10.166:6443 - [30/May/2020:11:15:26 +0800] 200 422
That is not the end yet; the most critical step is still ahead.
IV. Point all Worker Nodes at the LB VIP
Think about it: although we added Master-2 and a load balancer, we scaled out from a single-Master architecture, so every Node component is still connected to Master-1. If they are not switched to the VIP behind the load balancer, the Master remains a single point of failure.
So the next step is to change the Node component configuration files on the machines below, replacing 192.168.10.160 with 192.168.10.168 (the VIP):
| Role | IP |
| ---- | -- |
| k8s-master1 | 192.168.10.160 |
| k8s-master2 | 192.168.10.166 |
| k8s-node1 | 192.168.10.161 |
| k8s-node2 | 192.168.10.162 |
That is, the nodes listed by the kubectl get node command.
Run the following on all of the Worker Nodes above:
sed -i 's#192.168.10.160:6443#192.168.10.168:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
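A quick check that the switch took effect on each node (illustrative commands only):

# No config under /opt/kubernetes/cfg should still reference the old apiserver address
grep -rn '192.168.10.160:6443' /opt/kubernetes/cfg/ || echo "no stale apiserver references"
# kubelet and kube-proxy should now hold connections to the VIP
ss -ant | grep '192.168.10.168:6443'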
Check the node status:
kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    <none>   34h   v1.18.6
k8s-master2   Ready    <none>   18m   v1.18.6
k8s-node1     Ready    <none>   33h   v1.18.6
k8s-node2     Ready    <none>   33h   v1.18.6
At this point, a complete highly available Kubernetes cluster is up and running!
PS: On a public cloud, keepalived is generally not supported; in that case simply use the cloud provider's load balancer product (an internal one is enough, and often free). The architecture is the same as above: just load-balance the multiple Master kube-apiservers directly.