k8s HA Supplement (keepalived + haproxy Configuration)


This deployment guide is based on https://github.com/opsnull/follow-me-install-kubernetes-cluster — please give the author a star.

This document explains how to use keepalived and haproxy to make kube-apiserver highly available:

  • keepalived provides the VIP through which kube-apiserver is exposed;
  • haproxy listens on the VIP and proxies to all kube-apiserver instances behind it, providing health checking and load balancing;

Nodes running keepalived and haproxy are called LB nodes. Because keepalived runs in a one-master, multiple-backup mode, at least two LB nodes are required.

This document reuses the three master nodes as LB nodes. The port haproxy listens on (8443) must differ from kube-apiserver's port (6443) to avoid a conflict.

While running, keepalived periodically checks the state of the local haproxy process. If the haproxy process is detected as abnormal, a new master election is triggered and the VIP floats to the newly elected master node, keeping the VIP highly available.

All components (such as kubectl, apiserver, controller-manager, scheduler, etc.) access the kube-apiserver service through the VIP on port 8443, where haproxy listens.
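The scripts below repeatedly source /opt/k8s/bin/environment.sh, which is not shown in this document. A minimal, hypothetical sketch of the variables those scripts rely on might look like this (the names VIP_IF, MASTER_VIP, and KUBE_APISERVER come from the commands below; the interface and VIP values are the ones used later in this document):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of /opt/k8s/bin/environment.sh -- only the variables
# that the keepalived/haproxy steps below actually reference.

# Network interface that keepalived binds the VIP to
export VIP_IF="eno16777736"

# Virtual IP that fronts all kube-apiserver instances
export MASTER_VIP="192.168.161.160"

# All components reach kube-apiserver through the VIP on haproxy's port 8443
export KUBE_APISERVER="https://${MASTER_VIP}:8443"
```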

 

1. Install the packages

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "yum install -y keepalived haproxy"
  done

 

2. Create and distribute the haproxy configuration file

The haproxy configuration file:

cat > haproxy.cfg <<EOF
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m

listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 192.168.161.150 192.168.161.150:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.161.151 192.168.161.151:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.161.152 192.168.161.152:6443 check inter 2000 fall 2 rise 2 weight 1
EOF
  • haproxy serves its status page on port 10080;
  • haproxy listens on port 8443 on all interfaces; this port must match the port specified by the ${KUBE_APISERVER} environment variable;
  • the server lines list the IPs and ports that all kube-apiserver instances listen on;
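The balance source directive hashes each client's source IP so that a given client is consistently routed to the same kube-apiserver backend. A toy sketch of the idea (the octet sum here is an illustrative stand-in, not haproxy's actual hash function):

```shell
# `balance source` idea: hash the client source IP, take it modulo the
# number of backends, and always send that client to the resulting backend.
client_ip="192.168.161.200"   # hypothetical client address
n_backends=3                  # the three kube-apiserver servers above

# Toy hash: sum of the four octets (haproxy uses a real hash internally)
hash=$(echo "${client_ip}" | awk -F. '{print $1+$2+$3+$4}')
idx=$((hash % n_backends))

echo "client ${client_ip} -> backend index ${idx}"
```

As long as the backend set is unchanged, the same client IP always maps to the same index, which is what gives source balancing its session stickiness.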

Distribute haproxy.cfg to all master nodes:

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    scp haproxy.cfg root@${node_ip}:/etc/haproxy
  done

 

3. Start the haproxy service

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl enable haproxy && systemctl restart haproxy"
  done

 

4. Check the haproxy service status

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status haproxy | grep Active"
  done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

journalctl -u haproxy

 

Check that haproxy is listening on port 8443:

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "netstat -lnpt | grep haproxy"
  done

Make sure the output looks like:

>>> 192.168.161.150
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 7181/haproxy
tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN 7181/haproxy
>>> 192.168.161.151
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 16475/haproxy
tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN 16475/haproxy
>>> 192.168.161.152
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 7212/haproxy
tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN 7212/haproxy

 

5. Create and distribute the keepalived configuration files

keepalived runs in a one-master (MASTER), multiple-backup (BACKUP) mode, so there are two kinds of configuration files. There is a single master configuration file; the number of backup configuration files depends on the number of nodes. For this document, the plan is:

  • master: 192.168.161.150
  • backup: 192.168.161.151, 192.168.161.152

The master configuration file:

source /opt/k8s/bin/environment.sh
cat > keepalived-master.conf <<EOF
global_defs {
    router_id lb-master-105
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}

vrrp_instance VI-kube-master {
    state MASTER
    priority 120
    dont_track_primary
    interface ${VIP_IF}
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        ${MASTER_VIP}
    }
}
EOF
  • the interface holding the VIP (interface ${VIP_IF}) is eno16777736;
  • killall -0 haproxy checks whether the haproxy process on this node is healthy; if it is abnormal, the node's priority is reduced (by 30), triggering a new master election;
  • router_id and virtual_router_id identify the keepalived instances belonging to this HA group; if there are multiple keepalived HA groups, each group must use different values;
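The health check relies on signal 0, which makes the kernel validate that the target process exists without actually delivering a signal, so killall -0 haproxy exits 0 exactly when a haproxy process is running. A minimal demonstration of the same mechanism, probing the current shell's own PID with kill -0 (so it works even without haproxy installed):

```shell
# Signal 0 = "probe only": the kernel checks the target PID exists and is
# signalable, but sends nothing. Probing our own shell always succeeds.
if kill -0 $$; then
  echo "alive"        # exit status 0: process exists
else
  echo "not running"  # non-zero: no such process (or no permission)
fi
```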

The backup configuration file:

source /opt/k8s/bin/environment.sh
cat > keepalived-backup.conf <<EOF
global_defs {
    router_id lb-backup-105
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}

vrrp_instance VI-kube-master {
    state BACKUP
    priority 110
    dont_track_primary
    interface ${VIP_IF}
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        ${MASTER_VIP}
    }
}
EOF
  • the interface holding the VIP (interface ${VIP_IF}) is eno16777736;
  • killall -0 haproxy checks whether the haproxy process on this node is healthy; if it is abnormal, the node's priority is reduced (by 30), triggering a new master election;
  • router_id and virtual_router_id identify the keepalived instances belonging to this HA group; if there are multiple keepalived HA groups, each group must use different values;
  • priority must be lower than the master's value;
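The numbers above are chosen so that a failed health check flips the election: the master starts at priority 120, and a failed check-haproxy applies weight -30, dropping it to 90, which is below the backups' 110, so a backup wins the next VRRP election and takes over the VIP. The arithmetic:

```shell
# Effective VRRP priority after a failed health check
# (values taken from the keepalived configs above)
MASTER_PRIORITY=120
CHECK_WEIGHT=-30
BACKUP_PRIORITY=110

EFFECTIVE=$((MASTER_PRIORITY + CHECK_WEIGHT))   # 120 - 30 = 90
echo "master effective priority: ${EFFECTIVE}"

if [ "${EFFECTIVE}" -lt "${BACKUP_PRIORITY}" ]; then
  echo "backup (${BACKUP_PRIORITY}) now outranks the failed master -> VIP fails over"
fi
```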

Distribute the keepalived configuration files

Distribute the master configuration file:

scp keepalived-master.conf root@192.168.161.150:/etc/keepalived/keepalived.conf

Distribute the backup configuration files:

scp keepalived-backup.conf root@192.168.161.151:/etc/keepalived/keepalived.conf
scp keepalived-backup.conf root@192.168.161.152:/etc/keepalived/keepalived.conf

 

6. Start the keepalived service

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl enable keepalived && systemctl restart keepalived"
  done

 

7. Check the keepalived service

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status keepalived | grep Active"
  done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

journalctl -u keepalived

Check which node holds the VIP, and make sure the VIP can be pinged:

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "/usr/sbin/ip addr show ${VIP_IF}"
    ssh ${node_ip} "ping -c 1 ${MASTER_VIP}"
  done
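Only one node should show the VIP in its ip addr output at any given time. A small sketch of how to spot the holder; the sample inet line below is illustrative of the standard iproute2 output format for an address keepalived has added:

```shell
# On the node currently holding the VIP, `ip addr show ${VIP_IF}` contains an
# extra inet line for the VIP. Illustrative sample of that line:
sample_output='inet 192.168.161.160/32 scope global eno16777736'

MASTER_VIP="192.168.161.160"
if echo "${sample_output}" | grep -q "inet ${MASTER_VIP}/"; then
  echo "this node holds the VIP"
fi
```

The other two nodes' output will lack that line; after a failover, the line disappears from the old master and appears on the newly elected one.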

 

 

8. View the haproxy status page

Open ${MASTER_VIP}:10080/status in a browser to view the haproxy status page:

The VIP configured here is 192.168.161.160.

 

The configured username and password are admin / 123456.

 

