Kubernetes in Practice - Deploying a Highly Available Cluster


The Kubernetes master nodes run the following components:

  • kube-apiserver: the single entry point for all resource operations; provides authentication, authorization, access control, API registration, and discovery
  • kube-scheduler: schedules resources, placing Pods onto appropriate machines according to the configured scheduling policy
  • kube-controller-manager: maintains cluster state, handling fault detection, auto-scaling, rolling updates, and so on
  • etcd: a distributed key-value store built by CoreOS on top of Raft; used for service discovery, shared configuration, and consistency guarantees (e.g. database leader election, distributed locks)

High availability:

  • kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the others stand by.
  • kube-apiserver can run as multiple instances, but the other components need a single, unified address to reach it.
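The leader election above can be observed on a live cluster: in this Kubernetes version the scheduler records the current leader in an annotation on its Endpoints object. A minimal sketch of reading that record (the sample JSON and the `k8s-141_0f3c9ff2` identity are illustrative, not taken from a real cluster):

```shell
# On a live cluster the raw election record can be fetched with:
#   kubectl -n kube-system get endpoints kube-scheduler \
#     -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
# Here we parse a sample payload to show what the record contains.
leader_record='{"holderIdentity":"k8s-141_0f3c9ff2","leaseDurationSeconds":15,"renewTime":"2020-04-14T03:20:00Z"}'

# Extract the current leader's identity from the election record.
holder=$(echo "$leader_record" | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p')
echo "current kube-scheduler leader: $holder"
```

Only one holderIdentity exists at a time; when that process dies, a standby wins the next election and rewrites the record.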

We will use HAProxy + Keepalived to put kube-apiserver behind a virtual IP, giving us both high availability and load balancing. Broken down:

  • Keepalived provides the virtual IP (VIP) through which the apiserver is reached
  • HAProxy listens on the Keepalived VIP
  • Nodes running Keepalived and HAProxy are called LB (load balancer) nodes
  • Keepalived runs in active/standby mode, so at least two LB nodes are required
  • Keepalived periodically checks the local HAProxy process; if HAProxy is unhealthy, it triggers a new leader election and the VIP floats to the newly elected node, keeping the VIP highly available
  • All components (apiserver, scheduler, etc.) reach the kube-apiserver service through the VIP on port 6444, where HAProxy listens
  • The apiserver's default port is 6443; to avoid a conflict we set the HAProxy port to 6444, and all other components go through that port to reach the apiserver
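Putting those pieces together, every component ends up talking to one address. A quick sketch of composing and checking that entry point, using the VIP and port from the list above:

```shell
VIP=192.168.17.100      # Keepalived virtual IP
HAPROXY_PORT=6444       # HAProxy listen port (fronting the apiserver's 6443)

APISERVER_URL="https://${VIP}:${HAPROXY_PORT}"
echo "$APISERVER_URL"

# Once the stack is running, any node should get an "ok" from:
#   curl -k "${APISERVER_URL}/healthz"
```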

 


1) Base installation:

  • Set hostnames and name resolution
  • Install dependencies and common tooling
  • Disable swap, SELinux, and firewalld
  • Tune kernel parameters
  • Load the IPVS kernel modules
  • Install and configure Docker
  • Install and configure Kubernetes

 For the base installation above, see my other post >>> Kubernetes in Practice - Deploying a Cluster with kubeadm (v1.17.4)
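For reference, the "tune kernel parameters" step above usually boils down to a small sysctl fragment like the following (these are the common values for kubeadm nodes; follow the linked post for the full list):

```
# /etc/sysctl.d/kubernetes.conf -- typical settings for a kubeadm node
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
```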

 

2) Deploy HAProxy and Keepalived

  1. Configure haproxy.cfg. Adjust parameters to your needs; the key change is the master IPs at the bottom.

global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
#chroot /usr/share/haproxy
#user haproxy
#group haproxy
daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option redispatch
    timeout connect  5000
    timeout client  50000
    timeout server  50000

frontend stats-front
  bind *:8081
  mode http
  default_backend stats-back

frontend fe_k8s_6444
  bind *:6444
  mode tcp
  timeout client 1h
  log global
  option tcplog
  default_backend be_k8s_6443
  acl is_websocket hdr(Upgrade) -i WebSocket
  acl is_websocket hdr_beg(Host) -i ws

backend stats-back
  mode http
  balance roundrobin
  stats uri /haproxy/stats
  stats auth pxcstats:secret

backend be_k8s_6443
  mode tcp
  timeout queue 1h
  timeout server 1h
  timeout connect 1h
  log global
  balance roundrobin
  server k8s-master01 192.168.17.128:6443
  server k8s-master02 192.168.17.129:6443
  server k8s-master03 192.168.17.130:6443
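The three `server` lines in `be_k8s_6443` are the only part of the file that changes between environments. They can be generated from a host list so nothing is mistyped (a sketch; the host names and IPs are the ones used above):

```shell
# Generate the backend "server" lines for haproxy.cfg from a node list.
declare -A masters=(
  [k8s-master01]=192.168.17.128
  [k8s-master02]=192.168.17.129
  [k8s-master03]=192.168.17.130
)

backend_lines=""
for name in k8s-master01 k8s-master02 k8s-master03; do
  backend_lines+="  server $name ${masters[$name]}:6443"$'\n'
done
printf '%s' "$backend_lines"
```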

 

  2. Configure the startup script; edit the master IPs, the virtual IP, and the virtual interface name.

#!/bin/bash

# IP addresses of the three master nodes (edit these)
MasterIP1=192.168.17.128
MasterIP2=192.168.17.129
MasterIP3=192.168.17.130
# Default kube-apiserver port
MasterPort=6443

# Virtual IP address (edit)
VIRTUAL_IP=192.168.17.100
# Network interface carrying the VIP (edit)
INTERFACE=ens33
# VIP subnet prefix length
NETMASK_BIT=24
# HAProxy service port
CHECK_PORT=6444
# Router ID
RID=10
# Virtual router ID
VRID=160
# Default IPv4 multicast group
MCAST_GROUP=224.0.0.18

echo "docker run haproxy-k8s..."
#CURDIR=$(cd `dirname $0`; pwd)
cp haproxy.cfg /etc/kubernetes/
docker run -dit --restart=always --name haproxy-k8s \
    -p $CHECK_PORT:$CHECK_PORT \
    -e MasterIP1=$MasterIP1 \
    -e MasterIP2=$MasterIP2 \
    -e MasterIP3=$MasterIP3 \
    -e MasterPort=$MasterPort \
    -v /etc/kubernetes/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg \
    wise2c/haproxy-k8s
echo

echo "docker run keepalived-k8s..."
docker run -dit --restart=always --name=keepalived-k8s \
    --net=host --cap-add=NET_ADMIN \
    -e INTERFACE=$INTERFACE \
    -e VIRTUAL_IP=$VIRTUAL_IP \
    -e NETMASK_BIT=$NETMASK_BIT \
    -e CHECK_PORT=$CHECK_PORT \
    -e RID=$RID \
    -e VRID=$VRID \
    -e MCAST_GROUP=$MCAST_GROUP \
    wise2c/keepalived-k8s
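The script hands the prefix length (`NETMASK_BIT=24`) straight to the Keepalived image. If you ever need the dotted-quad form of that mask elsewhere, it can be derived like this (a standalone helper, not part of the original script):

```shell
# Convert a CIDR prefix length (e.g. 24) to a dotted-quad netmask.
prefix_to_netmask() {
  local prefix=$1 mask=() i
  for i in 0 1 2 3; do
    # Each octet takes up to 8 bits of the remaining prefix.
    local bits=$(( prefix >= 8 ? 8 : (prefix > 0 ? prefix : 0) ))
    mask+=( $(( 256 - 2 ** (8 - bits) )) )
    prefix=$(( prefix - 8 ))
  done
  local IFS=.
  echo "${mask[*]}"
}

prefix_to_netmask 24   # 255.255.255.0
prefix_to_netmask 16   # 255.255.0.0
```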

 

  3. Upload both files to each master node, then run the script.

[root@k8s-141 ~]# docker ps | grep k8s
d53e93ef4cd4        wise2c/keepalived-k8s            "/usr/bin/keepalived…"   7 hours ago         Up 3 minutes                                 keepalived-k8s
028d18b75c3e        wise2c/haproxy-k8s               "/docker-entrypoint.…"   7 hours ago         Up 3 minutes        0.0.0.0:6444->6444/tcp   haproxy-k8s

 

3) Initialize the first master node

# Generate the default init configuration and edit it; the key additions are the HA settings
kubeadm config print init-defaults > kubeadm-config.yaml
vim kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Master node IP address
  advertiseAddress: 192.168.17.128
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  # Master node hostname
  name: k8s-128
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
# Default certificate directory
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
# HA virtual IP address
controlPlaneEndpoint: "192.168.17.100:6444"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# Image repository
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Kubernetes version
kubernetesVersion: v1.17.4
networking:
  dnsDomain: cluster.local
  # Pod subnet (must match the network plugin and must not overlap)
  podSubnet: 10.11.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# Enable IPVS mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
# Initialize the first master node
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

# As the init log instructs, copy the generated admin.conf to ~/.kube/config.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

4) Join the remaining masters and the workers, then deploy networking

# As instructed by the init log, run kubeadm join to add the remaining master nodes.
kubeadm join 192.168.17.100:6444 --token abcdef.0123456789abcdef \
   --discovery-token-ca-cert-hash sha256:56d53268517... \
   --experimental-control-plane --certificate-key c4d1525b6cce4....

# As instructed by the init log, run kubeadm join to add the worker nodes.
kubeadm join 192.168.17.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:260796226d…………

# Apply the prepared network manifest
kubectl apply -f kube-flannel.yaml

 

5) Check cluster health

[root@k8s-141 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
scheduler            Healthy   ok
[root@k8s-141 ~]# kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
k8s-132   NotReady   <none>   52m   v1.17.4
k8s-141   Ready      master   75m   v1.17.4
k8s-142   Ready      master   72m   v1.17.4
k8s-143   Ready      master   72m   v1.17.4
[root@k8s-141 ~]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-2x9lx           1/1     Running   1          28m
coredns-9d85f5447-tl6qh           1/1     Running   1          28m
etcd-k8s-141                      1/1     Running   2          75m
etcd-k8s-142                      1/1     Running   1          72m
etcd-k8s-143                      1/1     Running   1          73m
kube-apiserver-k8s-141            1/1     Running   2          75m
kube-apiserver-k8s-142            1/1     Running   1          71m
kube-apiserver-k8s-143            1/1     Running   1          73m
kube-controller-manager-k8s-141   1/1     Running   3          75m
kube-controller-manager-k8s-142   1/1     Running   2          71m
kube-controller-manager-k8s-143   1/1     Running   2          73m
kube-flannel-ds-amd64-5h7xw       1/1     Running   0          52m
kube-flannel-ds-amd64-dpj5x       1/1     Running   3          58m
kube-flannel-ds-amd64-fnwwl       1/1     Running   1          58m
kube-flannel-ds-amd64-xlfrg       1/1     Running   2          58m
kube-proxy-2nh48                  1/1     Running   1          72m
kube-proxy-dvhps                  1/1     Running   1          73m
kube-proxy-grrmr                  1/1     Running   2          75m
kube-proxy-zjtlt                  1/1     Running   0          52m
kube-scheduler-k8s-141            1/1     Running   3          75m
kube-scheduler-k8s-142            1/1     Running   1          71m
kube-scheduler-k8s-143            1/1     Running   2          73m

 Checking HA status

[root@k8s-141 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.17.141:6443          Masq    1      0          0
  -> 192.168.17.142:6443          Masq    1      0          0
  -> 192.168.17.143:6443          Masq    1      1          0
TCP  10.96.0.10:53 rr
  -> 10.11.1.6:53                 Masq    1      0          0
  -> 10.11.2.8:53                 Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.11.1.6:9153               Masq    1      0          0
  -> 10.11.2.8:9153               Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.11.1.6:53                 Masq    1      0          126
  -> 10.11.2.8:53                 Masq    1      0          80
[root@k8s-141 ~]# kubectl -n kube-system get cm kubeadm-config -oyaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: 192.168.17.100:6444
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: v1.17.4
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.11.0.0/16
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
  ClusterStatus: |
    apiEndpoints:
      k8s-141:
        advertiseAddress: 192.168.17.141
        bindPort: 6443
      k8s-142:
        advertiseAddress: 192.168.17.142
        bindPort: 6443
      k8s-143:
        advertiseAddress: 192.168.17.143
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2020-04-14T03:01:16Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "824"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: d303c1d1-5664-4fe1-9feb-d3dcc701a1e0
[root@k8s-141 ~]# kubectl -n kube-system exec etcd-k8s-141 -- etcdctl \
> --endpoints=https://192.168.17.141:2379 \
> --cacert=/etc/kubernetes/pki/etcd/ca.crt \
> --cert=/etc/kubernetes/pki/etcd/server.crt \
> --key=/etc/kubernetes/pki/etcd/server.key endpoint health
https://192.168.17.141:2379 is healthy: successfully committed proposal: took = 7.50323ms
[root@k8s-141 ~]# ip a |grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.17.141/24 brd 192.168.17.255 scope global noprefixroute dynamic ens33
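In the `ipvsadm -ln` output above, the kubernetes Service (10.96.0.1:443) round-robins across all three apiservers, which is what confirms the control plane is load-balanced. A quick way to count the real servers behind a given virtual server from such output (parsing a captured sample here; on a live node you would pipe `ipvsadm -ln` in directly):

```shell
# Count real-server entries behind the 10.96.0.1:443 virtual server
# in saved `ipvsadm -ln` output (sample trimmed from the session above).
sample='TCP  10.96.0.1:443 rr
  -> 192.168.17.141:6443          Masq    1      0          0
  -> 192.168.17.142:6443          Masq    1      0          0
  -> 192.168.17.143:6443          Masq    1      1          0
TCP  10.96.0.10:53 rr
  -> 10.11.1.6:53                 Masq    1      0          0
  -> 10.11.2.8:53                 Masq    1      0          0'

backends=$(echo "$sample" | awk '
  /^TCP|^UDP/ { cur = $2 }                      # remember the current virtual server
  /->/ && cur == "10.96.0.1:443" { n++ }        # count its real servers
  END { print n }')
echo "apiserver backends: $backends"   # expect 3 healthy masters
```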

 

Reference >>> Kubernetes High Availability Cluster

 

Author: Leozhanggg

Source: https://www.cnblogs.com/leozhanggg/p/12697237.html

This article is copyrighted by the author and cnblogs. Reposting is welcome, but without the author's consent this notice must be retained and a clearly visible link to the original article must be provided on the page; otherwise the author reserves the right to pursue legal liability.

 

