Deploying a Highly Available Kubernetes Cluster with kubeadm


kubeadm is the official deployment tool for Kubernetes, designed to lower the barrier to entry and make cluster deployment more convenient. At the same time, more and more of the official documentation assumes a containerized Kubernetes environment, so deploying Kubernetes in containers has become the trend.
This article covers: achieving a highly available Kubernetes cluster with a kubeadm-based deployment.

Master deployment

  1. Build an etcd cluster on the three master nodes
  2. Initialize the masters with kubeadm using the VIP

1. Environment preparation

Node            Address
master1, etcd1  10.8.104.16
master2, etcd2  10.8.37.18
master3, etcd3  10.8.125.29
node1           10.8.113.73

OS: CentOS 7.2
VIP: 10.8.78.31/16

2. Deploy the etcd cluster

Deploy a distributed etcd cluster on the three master nodes; the details of the etcd installation itself are out of scope here.
etcd cluster endpoints: http://10.8.125.29:2379,http://10.8.104.16:2379,http://10.8.37.18:2379
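
For completeness, a minimal static-bootstrap configuration for one member is sketched below, assuming etcd is run with plain HTTP as in this setup and that /var/lib/etcd is used as the data directory; the etcd2/etcd3 entries follow the node table above.

# etcd1 (10.8.104.16); repeat on the other two nodes with their own --name and IPs
etcd --name etcd1 \
  --data-dir /var/lib/etcd \
  --listen-peer-urls http://10.8.104.16:2380 \
  --listen-client-urls http://10.8.104.16:2379,http://127.0.0.1:2379 \
  --initial-advertise-peer-urls http://10.8.104.16:2380 \
  --advertise-client-urls http://10.8.104.16:2379 \
  --initial-cluster etcd1=http://10.8.104.16:2380,etcd2=http://10.8.37.18:2380,etcd3=http://10.8.125.29:2380 \
  --initial-cluster-state new
# check cluster health from any node
etcdctl --endpoints http://10.8.104.16:2379,http://10.8.37.18:2379,http://10.8.125.29:2379 cluster-health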

3. Build the RPM packages

yum install docker git -y
systemctl start docker
cd /data
git clone https://github.com/kubernetes/release.git
cd /data/release/rpm
./docker-build.sh

4. Install kubeadm

cd /data/release/rpm/output/x86_64
yum localinstall *.rpm -y
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

5. Initialize master1

# add the VIP
ip addr add 10.8.78.31/16 dev eth0 
kubeadm init --api-advertise-addresses=10.8.78.31 --external-etcd-endpoints=http://10.8.125.29:2379,http://10.8.104.16:2379,http://10.8.37.18:2379

--api-advertise-addresses accepts multiple IPs, but using more than one breaks kubeadm join, so only the single VIP is advertised for external access.
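
A quick sanity check once the init finishes (an Unauthorized body from the unauthenticated curl still proves the VIP reaches the API server, as seen later in the failover test):

# the apiserver should respond on the VIP
curl -k https://10.8.78.31:6443
# component health as seen from master1
kubectl get componentstatuses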

6. Deploy the other masters

  1. Install kubeadm as on master1
  2. Copy /etc/kubernetes/ from master1 and start kubelet
scp -r 10.8.104.16:/etc/kubernetes/* /etc/kubernetes/
yum install docker -y
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

kube-controller-manager and kube-scheduler use --leader-elect as a distributed lock, so all three master nodes can run them at the same time.
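
With leader election enabled, only one controller-manager and one scheduler are active at a time. In this Kubernetes generation the lock is recorded as an annotation on an Endpoints object in kube-system, so the current leader can be checked roughly like this (annotation name per the upstream convention of that era; worth verifying on your version):

kubectl get endpoints kube-controller-manager kube-scheduler -n kube-system -o yaml
# look at the control-plane.alpha.kubernetes.io/leader annotation;
# its holderIdentity field names the master currently holding the lock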


Component optimization

Deploy the core components as DaemonSets so that they run on every master and remain highly available.

1. DNS component

Option 1

# 1. Run kube-dns on all masters
kubectl scale deploy/kube-dns --replicas=3 -n kube-system

Option 2

# 1. Delete the bundled DNS components
kubectl delete deploy/kube-dns svc/kube-dns -n kube-system
# 2. Download the latest DNS manifests
cd /data
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-controller.yaml.base
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kubedns-svc.yaml.base
# 3. Adjust the configuration
mv kubedns-controller.yaml.base kubedns-daemonsets.yaml
mv kubedns-svc.yaml.base kubedns-svc.yaml
sed -i 's/__PILLAR__DNS__SERVER__/10.96.0.10/g' kubedns-svc.yaml
sed -i 's/__PILLAR__DNS__DOMAIN__/cluster.local/g' kubedns-daemonsets.yaml

Change the kind from Deployment to DaemonSet and add a nodeSelector for the master nodes:

      nodeSelector:
        kubeadm.alpha.kubernetes.io/role: master
kubectl apply -f kubedns-svc.yaml -f kubedns-daemonsets.yaml
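
For reference, the top of the converted manifest ends up roughly like the sketch below (extensions/v1beta1 matches the DaemonSet API of this Kubernetes generation; the kube-dns containers and volumes from kubedns-controller.yaml.base stay exactly as downloaded):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      nodeSelector:
        kubeadm.alpha.kubernetes.io/role: master
      # containers and volumes unchanged from the downloaded manifest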

2. Network component

For stability and compatibility, Canal is used as the network component.

wget https://raw.githubusercontent.com/tigera/canal/master/k8s-install/kubeadm/canal.yaml
# 1. Delete the etcd deployment section from canal.yaml
# 2. Point `etcd_endpoints` at the existing etcd cluster:
etcd_endpoints: "http://10.8.125.29:2379,http://10.8.104.16:2379,http://10.8.37.18:2379"
kubectl apply -f canal.yaml
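
In canal.yaml the endpoint setting lives in Canal's ConfigMap; after the edit the fragment looks roughly like this (the ConfigMap name follows the upstream manifest of that time and may differ in newer revisions, so verify it against the downloaded file):

kind: ConfigMap
apiVersion: v1
metadata:
  name: canal-config
  namespace: kube-system
data:
  etcd_endpoints: "http://10.8.125.29:2379,http://10.8.104.16:2379,http://10.8.37.18:2379"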

Once Canal is up, the DNS pods will return to a healthy state.

3. kube-discovery

kube-discovery is responsible for distributing the cluster discovery credentials; if this component is unhealthy, new nodes cannot join the cluster with kubeadm join.

Option 1

kubectl scale deploy/kube-discovery --replicas=3 -n kube-system

Option 2

# 1. Export the kube-discovery manifest
kubectl get deploy/kube-discovery -n kube-system -o yaml > /data/kube-discovery.yaml
# 2. Change the kind from Deployment to DaemonSet and add the master nodeSelector
# 3. Delete the bundled kube-discovery
kubectl delete deploy/kube-discovery -n kube-system
# 4. Deploy kube-discovery as a DaemonSet
kubectl apply -f kube-discovery.yaml

When converting the Deployment to a DaemonSet, if kubectl apply reports errors, trim the manifest according to the error messages; mainly this means removing the status section and the leftover replicas and strategy fields.
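
A minimal sketch of what the exported manifest reduces to after the conversion; everything under containers and volumes is kept from the export, and the image tag depends on the kubeadm version in use:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-discovery
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-discovery    # keep whatever pod labels the export already has
    spec:
      nodeSelector:
        kubeadm.alpha.kubernetes.io/role: master
      # containers and volumes exactly as in the exported Deployment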

4. Label the master nodes

Give all master nodes the master role label so that the DaemonSet-based components are automatically scheduled onto every master node.

kubectl label node 10-8-125-29 kubeadm.alpha.kubernetes.io/role=master
kubectl label node 10-8-37-18 kubeadm.alpha.kubernetes.io/role=master
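
Only two nodes are labelled above, presumably because kubeadm init already labelled master1. Either way, a quick check that all three masters carry the label and that the DaemonSet pods landed on them:

kubectl get nodes -l kubeadm.alpha.kubernetes.io/role=master
kubectl get pods -n kube-system -o wide | grep -E 'kube-dns|kube-discovery|canal'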

VIP failover

So far the three master nodes run independently and do not interfere with one another. kube-apiserver is the core entry point, and keepalived can be used to make it highly available; kubeadm join does not yet support a load-balanced endpoint.

1. keepalived

yum install -y keepalived

/etc/keepalived/keepalived.conf

global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://10.8.104.16:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 61
    priority 115
    advert_int 1
    mcast_src_ip 10.8.104.16
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        #10.8.104.16
        10.8.37.18
        10.8.125.29
    }
    virtual_ipaddress {
        10.8.78.31/16
    }
    track_script {
        CheckK8sMaster
    }

}
systemctl enable keepalived
systemctl restart keepalived

keepalived runs in a master/backup/backup topology. Copy the configuration to the other master nodes and make the following changes (a sketch of the changed lines on one backup node follows this list):

  1. curl -k https://10.8.104.16:6443 checks whether the local kube-apiserver is running; point it at the local node's address
  2. state MASTER becomes state BACKUP on the other two nodes
  3. priority 115 is lowered step by step on each successive node
  4. adjust the corresponding IPs (mcast_src_ip and unicast_peer)
  5. systemctl enable keepalived;systemctl restart keepalived
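
As an illustration, the lines that differ on the second master (10.8.37.18) would look roughly like this; the priority value of 110 is an assumption, any value below 115 and above the third node's works:

# /etc/keepalived/keepalived.conf on 10.8.37.18 -- only the changed lines
vrrp_script CheckK8sMaster {
    script "curl -k https://10.8.37.18:6443"
    # interval/timeout/fall/rise unchanged
}

vrrp_instance VI_1 {
    state BACKUP
    priority 110
    mcast_src_ip 10.8.37.18
    unicast_peer {
        10.8.104.16
        10.8.125.29
    }
    # the remaining fields (interface, virtual_router_id, authentication,
    # virtual_ipaddress, track_script) stay the same as on the first master
}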

Verification

1. Join a node

cd /data/release/rpm/output/x86_64
yum localinstall *.rpm -y
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
kubeadm join --token=eb6a6d.d3e65ed6e64a5bc6 10.8.78.31
kubectl get node
NAME          STATUS         AGE
10-8-104-16   Ready,master   9h
10-8-113-73   Ready          8h
10-8-125-29   Ready,master   9h
10-8-37-18    Ready,master   9h

2. Verify the impact of a master outage

# check which node currently holds the VIP
ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1454 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:bf:a6:d4 brd ff:ff:ff:ff:ff:ff
    inet 10.8.37.18/16 brd 10.8.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.8.78.31/16 scope global secondary eth0
       valid_lft forever preferred_lft forever

Adjust the node's DNS servers:
/etc/resolv.conf

search default.svc.cluster.local svc.cluster.local cluster.local
options timeout:1 attempts:1 ndots:5
nameserver 10.96.0.10
nameserver 10.8.255.1
nameserver 10.8.255.2
nameserver 114.114.114.114

Open three terminal windows on the node and run one of the following commands in each.

# check the network impact of the VIP failover
ping 10.8.78.31
# check the impact on kube-apiserver
while true; do  sleep 1; curl -k https://10.8.78.31:6443; done
# check the impact on DNS resolution
while true; do  sleep 1; nslookup kubernetes.default.svc.cluster.local; done

Shut down the master machine 10.8.37.18:

64 bytes from 10.8.78.31: icmp_seq=61 ttl=64 time=0.192 ms
From 10.8.104.16 icmp_seq=62 Time to live exceeded
64 bytes from 10.8.78.31: icmp_seq=64 ttl=64 time=0.164 ms
64 bytes from 10.8.78.31: icmp_seq=65 ttl=64 time=0.139 ms
Unauthorized
curl: (7) Failed connect to 10.8.78.31:6443; No route to host
curl: (7) Failed connect to 10.8.78.31:6443; No route to host
Unauthorized
Unauthorized
** server can't find kubernetes.default.svc.cluster.local: NXDOMAIN

Server:         10.8.255.1
Address:        10.8.255.1#53

** server can't find kubernetes.default.svc.cluster.local: NXDOMAIN

Server:         10.96.0.10
Address:        10.96.0.10#53

As a rough estimate, kube-apiserver access is disrupted for about 5 seconds and DNS resolution for about 10 seconds.

[root@10-8-104-16 data]# kubectl get node
NAME          STATUS            AGE
10-8-104-16   Ready,master      9h
10-8-113-73   Ready             9h
10-8-125-29   Ready,master      9h
10-8-37-18    NotReady,master   9h
[root@10-8-104-16 data]# kubectl get all -n kube-system
NAME                                     READY     STATUS     RESTARTS   AGE
po/calico-policy-controller-fxjzw        1/1       Running    0          4h
po/canal-node-2jcz7                      3/3       Running    3          9h
po/canal-node-3gnk3                      3/3       Running    3          9h
po/canal-node-5s2br                      3/3       Running    0          9h
po/canal-node-l1c9w                      3/3       NodeLost   6          9h
po/dummy-2088944543-7hmh5                1/1       Running    0          3h
po/kube-apiserver-10-8-104-16            1/1       Running    3          3h
po/kube-apiserver-10-8-125-29            1/1       Running    2          4h
po/kube-apiserver-10-8-37-18             1/1       Unknown    4          3h
po/kube-controller-manager-10-8-104-16   1/1       Running    6          3h
po/kube-controller-manager-10-8-125-29   1/1       Running    6          4h
po/kube-controller-manager-10-8-37-18    1/1       Unknown    5          3h
po/kube-discovery-4w20c                  1/1       NodeLost   2          8h
po/kube-discovery-4wcrw                  1/1       Running    1          8h
po/kube-discovery-tnfs4                  1/1       Running    1          8h
po/kube-dns-8pf48                        4/4       Running    4          9h
po/kube-dns-cq4m5                        4/4       NodeLost   8          9h
po/kube-dns-w8nq1                        4/4       Running    4          9h
po/kube-proxy-4bpt5                      1/1       Running    1          9h
po/kube-proxy-blxhl                      1/1       Running    0          9h
po/kube-proxy-dc9dz                      1/1       NodeLost   2          9h
po/kube-proxy-z3q0n                      1/1       Running    1          9h
po/kube-scheduler-10-8-104-16            1/1       Running    8          3h
po/kube-scheduler-10-8-125-29            1/1       Running    7          4h
po/kube-scheduler-10-8-37-18             1/1       Unknown    7          3h

NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns   10.96.0.10   <none>        53/UDP,53/TCP   9h

NAME                   DESIRED   SUCCESSFUL   AGE
jobs/configure-canal   1         1            9h

NAME                          DESIRED   CURRENT   READY     AGE
rs/calico-policy-controller   1         1         1         9h
rs/dummy-2088944543           1         1         1         9h


