Reference documents:
- Deploying a kubernetes cluster (1): https://github.com/opsnull/follow-me-install-kubernetes-cluster
- Deploying a kubernetes cluster (2): https://blog.frognew.com/2017/04/install-ha-kubernetes-1.6-cluster.html
- Building High-Availability Clusters: https://kubernetes.io/docs/admin/high-availability/building/
- High-availability principles: http://blog.csdn.net/horsefoot/article/details/52247277
- TLS bootstrapping: https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/
- TLS bootstrap walkthrough: https://jimmysong.io/posts/kubernetes-tls-bootstrapping/
- kubernetes: https://github.com/kubernetes/kubernetes
- etcd: https://github.com/coreos/etcd
- flanneld: https://github.com/coreos/flannel
- cfssl: https://github.com/cloudflare/cfssl
- Manage TLS Certificates in a Cluster: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
I. Environment
1. Components
| Component | Version | Remark |
| --- | --- | --- |
| CentOS | 7.4 | |
| kubernetes | v1.9.2 | |
| etcd | v3.3.0 | |
| flanneld | v0.10.0 | vxlan network |
| docker | 1.12.6 | already deployed in advance |
| cfssl | 1.2.0 | CA certificate and key |
2. Topology (logical)
- kube-master runs the service components: kube-apiserver, kube-controller-manager, kube-scheduler;
- kube-node runs the service components: kubelet, kube-proxy;
- the datastore is an etcd cluster;
- to save resources, the kube-master, kube-node and etcd roles are co-located on the same hosts;
- the front end uses haproxy + keepalived for high availability; clients and node components reach kube-apiserver through the VIP (see the sketch after this list).
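To make the role of the VIP concrete, here is a minimal, hypothetical sketch of how a kubeconfig would later point at the HA front end instead of a single apiserver. The VIP 172.30.200.10 and port 6443 come from the plan below; the certificate path and kubeconfig file name are placeholder assumptions and are created in later steps.

# Hypothetical sketch: point a kubeconfig at the VIP-fronted kube-apiserver
# (the CA path and kubeconfig name are placeholders, not part of this step)
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://172.30.200.10:6443 \
  --kubeconfig=example.kubeconfig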
3. Overall plan

| Host | Role | IP | Service | Remark |
| --- | --- | --- | --- | --- |
| kubenode1 | kube-master, kube-node, etcd-node | 172.30.200.21 | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etcd | |
| kubenode2 | kube-master, kube-node, etcd-node | 172.30.200.22 | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etcd | |
| kubenode3 | kube-master, kube-node, etcd-node | 172.30.200.23 | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etcd | |
| ipvs01 | ha | 172.30.200.11, vip: 172.30.200.10 | haproxy, keepalived | |
| ipvs02 | ha | 172.30.200.12, vip: 172.30.200.10 | haproxy, keepalived | |
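If the hosts cannot resolve each other's names through DNS, a minimal /etc/hosts mapping based on the table above can be added on every node. This is an optional convenience and not required by the steps that follow:

# Optional: hostname-to-IP mapping taken from the plan above, appended on each node
cat >> /etc/hosts <<EOF
172.30.200.21 kubenode1
172.30.200.22 kubenode2
172.30.200.23 kubenode3
172.30.200.11 ipvs01
172.30.200.12 ipvs02
EOF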
II. Deploy the front-end HA
Taking ipvs01 as the example; make the corresponding adjustments on ipvs02.
1. Configure haproxy
# For installation and configuration details, see and adapt: http://www.cnblogs.com/netonline/p/7593762.html
# Only the frontend and backend parts of the configuration are given here:

#frontend, name is user-defined
frontend kube-api-https_frontend
    # front-end listening port; binding all addresses ("bind *:port") is recommended,
    # otherwise the service becomes unreachable after the VIP fails over to another machine;
    # the port matches kube-apiserver
    bind 0.0.0.0:6443
    # pure port mapping/forwarding
    mode tcp
    # hand requests over to the default_backend group
    default_backend kube-api-https_backend

#backend configuration
backend kube-api-https_backend
    # load-balancing algorithm: roundrobin, i.e. weighted round-robin scheduling,
    # recommended when server performance is fairly even
    balance roundrobin
    mode tcp
    # persistent connections based on source address
    stick-table type ip size 200k expire 30m
    stick on src
    # back-end server definitions: maxconn 1024 is the server's maximum number of connections,
    # cookie 1 means the serverid is 1, weight is the weight (default 1, maximum 256, 0 excludes the server from load balancing),
    # check inter 1500 is the health-check interval, rise 2 marks the server up after 2 successes, fall 3 marks it down after 3 failures
    server kubenode1 172.30.200.21:6443 maxconn 1024 cookie 1 weight 3 check inter 1500 rise 2 fall 3
    server kubenode2 172.30.200.22:6443 maxconn 1024 cookie 2 weight 3 check inter 1500 rise 2 fall 3
    server kubenode3 172.30.200.23:6443 maxconn 1024 cookie 3 weight 3 check inter 1500 rise 2 fall 3
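Before (re)starting haproxy, the configuration syntax can be checked with haproxy itself. The config file path below is an assumption; adjust it to wherever haproxy.cfg lives in your installation from the referenced post.

# Check the configuration syntax only (-c), then restart haproxy; adjust the path to your haproxy.cfg
[root@ipvs01 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
# e.g. restart via the service script used by your installation
# [root@ipvs01 ~]# service haproxy restart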
2. Configure keepalived
keepalived.conf
# For installation and configuration details, see and adapt: http://www.cnblogs.com/netonline/p/7598744.html
# The configuration is as follows:
[root@ipvs01 ]# cat /usr/local/keepalived/etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost.local
   }
   notification_email_from root@localhost.local
   smtp_server 172.30.200.11
   smtp_connect_timeout 30
   router_id HAproxy_DEVEL
}

vrrp_script chk_haproxy {
    script "/usr/local/keepalived/etc/chk_haproxy.sh"
    interval 2
    weight 2
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 987654
    }
    virtual_ipaddress {
        172.30.200.10
    }
    track_script {
        chk_haproxy
    }
}
Heartbeat check

[root@ipvs01 ]# cat /usr/local/keepalived/etc/chk_haproxy.sh
#!/bin/bash
# 2018-02-05 v0.1
# exit 0 if an haproxy process exists (node stays healthy), exit 1 otherwise
if [ $(ps -C haproxy --no-header | wc -l) -ne 0 ]; then
    exit 0
else
    # /etc/rc.d/init.d/keepalived restart
    exit 1
fi
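After keepalived is running on both ipvs nodes, a quick way to verify which node holds the VIP, and that failover works, is sketched below; the interface name eth0 matches the keepalived.conf above.

# The MASTER node should show the VIP on eth0; the BACKUP node should not
[root@ipvs01 ~]# ip addr show eth0 | grep 172.30.200.10
# Simulate a failure on the current MASTER: stop haproxy, then check that the VIP moves to the peer
# [root@ipvs01 ~]# service haproxy stop
# [root@ipvs02 ~]# ip addr show eth0 | grep 172.30.200.10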
3. Configure iptables

# Open the required ports; tcp 6443, which kube-apiserver is mapped to on haproxy, is needed here; the other ports may stay closed
[root@ipvs01 ~]# vim /etc/sysconfig/iptables
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 514 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 1080 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT
# VRRP advertisements use multicast
-A INPUT -m pkttype --pkt-type multicast -j ACCEPT
[root@ipvs01 ~]# service iptables restart
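After restarting iptables, a quick sanity check confirms that tcp 6443 and VRRP multicast traffic are accepted:

# Confirm the relevant rules are loaded
[root@ipvs01 ~]# iptables -nL INPUT | grep -E '6443|multicast'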
III. Set cluster environment variables
Taking kubenode1 as the example; make minor adjustments on kubenode2 & kubenode3.
1. Cluster environment variables
# The following environment variables are used heavily when configuring the startup options of the various components; variables are more convenient to reference.
# Place the file under /etc/profile.d/ so the variables are loaded at boot.
[root@kubenode1 ~]# cd /etc/profile.d/
[root@kubenode1 profile.d]# touch kubernetes_variable.sh
[root@kubenode1 profile.d]# vim kubernetes_variable.sh

# Service CIDR: not routable before deployment; reachable inside the cluster via IP:Port after deployment;
# use network ranges not already occupied by the hosts for the Service and Pod CIDRs
export SERVICE_CIDR="169.169.0.0/16"
# Pod CIDR (Cluster CIDR): not routable before deployment; routable after deployment (guaranteed by flanneld)
export CLUSTER_CIDR="10.254.0.0/16"
# NodePort range
export NODE_PORT_RANGE="10000-60000"
# etcd cluster endpoint list
export ETCD_ENDPOINTS="https://172.30.200.21:2379,https://172.30.200.22:2379,https://172.30.200.23:2379"
# flanneld network configuration prefix in etcd
export FLANNEL_ETCD_PREFIX="/kubernetes/network"
# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="169.169.0.1"
# cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="169.169.0.11"
# cluster DNS domain; note the trailing "."
export CLUSTER_DNS_DOMAIN="cluster.local."
# token used for TLS Bootstrapping; can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
export BOOTSTRAP_TOKEN="962283d223c76bd7b6f806936de64a23"
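The variables are picked up at the next login; to use them in the current shell and sanity-check a couple of values right away (a quick check, not part of the original steps):

[root@kubenode1 profile.d]# source /etc/profile.d/kubernetes_variable.sh
[root@kubenode1 profile.d]# echo $ETCD_ENDPOINTS
[root@kubenode1 profile.d]# echo $BOOTSTRAP_TOKEN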
2. PATH environment variable for the kubernetes-related binaries

# The related services are all placed under the corresponding /usr/local/xxx directories;
# after setting the environment variable it is loaded at boot
[root@kubenode1 ~]# cd /etc/profile.d/
[root@kubenode1 profile.d]# touch kubernetes_path.sh
[root@kubenode1 profile.d]# vim kubernetes_path.sh
export PATH=$PATH:/usr/local/cfssl:/usr/local/etcd:/usr/local/kubernetes/bin:/usr/local/flannel

# Reload /etc/profile (or reboot the server) for the environment variable to take effect
[root@kubenode1 profile.d]# source /etc/profile
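Once the etcd, kubernetes, flannel and cfssl binaries have actually been unpacked into the directories above (done in later steps), PATH resolution can be verified, for example:

# These will only resolve after the binaries are placed under the /usr/local/... directories
[root@kubenode1 ~]# which etcd kubectl flanneld cfssl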