Kubernetes High-Availability Cluster Deployment
Deployment architecture:

Master components:
- kube-apiserver
The Kubernetes API server is the unified entry point to the cluster and the coordinator for all other components. It exposes its interface as an HTTP API; every create, read, update, delete, and watch operation on cluster objects goes through the API server, which then persists the state to Etcd.
- kube-controller-manager
Handles the routine background tasks of the cluster. Each resource type has its own controller, and the controller-manager is responsible for running these controllers.
- kube-scheduler
Selects a Node for each newly created Pod according to its scheduling algorithm.
Node components:
- kubelet
The kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on that machine: creating containers, mounting volumes for Pods, downloading secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.
- kube-proxy
Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.
- docker
Runs the containers.
Third-party services:
- etcd
A distributed key-value store used to hold cluster state, such as Pod and Service object data.
The diagram below shows the Kubernetes architecture and the communication protocols between the components.

1. Environment Planning
| Role         | IP           | Components                                      |
| K8S-MASTER01 | 10.247.74.48 | kube-apiserver kubelet flannel Nginx keepalived |
| K8S-MASTER02 | 10.247.74.49 | kube-apiserver kubelet flannel Nginx keepalived |
| K8S-MASTER03 | 10.247.74.50 | kube-apiserver kubelet flannel Nginx keepalived |
| K8S-NODE01   | 10.247.74.53 | kubelet                                         |
| K8S-NODE02   | 10.247.74.54 | kubelet                                         |
| K8S-NODE03   | 10.247.74.55 | kubelet                                         |
| K8S-NODE04   | 10.247.74.56 | kubelet                                         |
| K8S-VIP      | 10.247.74.51 |                                                 |
Software versions
| Software   | Version                    |
| Linux OS   | Red Hat Enterprise 7.6_x64 |
| Kubernetes | 1.14.1                     |
| Docker     | 18.06.3-ce                 |
| Etcd       | 3.0                        |
| Nginx      | 1.16.0                     |
1.1 System preparation (all nodes)
# Set hostnames, disable SELinux and the swap partition, and configure time synchronization.
cat <<EOF >> /etc/hosts
10.247.74.48 TWDSCPA203V
10.247.74.49 TWDSCPA204V
10.247.74.50 TWDSCPA205V
10.247.74.53 TWDSCPA206V
10.247.74.54 TWDSCPA207V
10.247.74.55 TWDSCPA208V
10.247.74.56 TWDSCPA209V
10.247.74.51 K8S-VIP
EOF
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
swapoff -a
sed -i 's/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g' /etc/fstab
yum install ntp -y
systemctl enable ntpd
systemctl start ntpd
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Disable firewalld (if it must stay enabled, see: https://www.cnblogs.com/Dev0ps/p/11401530.html)
systemctl stop firewalld
systemctl disable firewalld
# Set kernel parameters
echo "* soft nofile 32768" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf
echo "* soft nproc 32768" >> /etc/security/limits.conf
echo "* hard nproc 65535" >> /etc/security/limits.conf
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
sysctl --system
# Load the IPVS modules
Run the following on every Kubernetes node (on kernels 4.19 and later, replace nf_conntrack_ipv4 with nf_conntrack):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Run the script
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Install the IPVS management tools
yum install ipset ipvsadm -y
reboot
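After the reboot it is easy to miss a step, so a small verification script can help. This is an illustrative helper, not part of the original procedure; it assumes RHEL 7 paths and only prints warnings rather than failing hard:

```shell
#!/bin/sh
# sanity-check.sh -- post-reboot verification of the preparation steps above.
fail=0

# Swap must be off, or kubelet refuses to start with default settings.
if [ -f /proc/swaps ] && [ "$(awk 'NR>1' /proc/swaps | wc -l)" -gt 0 ]; then
    echo "WARN: swap is still enabled"
    fail=1
fi

# SELinux should be disabled (or at least permissive).
if command -v getenforce >/dev/null 2>&1 && [ "$(getenforce)" = "Enforcing" ]; then
    echo "WARN: SELinux is still enforcing"
    fail=1
fi

# The IPVS modules should have been loaded by ipvs.modules.
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
    if [ -f /proc/modules ] && ! grep -q "^$mod " /proc/modules; then
        echo "WARN: kernel module $mod not loaded"
        fail=1
    fi
done

echo "sanity check finished (warnings found: $fail)"
```

Run it on every node before moving on to the Docker installation.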
1.2 Install Docker (all nodes)
# Step 1: Install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: Install Docker
yum update -y && yum install -y docker-ce-18.06.3.ce
# Step 3: Configure the Docker registry mirror and image storage path
mkdir -p /mnt/sscp/data/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": [ "https://registry.docker-cn.com"],
"insecure-registries":["172.31.182.143"],
"graph": "/mnt/sscp/data/docker"
}
EOF
# Step 4: Restart and enable the Docker service
systemctl restart docker
systemctl enable docker
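A malformed daemon.json prevents the Docker daemon from starting at all, so it is worth validating the JSON before restarting. A self-contained sketch, assuming python3's stdlib json module is available (it writes the same configuration to a temp file so the check can be tried anywhere):

```shell
# Write the same configuration to a temp file and validate it before
# copying it to /etc/docker/daemon.json on a real host.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["172.31.182.143"],
  "graph": "/mnt/sscp/data/docker"
}
EOF
if python3 -m json.tool "$conf" >/dev/null 2>&1; then
    echo "daemon.json: valid JSON"
else
    echo "daemon.json: INVALID"
fi
```

On a real node, point the check at /etc/docker/daemon.json instead of the temp file.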
1.3 Deploy Nginx
1. Install the dependencies
yum install -y gcc gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel
2. Download the source package from the official site
wget https://nginx.org/download/nginx-1.16.0.tar.gz
3. Extract and build
tar zxvf nginx-1.16.0.tar.gz
cd nginx-1.16.0
./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_realip_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-stream --with-stream_ssl_module
make && make install
4. Configure the kube-apiserver reverse proxy (add a stream block to nginx.conf):
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.247.74.48:6443;
server 10.247.74.49:6443;
server 10.247.74.50:6443;
}
server {
listen 0.0.0.0:8443;
proxy_pass k8s-apiserver;
}
}
5. Start the nginx service
/usr/local/nginx/sbin/nginx
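Starting nginx by hand will not survive a reboot. A minimal systemd unit can manage it instead; this is a sketch, and the paths assume the --prefix=/usr/local/nginx build above (nginx writes its pid file under logs/ by default with that prefix):

```ini
# /etc/systemd/system/nginx.service
[Unit]
Description=nginx TCP load balancer for kube-apiserver
After=network.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
```

Then run systemctl daemon-reload followed by systemctl enable --now nginx.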
1.4 Deploy keepalived
1. Download:
wget https://www.keepalived.org/software/keepalived-2.0.16.tar.gz
2. Extract and install
tar xf keepalived-2.0.16.tar.gz
cd keepalived-2.0.16
./configure --prefix=/usr/local/keepalived
make && make install
cp /root/keepalived-2.0.16/keepalived/etc/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
3. Add the configuration file
vim /etc/keepalived/keepalived.conf
MASTER:
vrrp_instance VI_1 {
state MASTER
interface ens32
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.247.74.51/24
}
}
BACKUP:
vrrp_instance VI_1 {
state BACKUP
interface ens32
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.247.74.51/24
}
}
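As written, keepalived only fails over when a whole node goes down; if nginx alone crashes, the VIP stays on a broken load balancer. A common addition (a sketch, not part of the original setup) is a vrrp_script that tracks the nginx process and lowers the node's priority when it disappears:

```
vrrp_script check_nginx {
    script "/usr/bin/killall -0 nginx"   # exits non-zero when no nginx process exists
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    # ...existing settings as above, plus:
    track_script {
        check_nginx
    }
}
```

With weight -20, a failed check drops the MASTER's priority (100) below the BACKUP's (90), so the VIP moves.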
1.5 Deploy kubeadm (all nodes)
# The official repository is unreachable from mainland China, so use the Aliyun yum mirror instead:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubelet, and kubectl, pinned here to version v1.14.1:
yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1
systemctl enable kubelet && systemctl start kubelet
1.6 Deploy the Master nodes
Initialization references:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1
Create the initialization configuration file. A default one can be generated with:
kubeadm config print init-defaults > kubeadm-config.yaml
Then adjust it to the actual deployment environment:
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.247.74.48
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: cn-hongkong.i-j6caps6av1mtyxyofmrw
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.247.74.51:8443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
Configuration notes:
- controlPlaneEndpoint: the VIP address and the Nginx listen port 8443.
- imageRepository: k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror registry.aliyuncs.com/google_containers is used instead.
- podSubnet: must match the network plugin deployed later; flannel is used here, so it is set to 10.244.0.0/16.
- mode: ipvs: the KubeProxyConfiguration appended at the end enables IPVS mode for kube-proxy.
Pre-pull the required images through the Aliyun mirror before initializing:
kubeadm config images pull --config kubeadm-config.yaml
Initialize the Master01 node
The tee command saves the initialization log to kubeadm-init.log for later use (optional).
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
The command uses the configuration file created above; the --experimental-upload-certs flag uploads the certificates so they are distributed automatically when the other control-plane nodes join.
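Because the init log was saved with tee, the join commands can be recovered from it later. A self-contained sketch using a shortened sample of kubeadm's output (the log content below is illustrative):

```shell
# Simulate a saved init log, then pull the join command out with grep.
log=$(mktemp)
cat > "$log" <<'EOF'
Your Kubernetes control-plane has initialized successfully!

You can now join any number of machines by running the following on each node:

kubeadm join 10.247.74.51:8443 --token ocb5tz.pv252zn76rl4l3f6 \
    --discovery-token-ca-cert-hash sha256:141bbeb79bf58d81d551f33ace207c7b19bee1cfd7790112ce26a6a300eee5a2
EOF
# -A1 also prints the continuation line carrying the CA cert hash.
grep -A1 '^kubeadm join' "$log"
```

On a real master, run the grep against the actual kubeadm-init.log instead.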
1.7 Add the other Master nodes
Run the following command:
kubeadm join 10.247.74.51:8443 --token ocb5tz.pv252zn76rl4l3f6 \
    --discovery-token-ca-cert-hash sha256:141bbeb79bf58d81d551f33ace207c7b19bee1cfd7790112ce26a6a300eee5a2 \
    --experimental-control-plane --certificate-key 20366c9cdbfdc1435a6f6d616d988d027f2785e34e2df9383f784cf61bab9826
Set up the kubectl context:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
1.8 Add the worker Nodes
Run the following command:
kubeadm join 10.247.74.51:8443 --token ocb5tz.pv252zn76rl4l3f6 \
--discovery-token-ca-cert-hash sha256:141bbeb79bf58d81d551f33ace207c7b19bee1cfd7790112ce26a6a300eee5a2
1.9 Deploy flannel and check cluster status
1. Deploy flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
2. Check cluster status
# kubectl get node

1.10 Follow-up
The token expires after 24h by default; if new nodes need to join later, regenerate the join command on a master:
kubeadm token create --print-join-command
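If only the token was regenerated and the CA hash was not recorded, the hash can be recomputed from ca.crt with openssl. A self-contained demo: it generates a throwaway CA (standing in for /etc/kubernetes/pki/ca.crt) and derives the sha256 hash of its public key the same way kubeadm does:

```shell
# Generate a throwaway CA certificate for demonstration purposes.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Derive the discovery-token-ca-cert-hash: sha256 over the DER-encoded
# public key of the CA certificate.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

On a master, point the pipeline at /etc/kubernetes/pki/ca.crt to reproduce the hash used in the join commands above.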
