Deploying a Highly Available Kubernetes v1.14.1 Cluster with kubeadm


Chapter 1 High Availability Overview

Kubernetes high-availability deployment references:
https://kubernetes.io/docs/setup/independent/high-availability/
https://github.com/kubernetes-sigs/kubespray
https://github.com/wise2c-devops/breeze
https://github.com/cookeem/kubeadm-ha

1.1 Choosing a Topology

When configuring a highly available (HA) Kubernetes cluster, there are two possible etcd topologies:

  • Stacked: the etcd members run on the same control-plane (master) nodes as the cluster components
  • External etcd: the etcd members run on nodes separate from the masters

1.1.1 Stacked etcd topology

A stacked HA cluster is a topology in which the distributed data store provided by etcd is stacked on the same nodes that run the master components managed by kubeadm.
Each master node runs an instance of kube-apiserver, kube-scheduler and kube-controller-manager. kube-apiserver is exposed to the worker nodes through a load balancer.
Each master node creates a local etcd member, and that etcd member communicates only with the kube-apiserver on the same node. The same applies to the local kube-controller-manager and kube-scheduler instances.
This topology couples the masters and the etcd members on the same nodes. It is simpler to set up than a cluster with external etcd nodes and easier to manage for replication.
However, a stacked cluster carries the risk of coupled failures: if one node goes down, both an etcd member and a master instance are lost, and redundancy suffers. You can mitigate this risk by adding more master nodes.
You should therefore run at least three stacked master nodes for an HA cluster.
This is the default topology in kubeadm. A local etcd member is created automatically on master nodes when using kubeadm init and kubeadm join --experimental-control-plane.

 

1.1.2 External etcd topology

An HA cluster with external etcd is a topology in which the distributed data store provided by etcd runs outside the cluster formed by the nodes hosting the master components.
As in the stacked etcd topology, each master node runs an instance of kube-apiserver, kube-scheduler and kube-controller-manager, and kube-apiserver is exposed to the worker nodes through a load balancer. The etcd members, however, run on separate hosts, and each etcd host communicates with the kube-apiserver of every master node.
This topology decouples the masters from the etcd members. It provides an HA setup in which losing a master instance or an etcd member has less impact and does not degrade cluster redundancy as much as the stacked topology does.
However, it requires twice as many hosts as the stacked HA topology: an HA cluster with this topology needs at least three hosts for the masters and three hosts for the etcd members.

1.2 Deployment Requirements

There are two different approaches to deploying a highly available Kubernetes cluster with kubeadm:

  • With stacked master nodes. This approach requires less infrastructure; the etcd members and master nodes are co-located.
  • With an external etcd cluster. This approach requires more infrastructure; the master nodes and etcd members are separated.

Before proceeding, carefully consider which approach best fits the needs of your applications and environment.

Deployment requirements:

  • At least 3 master nodes
  • At least 3 worker nodes
  • Full network connectivity between all nodes (public or private network)
  • sudo privileges on all machines
  • SSH access from one machine to every node in the system
  • kubeadm and kubelet installed on all nodes; kubectl is optional
  • For an external etcd cluster, 3 additional nodes for the etcd members

1.3 Load Balancing


Before deploying the cluster, a load balancer must first be created for kube-apiserver.
Note: there are many ways to configure a load balancer; choose the scheme that fits your cluster requirements. In a cloud environment, place the master nodes behind a TCP-forwarding load balancer. The load balancer distributes traffic to all healthy master nodes in its target list. The apiserver health check is a TCP check on the port kube-apiserver listens on (default: 6443).
The load balancer must be able to reach all master nodes on the apiserver port, and it must allow incoming traffic on its own listening port. Also make sure the load balancer address always matches kubeadm's ControlPlaneEndpoint address.
haproxy/nginx + keepalived is one possible load-balancing option; in a public cloud you can use the provider's load-balancer product directly.
During deployment, first add the first master node to the load balancer and test the connection with:

# nc -v LOAD_BALANCER_IP PORT

Because the apiserver is not running yet, a connection refused error is expected. A timeout, however, means the load balancer cannot reach the master node; if that happens, reconfigure the load balancer so it can communicate with the master node. Then add the remaining master nodes to the load balancer's target group.

Chapter 2 Deploying the Cluster


This walkthrough uses kubeadm to deploy a highly available Kubernetes v1.14.1 cluster with 3 master nodes and 1 worker node. The steps follow the official documentation, and the load-balancing layer is implemented with haproxy + keepalived running as containers. Kubernetes components are pinned to v1.14.1; the remaining components use their latest versions at the time of writing.

2.1 Basic Configuration

Node information:

Hostname      IP address     Role    OS         CPU/MEM  Disk  NIC
k8s-master01  192.168.92.10  master  CentOS7.6  2C2G     60G   x1
k8s-master02  192.168.92.11  master  CentOS7.6  2C2G     60G   x1
k8s-master03  192.168.92.12  master  CentOS7.6  2C2G     60G   x1
k8s-node01    192.168.92.13  node    CentOS7.6  2C2G     60G   x1
K8S VIP       192.168.92.30  -       -          -        -     -

Run the following steps on all nodes.

# Set the hostname (run the matching command on each respective node)
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
# Add host entries to /etc/hosts
cat >> /etc/hosts << EOF
192.168.92.10 k8s-master01
192.168.92.11 k8s-master02
192.168.92.12 k8s-master03
192.168.92.13 k8s-node01
EOF

# Start firewalld and allow all traffic (trusted default zone)
systemctl start firewalld && systemctl enable firewalld
firewall-cmd --set-default-zone=trusted
firewall-cmd --complete-reload
# Disable SELinux
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab

Configure time synchronization

Time is synchronized with chrony, which CentOS 7 installs by default. Here the time sources are changed so that all nodes sync against public NTP servers:

# Install chrony:
yum install -y chrony
cp /etc/chrony.conf{,.bak}
# Comment out the default NTP servers
sed -i 's/^server/#&/' /etc/chrony.conf
# Add upstream public NTP servers
cat >> /etc/chrony.conf << EOF
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
EOF

# Set the timezone
timedatectl set-timezone Asia/Shanghai
# Enable chronyd at boot and restart it:
systemctl enable chronyd && systemctl restart chronyd

# Verify: check the current time and that at least one source line is marked with *
timedatectl && chronyc sources

Load the IPVS kernel modules

Run the following script on all Kubernetes nodes (on kernels newer than 4.19, replace nf_conntrack_ipv4 with nf_conntrack):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Run the script
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Install the related management tools
yum install ipset ipvsadm -y
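
If the /etc/sysconfig/modules script is not executed automatically after a reboot in your environment, an alternative sketch using systemd-modules-load (the systemd-native mechanism on CentOS 7) declares the same modules for every boot:

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# Load them immediately without rebooting (use nf_conntrack instead on kernels >= 4.19)
systemctl restart systemd-modules-load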

Configure kernel parameters

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
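
Note: the net.bridge.bridge-nf-call-* keys only exist while the br_netfilter kernel module is loaded, so sysctl --system may report them as unknown on a fresh host. A minimal fix, as a sketch:

# Load br_netfilter now and on every boot, then re-apply the sysctl settings
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system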

 

2.2 Installing Docker

CRI installation reference: https://kubernetes.io/docs/setup/cri/
To run containers in Pods, Kubernetes uses a container runtime. The following container runtimes are available:

  • Docker
  • CRI-O
  • Containerd
  • Other CRI runtimes: frakti

2.2.1 Cgroup driver overview

When systemd is chosen as the init system of a Linux distribution, the init process creates and consumes a root control group (cgroup) and acts as the cgroup manager. systemd is tightly integrated with cgroups and allocates a cgroup to every process. The container runtime and the kubelet can be configured to use cgroupfs instead, but running cgroupfs alongside systemd means there are two different cgroup managers.
Control groups are used to constrain the resources allocated to processes. A single cgroup manager simplifies the view of what is being allocated and, by default, gives a more consistent view of available and in-use resources.
With two managers there are two views of these resources. Cases have been seen in the field where nodes configured to use cgroupfs for the kubelet and Docker, and systemd for the rest of the processes on the node, became unstable under resource pressure.
Changing the settings so that both the container runtime and the kubelet use systemd as the cgroup driver stabilizes the system. Note the native.cgroupdriver=systemd option in the Docker configuration below.

2.2.2 Install and configure Docker

Run the following steps on all nodes.

# Install prerequisite packages
yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker repository (switched to the Aliyun yum mirror here)
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install docker-ce
yum update -y && yum install -y docker-ce

## Create the /etc/docker directory
mkdir /etc/docker

# Configure the daemon
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
# Note: because pulling images can be slow from China, an Aliyun registry mirror is appended to the configuration.

mkdir -p /etc/systemd/system/docker.service.d

# Reload systemd and restart/enable the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
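
To confirm that Docker is actually using the systemd cgroup driver described in section 2.2.1, a quick check (expected output: "Cgroup Driver: systemd"):

docker info 2>/dev/null | grep -i "cgroup driver"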

 

2.3 Installing the Load Balancer


The Kubernetes master nodes run the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager


kube-scheduler and kube-controller-manager can run in cluster mode: leader election produces one working instance while the others stay blocked.
kube-apiserver can run as multiple instances, but the other components need a single, highly available address to reach it. This deployment uses keepalived + haproxy to provide a VIP for kube-apiserver with high availability and load balancing.
haproxy + keepalived configure the VIP, giving the API a single access address with load balancing: keepalived provides the VIP through which kube-apiserver is exposed, and haproxy listens on the VIP, forwards to all kube-apiserver instances on the back end, and provides health checking and load balancing.
The nodes running keepalived and haproxy are called LB nodes. Because keepalived runs in an active/standby model, at least two LB nodes are required.
This deployment reuses the three master machines and runs haproxy and keepalived on all 3 master nodes for higher availability. The port haproxy listens on (6444) must differ from kube-apiserver's port 6443 to avoid a conflict.
keepalived periodically checks the state of the local haproxy process; if haproxy is detected as unhealthy, a new master election is triggered and the VIP floats to the newly elected node, keeping the VIP highly available.
All components (kubectl, apiserver, controller-manager, scheduler, etc.) access the kube-apiserver service through the VIP on haproxy's port 6444.
Load-balancing architecture diagram:

2.4 Running the HA Containers


The container images used here come from Wise2C's open-source breeze project; for usage details see:
https://github.com/wise2c-devops
Alternatives: for haproxy you can also use the official Docker Hub image; keepalived has no official image, so build your own or use a third-party image from Docker Hub. This deployment uses the breeze images throughout.
haproxy is deployed as a container on the 3 master nodes, exposing port 6444 and load balancing to the three apiservers on port 6443; the haproxy configuration is identical on all three nodes.

Run the following steps on the master01 node.

2.4.1 Create the haproxy startup script

Edit the start-haproxy.sh file and change the Kubernetes master node IP addresses to the values used by your cluster (the master port defaults to 6443 and does not need to be changed):

mkdir -p /data/lb
cat > /data/lb/start-haproxy.sh << "EOF"
#!/bin/bash
MasterIP1=192.168.92.10
MasterIP2=192.168.92.11
MasterIP3=192.168.92.12
MasterPort=6443

docker run -d --restart=always --name HAProxy-K8S -p 6444:6444 \
-e MasterIP1=$MasterIP1 \
-e MasterIP2=$MasterIP2 \
-e MasterIP3=$MasterIP3 \
-e MasterPort=$MasterPort \
wise2c/haproxy-k8s
EOF

2.4.2 Create the keepalived startup script

Edit the start-keepalived.sh file and set the virtual IP address VIRTUAL_IP, the network interface name INTERFACE, the subnet mask length NETMASK_BIT, the router ID RID and the virtual router ID VRID to the values used by your cluster. (CHECK_PORT, 6444, normally does not need to be changed; it is the port exposed by HAProxy, which forwards internally to the Kubernetes master servers on port 6443.)

cat > /data/lb/start-keepalived.sh << "EOF"
#!/bin/bash
VIRTUAL_IP=192.168.92.30
INTERFACE=ens33
NETMASK_BIT=24
CHECK_PORT=6444
RID=10
VRID=160
MCAST_GROUP=224.0.0.18

docker run -itd --restart=always --name=Keepalived-K8S \
--net=host --cap-add=NET_ADMIN \
-e VIRTUAL_IP=$VIRTUAL_IP \
-e INTERFACE=$INTERFACE \
-e CHECK_PORT=$CHECK_PORT \
-e RID=$RID \
-e VRID=$VRID \
-e NETMASK_BIT=$NETMASK_BIT \
-e MCAST_GROUP=$MCAST_GROUP \
wise2c/keepalived-k8s
EOF

Copy the startup scripts to the other two master nodes:

[root@k8s-master02 ~]# mkdir -p /data/lb
[root@k8s-master03 ~]# mkdir -p /data/lb
[root@k8s-master01 ~]# scp /data/lb/start-haproxy.sh /data/lb/start-keepalived.sh 192.168.92.11:/data/lb/
[root@k8s-master01 ~]# scp /data/lb/start-haproxy.sh /data/lb/start-keepalived.sh 192.168.92.12:/data/lb/

Run the scripts on each of the 3 master nodes to start the haproxy and keepalived containers:

sh /data/lb/start-haproxy.sh && sh /data/lb/start-keepalived.sh

2.4.3 Verify HA status

Check that the containers are running:

[root@k8s-master01 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c1d1901a7201 wise2c/haproxy-k8s "/docker-entrypoint.…" 5 days ago Up 3 hours 0.0.0.0:6444->6444/tcp HAProxy-K8S
2f02a9fde0be wise2c/keepalived-k8s "/usr/bin/keepalived…" 5 days ago Up 3 hours Keepalived-K8S

Check that the VIP 192.168.92.30 is bound to the network interface:

[root@k8s-master01 ~]# ip a | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.92.10/24 brd 192.168.92.255 scope global noprefixroute ens33
inet 192.168.92.30/24 scope global secondary ens33

Check that port 6444 is listening:

[root@k8s-master01 ~]# netstat -tnlp | grep 6444 
tcp6 0 0 :::6444 :::* LISTEN 11695/docker-proxy 

The keepalived configuration defines a vrrp_script that uses nc to probe port 6444, which haproxy listens on. If the check fails, the local haproxy process is considered unhealthy and the VIP floats to another node.

So whether the local keepalived container or the haproxy container fails, the VIP moves to another node. You can test this by stopping either container on the node that currently holds the VIP.

[root@k8s-master01 ~]# docker stop HAProxy-K8S 
HAProxy-K8S

# The VIP has floated to the k8s-master02 node
[root@k8s-master02 ~]# ip a | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.92.11/24 brd 192.168.92.255 scope global noprefixroute ens33
inet 192.168.92.30/24 scope global secondary ens33

You can also run the nc check locally to see the result:

[root@k8s-master02 ~]# yum install -y nc
[root@k8s-master02 ~]# nc -v -w 2 -z 127.0.0.1 6444 2>&1 | grep 'Connected to' | grep 6444
Ncat: Connected to 127.0.0.1:6444.

The haproxy and keepalived configuration files can be found via the Dockerfiles in the GitHub repositories, or inspected inside the containers with docker exec -it xxx sh. The paths inside the containers are listed below, followed by an illustrative sketch of both files:

  • /etc/keepalived/keepalived.conf
  • /usr/local/etc/haproxy/haproxy.cfg
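
For reference, a minimal sketch of what these two files contain is shown below. This is not the exact configuration shipped in the breeze images, only the essential structure, using the IPs, interface and VRID from this deployment:

# haproxy.cfg (TCP mode: listen on 6444, balance to the three apiservers on 6443)
defaults
    mode    tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s
frontend k8s-api
    bind *:6444
    default_backend k8s-masters
backend k8s-masters
    balance roundrobin
    server k8s-master01 192.168.92.10:6443 check
    server k8s-master02 192.168.92.11:6443 check
    server k8s-master03 192.168.92.12:6443 check

# keepalived.conf (probe haproxy's port 6444; drop priority and release the VIP on failure)
vrrp_script chk_haproxy {
    script "nc -z -w 2 127.0.0.1 6444"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 160
    priority 100
    advert_int 1
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.92.30/24
    }
}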

Once the load-balancing layer is configured, the Kubernetes cluster can be deployed.

2.5 Installing kubeadm

Run the following steps on all nodes.

# The official repository is not reachable from China, so the Aliyun yum mirror is used instead:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet and kubectl; note that this installs the latest version available in the repo, v1.14.1 at the time of writing:
yum install -y kubeadm kubelet kubectl
systemctl enable kubelet && systemctl start kubelet
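
The repository will track newer releases over time. If you need to stay on v1.14.1 after newer packages are published, the versions can be pinned explicitly (the el7 package naming below is an assumption; verify with yum list --showduplicates kubeadm):

yum install -y kubeadm-1.14.1-0 kubelet-1.14.1-0 kubectl-1.14.1-0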

 

2.6 Master Initialization Configuration


Initialization references:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1

Create the initialization configuration file.
A default configuration can be generated with:

kubeadm config print init-defaults > kubeadm-config.yaml

Adjust it for your actual deployment environment:

 

[root@k8s-master01 kubernetes]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.92.10
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.92.30:6444"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs


Configuration notes:

  • controlPlaneEndpoint: the VIP address and the haproxy listening port 6444
  • imageRepository: the Google registry k8s.gcr.io is not reachable from China, so the Aliyun mirror registry.aliyuncs.com/google_containers is used instead
  • podSubnet: the address range must match the network plugin deployed later; flannel is used here, so it is set to 10.244.0.0/16
  • mode: ipvs: the trailing KubeProxyConfiguration block enables IPVS mode

After the cluster is up, the effective configuration can be inspected with:

kubectl -n kube-system get cm kubeadm-config -oyaml
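
Optionally, the required images can be pre-pulled on each master before running init, reusing the same configuration file (and therefore the Aliyun imageRepository set above):

kubeadm config images pull --config kubeadm-config.yaml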

2.7 Initializing the master01 Node

tee is appended here to save the init log to kubeadm-init.log for later use (optional).

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log

 

The command points at the configuration file created above; the --experimental-upload-certs flag lets certificates be distributed automatically when additional control-plane nodes join later.
Example init output:

[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.92.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.92.10 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.92.10 192.168.92.30]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.020444 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
11def25d624a2150b57715e21b0c393695bc6a70d932e472f75d24f747eb657e
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 192.168.92.30:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:7b232b343577bd5fac312996b9fffb3c88f8f8bb39f46bf865ac9f9f52982b82 \
--experimental-control-plane --certificate-key 11def25d624a2150b57715e21b0c393695bc6a70d932e472f75d24f747eb657e

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.92.30:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:7b232b343577bd5fac312996b9fffb3c88f8f8bb39f46bf865ac9f9f52982b82

 

kubeadm init performs the following main steps:

  • [init]: initialize with the specified version
  • [preflight]: run pre-flight checks and pull the required Docker images
  • [kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml; without this file the kubelet cannot start, which is why the kubelet fails to run before initialization
  • [certificates]: generate the certificates used by Kubernetes and store them in /etc/kubernetes/pki
  • [kubeconfig]: generate the kubeconfig files in /etc/kubernetes, which the components use to communicate with each other
  • [control-plane]: install the master components from the YAML files in /etc/kubernetes/manifests
  • [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml
  • [wait-control-plane]: wait for the master components deployed as static Pods to start
  • [apiclient]: check the health of the master components
  • [uploadconfig]: upload the configuration used
  • [kubelet]: configure the kubelet via a ConfigMap
  • [patchnode]: record CNI information on the Node via annotations
  • [mark-control-plane]: label the current node with the master role and the NoSchedule taint, so Pods are not scheduled onto master nodes by default
  • [bootstrap-token]: generate the bootstrap token; record it, as it is needed later when adding nodes with kubeadm join
  • [addons]: install the CoreDNS and kube-proxy add-ons

Note: whether initialization failed or the cluster is already fully built, you can run kubeadm reset to clean up the cluster or a node and then re-run kubeadm init or kubeadm join.
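
A typical cleanup sequence looks like the following sketch; kubeadm reset only undoes what kubeadm itself set up, so the CNI directory, kubeconfig and the iptables/IPVS rules are cleared separately:

kubeadm reset -f
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear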

2.8 Configuring kubectl

On both master and worker nodes, kubectl only works after the following configuration.
As root, run:

cat << EOF >> ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source ~/.bashrc

As a regular user, run the following (taken from the init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Once the cluster is fully set up, you can apply this configuration on all master and worker nodes so kubectl works everywhere. For worker nodes, copy /etc/kubernetes/admin.conf from any master node to the local machine, for example as shown below.
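
A minimal sketch for this deployment (run as root on k8s-node01, pulling the kubeconfig from master01):

mkdir -p /etc/kubernetes
scp 192.168.92.10:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc && source ~/.bashrc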

Check the current status:

[root@k8s-master01 ~]# kubectl get nodes 
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 81s v1.14.1
[root@k8s-master01 ~]# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-8686dcc4fd-cbrc5 0/1 Pending 0 64s
coredns-8686dcc4fd-wqpwr 0/1 Pending 0 64s
etcd-k8s-master01 1/1 Running 0 16s
kube-apiserver-k8s-master01 1/1 Running 0 13s
kube-controller-manager-k8s-master01 1/1 Running 0 25s
kube-proxy-4vwbb 1/1 Running 0 65s
kube-scheduler-k8s-master01 1/1 Running 0 4s
[root@k8s-master01 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok 
controller-manager Healthy ok 
etcd-0 Healthy {"health":"true"} 

Because no network plugin is installed yet, coredns is Pending and the node is NotReady.

2.9 Installing a Network Plugin

Kubernetes supports several network solutions. The installation of the commonly used flannel and calico plugins is described briefly here; deploy one of them.

Run the following steps on the master01 node only.

2.9.1 Install the flannel network plugin

The image referenced in kube-flannel.yml is pulled from the CoreOS registry and may fail to download; a mirrored image from Docker Hub can be substituted. Note also that the network range defined in the yml file is 10.244.0.0/16.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
cat kube-flannel.yml | grep image
cat kube-flannel.yml | grep 10.244
sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#willdockerhub/flannel:v0.11.0-amd64#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml

Check the node and Pod status again; everything is now Running:

[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 9m8s v1.14.1
[root@k8s-master01 ~]# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-8686dcc4fd-cbrc5 1/1 Running 0 8m53s
coredns-8686dcc4fd-wqpwr 1/1 Running 0 8m53s
etcd-k8s-master01 1/1 Running 0 8m5s
kube-apiserver-k8s-master01 1/1 Running 0 8m2s
kube-controller-manager-k8s-master01 1/1 Running 0 8m14s
kube-flannel-ds-amd64-vtppf 1/1 Running 0 115s
kube-proxy-4vwbb 1/1 Running 0 8m54s
kube-scheduler-k8s-master01 1/1 Running 0 7m53s

2.9.2 Install the calico network plugin (optional)

Installation reference: https://docs.projectcalico.org/v3.6/getting-started/kubernetes/

kubectl apply -f \
https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Note that the default CIDR in this yaml file is 192.168.0.0/16; it must match the podSubnet configured in kubeadm-config.yaml at initialization time. If they differ, download the yaml, modify it, then apply it.
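
For this deployment the podSubnet is 10.244.0.0/16, so a sketch of the adjustment (the 192.168.0.0/16 string being replaced is the CALICO_IPV4POOL_CIDR default in the manifest) would be:

wget https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's#192.168.0.0/16#10.244.0.0/16#g' calico.yaml
kubectl apply -f calico.yaml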

2.10 Joining the Other Master Nodes

Take the command from the init output or from kubeadm-init.log:

kubeadm join 192.168.92.30:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:c0a1021e5d63f509a0153724270985cdc22e46dc76e8e7b84d1fbb5e83566ea8 \
--experimental-control-plane --certificate-key 52f64a834454c3043fe7a0940f928611b6970205459fa19cb1193b33a288e7cc
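
If the bootstrap token (24h TTL) or the uploaded certificates (2h TTL, see the note in the init output) have expired by the time you join additional nodes, they can be regenerated on master01:

# Print a fresh worker join command with a new token
kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --experimental-upload-certs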

Join k8s-master02 and k8s-master03 to the cluster in turn. Example:

[root@k8s-master02 ~]# kubeadm join 192.168.92.30:6444 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:7b232b343577bd5fac312996b9fffb3c88f8f8bb39f46bf865ac9f9f52982b82 \
> --experimental-control-plane --certificate-key 11def25d624a2150b57715e21b0c393695bc6a70d932e472f75d24f747eb657e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master02 localhost] and IPs [192.168.92.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master02 localhost] and IPs [192.168.92.11 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.92.11 192.168.92.30]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

 

2.11 Joining the Worker Node


Take the command from kubeadm-init.log:

kubeadm join 192.168.92.30:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:c0a1021e5d63f509a0153724270985cdc22e46dc76e8e7b84d1fbb5e83566ea8

Example:

[root@k8s-node01 ~]# kubeadm join 192.168.92.30:6444 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:7b232b343577bd5fac312996b9fffb3c88f8f8bb39f46bf865ac9f9f52982b82 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2.12 Verifying Cluster Status

 

Check the nodes:

[root@k8s-master01 ~]# kubectl get nodes -o wide 
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master01 Ready master 10h v1.14.1 192.168.92.10 <none> CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://18.9.5
k8s-master02 Ready master 10h v1.14.1 192.168.92.11 <none> CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://18.9.5
k8s-master03 Ready master 10h v1.14.1 192.168.92.12 <none> CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://18.9.5
k8s-node01 Ready <none> 10h v1.14.1 192.168.92.13 <none> CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://18.9.5

Check the pods:

[root@k8s-master03 ~]# kubectl -n kube-system get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-8686dcc4fd-6ttgv 1/1 Running 1 22m 10.244.2.3 k8s-master03 <none> <none>
coredns-8686dcc4fd-dzvsx 1/1 Running 0 22m 10.244.3.3 k8s-node01 <none> <none>
etcd-k8s-master01 1/1 Running 1 6m23s 192.168.92.10 k8s-master01 <none> <none>
etcd-k8s-master02 1/1 Running 0 37m 192.168.92.11 k8s-master02 <none> <none>
etcd-k8s-master03 1/1 Running 1 36m 192.168.92.12 k8s-master03 <none> <none>
kube-apiserver-k8s-master01 1/1 Running 1 48m 192.168.92.10 k8s-master01 <none> <none>
kube-apiserver-k8s-master02 1/1 Running 0 37m 192.168.92.11 k8s-master02 <none> <none>
kube-apiserver-k8s-master03 1/1 Running 2 36m 192.168.92.12 k8s-master03 <none> <none>
kube-controller-manager-k8s-master01 1/1 Running 2 48m 192.168.92.10 k8s-master01 <none> <none>
kube-controller-manager-k8s-master02 1/1 Running 1 37m 192.168.92.11 k8s-master02 <none> <none>
kube-controller-manager-k8s-master03 1/1 Running 1 35m 192.168.92.12 k8s-master03 <none> <none>
kube-flannel-ds-amd64-d86ct 1/1 Running 0 37m 192.168.92.11 k8s-master02 <none> <none>
kube-flannel-ds-amd64-l8clz 1/1 Running 0 36m 192.168.92.13 k8s-node01 <none> <none>
kube-flannel-ds-amd64-vtppf 1/1 Running 1 42m 192.168.92.10 k8s-master01 <none> <none>
kube-flannel-ds-amd64-zg4z5 1/1 Running 1 37m 192.168.92.12 k8s-master03 <none> <none>
kube-proxy-4vwbb 1/1 Running 1 49m 192.168.92.10 k8s-master01 <none> <none>
kube-proxy-gnk2v 1/1 Running 0 37m 192.168.92.11 k8s-master02 <none> <none>
kube-proxy-kqm87 1/1 Running 0 36m 192.168.92.13 k8s-node01 <none> <none>
kube-proxy-n5mdh 1/1 Running 2 37m 192.168.92.12 k8s-master03 <none> <none>
kube-scheduler-k8s-master01 1/1 Running 2 48m 192.168.92.10 k8s-master01 <none> <none>
kube-scheduler-k8s-master02 1/1 Running 1 37m 192.168.92.11 k8s-master02 <none> <none>
kube-scheduler-k8s-master03 1/1 Running 2 36m 192.168.92.12 k8s-master03 <none> <none>

 



Check the services:

[root@k8s-master03 ~]# kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 51m

2.13 Verifying IPVS

Check the kube-proxy log; the first line should report "Using ipvs Proxier.":

[root@k8s-master01 ~]# kubectl -n kube-system logs -f kube-proxy-4vwbb 
I0426 16:05:03.156092 1 server_others.go:177] Using ipvs Proxier.
W0426 16:05:03.156501 1 proxier.go:381] IPVS scheduler not specified, use rr by default
I0426 16:05:03.156788 1 server.go:555] Version: v1.14.1
I0426 16:05:03.166269 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0426 16:05:03.169022 1 config.go:202] Starting service config controller
I0426 16:05:03.169103 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0426 16:05:03.169182 1 config.go:102] Starting endpoints config controller
I0426 16:05:03.169200 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0426 16:05:03.269760 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0426 16:05:03.270123 1 controller_utils.go:1034] Caches are synced for service config controller
I0426 16:05:03.352400 1 graceful_termination.go:160] Trying to delete rs: 10.96.0.1:443/TCP/192.168.92.11:6443
I0426 16:05:03.352478 1 graceful_termination.go:174] Deleting rs: 10.96.0.1:443/TCP/192.168.92.11:6443
......

Check the proxy rules:

[root@k8s-master01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 192.168.92.10:6443 Masq 1 3 0 
-> 192.168.92.11:6443 Masq 1 0 0 
-> 192.168.92.12:6443 Masq 1 0 0 
TCP 10.96.0.10:53 rr
-> 10.244.0.5:53 Masq 1 0 0 
-> 10.244.0.6:53 Masq 1 0 0 
TCP 10.96.0.10:9153 rr
-> 10.244.0.5:9153 Masq 1 0 0 
-> 10.244.0.6:9153 Masq 1 0 0 
UDP 10.96.0.10:53 rr
-> 10.244.0.5:53 Masq 1 0 0 
-> 10.244.0.6:53 Masq 1 0 0 

 

2.14 etcd Cluster

Run the following command to check the health of the etcd cluster:

kubectl -n kube-system exec etcd-k8s-master01 -- etcdctl \
--endpoints=https://192.168.92.10:2379 \
--ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--cert-file=/etc/kubernetes/pki/etcd/server.crt \
--key-file=/etc/kubernetes/pki/etcd/server.key cluster-health

Example:

[root@k8s-master01 ~]# kubectl -n kube-system exec etcd-k8s-master01 -- etcdctl \
> --endpoints=https://192.168.92.10:2379 \
> --ca-file=/etc/kubernetes/pki/etcd/ca.crt \
> --cert-file=/etc/kubernetes/pki/etcd/server.crt \
> --key-file=/etc/kubernetes/pki/etcd/server.key cluster-health
member a94c223ced298a9 is healthy: got healthy result from https://192.168.92.12:2379
member 1db71d0384327b96 is healthy: got healthy result from https://192.168.92.11:2379
member e86955402ac20700 is healthy: got healthy result from https://192.168.92.10:2379
cluster is healthy
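
The command above uses the etcd v2 client flags. With the v3 API, an equivalent health check would look like the sketch below (the flag names differ; this assumes the etcd image ships /bin/sh):

kubectl -n kube-system exec etcd-k8s-master01 -- sh -c \
"ETCDCTL_API=3 etcdctl --endpoints=https://192.168.92.10:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key endpoint health"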

 

2.15 Verifying HA

Shut down master01. It is advisable to configure kubectl on the other nodes beforehand.

[root@k8s-master01 ~]# shutdown -h now

Check the cluster state from any running node: master01 is NotReady, and the cluster remains accessible:

[root@k8s-master02 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 19m v1.14.1
k8s-master02 Ready master 11m v1.14.1
k8s-master03 Ready master 10m v1.14.1
k8s-node01 Ready <none> 9m21s v1.14.1

Check the network interface: the VIP has automatically floated to the master03 node:

[root@k8s-master03 ~]# ip a |grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.92.12/24 brd 192.168.92.255 scope global noprefixroute ens33
inet 192.168.92.30/24 scope global secondary ens33

 

————————————————
Copyright notice: this is a repost of an original article by the CSDN blogger "willblog", licensed under CC 4.0 BY-SA; reposts must include a link to the original and this notice.
Original links:

https://www.cnblogs.com/sandshell/p/11570458.html#auto_id_0

https://blog.csdn.net/networken/article/details/89599004

