K8s: Building a Highly Available Kubernetes (K8s) Cluster with kubeadm - 100-Year Certificate Validity


A Kubernetes cluster with multiple master nodes meets high-availability requirements and is suitable for production environments.

Contents

0. Choosing a Deployment Topology

The control plane (master) nodes of a Kubernetes cluster consist of the datastore service (etcd) plus the other component services (apiserver, controller-manager, scheduler, ...).
All of the data the cluster exchanges at runtime is stored in the datastore service (etcd), so the high availability of a Kubernetes cluster depends on the data replication that etcd builds across multiple control plane (master) nodes.
A highly available Kubernetes cluster can therefore be built with either of the following two deployment options:

  • Stacked control plane (master) nodes, where etcd runs on the same machines as the other control plane components;
  • External etcd nodes, where etcd runs on different machines from the other control plane components.

Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

0.1. Stacked etcd Topology (chosen here)

etcd runs together with the other components on multiple control plane (master) machines, and those etcd members form a cluster, producing a highly available Kubernetes cluster.
Prerequisites:

  • At least three master nodes (an odd number);
  • At least three worker (node) nodes;
  • Full network connectivity between all machines in the cluster (public or private network);
  • Superuser privileges;
  • SSH access to every node in the cluster;
  • kubeadm and kubelet already installed on the machines.

This option uses fewer machines, lowering cost and deployment complexity; on the other hand, the services compete for host resources, which can create performance bottlenecks, and when a master host fails all of its components are affected at once.
In practice you can deploy more than three master hosts, which weakens these drawbacks.
This is the default topology in kubeadm; kubeadm automatically creates a local etcd member on each master node.
(figure: stacked etcd topology)

0.2. External etcd Topology

The etcd component of the control plane runs on external hosts, and the other components connect to that external etcd cluster, forming a highly available Kubernetes cluster.
Prerequisites:

  • At least three master hosts (an odd number);
  • At least three worker (node) hosts;
  • Additionally, at least three etcd hosts (an odd number);
  • Full network connectivity between all hosts in the cluster (public or private network);
  • Superuser privileges;
  • SSH access to every node host in the cluster;
  • kubeadm and kubelet already installed on the machines.

An etcd cluster built on external hosts gets more host resources and better scalability, and the blast radius of a failure is smaller, but the extra machines increase deployment cost.
(figure: external etcd topology)

1. Host Planning

Host OS: CentOS Linux release 7.6.1810 (Core)
Kubernetes version: Kubernetes-1.23.0
Kubernetes/Docker compatibility: Docker v20.10.7+ not compatible -> v20.10.12+ not compatible
Docker version: docker-ce-19.03.0
Hardware: every machine in the cluster needs at least 2 GB of RAM and at least 2 CPU cores.
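A quick sanity check of these minimums on each host (a minimal sketch using standard Linux tools; nothing beyond coreutils and procps is assumed):

[root@localhost ~]# nproc               # CPU core count, expect >= 2
[root@localhost ~]# free -h | grep Mem  # total memory, expect >= 2G
[root@localhost ~]# df -h /var/lib      # free disk space for images and etcd data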

Hostname       Host address                               Role                          Services
k8s-master01   192.168.124.128 (VIP: 192.168.124.100)     control plane node (master)   kube-apiserver, etcd, kube-scheduler, kube-controller-manager, docker, kubelet, keepalived, ipvs
k8s-master02   192.168.124.130 (VIP: 192.168.124.100)     control plane node (master)   kube-apiserver, etcd, kube-scheduler, kube-controller-manager, docker, kubelet, keepalived, ipvs
k8s-master03   192.168.124.131 (VIP: 192.168.124.100)     control plane node (master)   kube-apiserver, etcd, kube-scheduler, kube-controller-manager, docker, kubelet, keepalived, ipvs
k8s-node01     192.168.124.132                            worker node (node)            kubelet, kube-proxy, docker

2. Checking and Configuring the Host Environment

2.1. Verify that the MAC address and product_uuid are unique on every host

On all hosts:

[root@localhost ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:40:e3:9f brd ff:ff:ff:ff:ff:ff
[root@localhost ~]# cat /sys/class/dmi/id/product_uuid
B70F4D56-1F69-3997-AD55-83725A40E39F

2.2. Check that the ports required by Kubernetes are free

Role                     Protocol   Direction   Service: port range
Master (control plane)   TCP        Inbound     Kubernetes API server: 6443
                                                etcd server client API: 2379-2380
                                                Kubelet API: 10250
                                                kube-scheduler: 10259
                                                kube-controller-manager: 10257
Node (worker)            TCP        Inbound     Kubelet API: 10250
                                                NodePort Services: 30000-32767

On all master hosts:

[root@localhost ~]# ss -alnupt |grep -E '6443|10250|10259|10257|2379|2380'

On all node hosts:

[root@localhost ~]# ss -alnupt |grep -E '10250|3[0-2][0-9]{3}'   # 10250 plus the 30000-32767 NodePort range (approximate match)

2.3. Configure hostnames

k8s-master01:

[root@localhost ~]# echo "k8s-master01" >/etc/hostname
[root@localhost ~]# cat /etc/hostname | xargs hostname
[root@localhost ~]# bash
[root@k8s-master01 ~]# 

k8s-master02:

[root@localhost ~]# echo "k8s-master02" >/etc/hostname
[root@localhost ~]# cat /etc/hostname | xargs hostname
[root@localhost ~]# bash
[root@k8s-master02 ~]# 

k8s-master03:

[root@localhost ~]# echo "k8s-master03" >/etc/hostname
[root@localhost ~]# cat /etc/hostname | xargs hostname
[root@localhost ~]# bash
[root@k8s-master03 ~]# 

k8s-node01:

[root@localhost ~]# echo "k8s-node01" >/etc/hostname
[root@localhost ~]# cat /etc/hostname | xargs hostname
[root@localhost ~]# bash
[root@k8s-node01 ~]# 

2.4. Add hosts entries for name resolution

On all hosts:

[root@k8s-master01 ~]# cat >> /etc/hosts << EOF
192.168.124.128 k8s-master01
192.168.124.130 k8s-master02
192.168.124.131 k8s-master03
192.168.124.132 k8s-node01
EOF

2.5. Configure time synchronization

k8s-master01:
Configure k8s-master01 to synchronize its time preferentially from the public NTP server cn.ntp.org.cn.

# Install the NTP service and the ntpdate client
[root@k8s-master01 ~]# yum -y install epel-release.noarch
[root@k8s-master01 ~]# yum -y install ntp ntpdate
# Sync the local clock once from an external public NTP server
[root@k8s-master01 ~]# ntpdate cn.ntp.org.cn
# Configure the NTP service
[root@k8s-master01 ~]# vim /etc/ntp.conf
# Access control
# Allow external clients to sync time from this host, but not to modify it
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1

# Actively sync time from external servers
# Fall back to the local clock if the external servers are unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10

server cn.ntp.org.cn prefer iburst minpoll 4 maxpoll 10
server ntp.aliyun.com iburst minpoll 4 maxpoll 10
server ntp.tuna.tsinghua.edu.cn iburst minpoll 4 maxpoll 10
server time.ustc.edu.cn iburst minpoll 4 maxpoll 10
# Start the NTP service and enable it at boot
[root@k8s-master01 ~]# systemctl start ntpd
[root@k8s-master01 ~]# systemctl enable ntpd
[root@k8s-master01 ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-21 02:59:43 EDT; 4min 52s ago
  Process: 27106 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
[root@k8s-master01 ~]# ntpstat
synchronised to NTP server (120.25.108.11) at stratum 3
   time correct to within 70 ms
   polling server every 16 s

All other hosts synchronize time preferentially from k8s-master01:

# Install the NTP service and the ntpdate client
[root@k8s-master02 ~]# yum -y install epel-release.noarch
[root@k8s-master02 ~]# yum -y install ntp ntpdate
# Sync the local clock once from the NTP server just set up
[root@k8s-master02 ~]# ntpdate 192.168.124.128
# Configure the NTP service
[root@k8s-master02 ~]# vim /etc/ntp.conf
# Actively sync time from the NTP server built above
# Fall back to the local clock if the NTP server is unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10

server 192.168.124.128 prefer iburst minpoll 4 maxpoll 10
# Start the NTP service and enable it at boot
[root@k8s-master02 ~]# systemctl start ntpd
[root@k8s-master02 ~]# systemctl enable ntpd
[root@k8s-master02 ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-21 02:59:43 EDT; 4min 52s ago
  Process: 27106 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
[root@k8s-master02 ~]# ntpstat
synchronised to NTP server (192.168.124.128) at stratum 3
   time correct to within 70 ms
   polling server every 16 s

2.6. Disable swap

Swap can degrade container performance.
On all hosts:

[root@k8s-master01 ~]# swapoff -a  # disable temporarily
[root@k8s-master01 ~]# free -mh
              total        used        free      shared  buff/cache   available
Mem:           1.8G        133M        1.4G        9.5M        216M        1.5G
Swap:            0B          0B          0B
[root@k8s-master01 ~]# vim /etc/fstab  # disable permanently: comment out the swap line
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

2.7. Disable firewalld

The kube-proxy component uses iptables or IPVS to implement Service objects. CentOS 7 runs the firewalld service by default; to avoid conflicts, disable and stop it.
On all hosts:

[root@k8s-master01 ~]# systemctl stop firewalld
[root@k8s-master01 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

2.8. Disable SELinux

On all hosts:

[root@k8s-master01 ~]# setenforce 0  # disable temporarily
[root@k8s-master01 ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux  # disable permanently

2.9. Enable bridge-nf

This enables the bridge's transparent mode for iptables, so layer-2 (bridged) traffic is also subject to iptables rules.
If the module is not loaded at boot, load the "br_netfilter" module.
On all hosts:

[root@k8s-master01 ~]# modprobe br_netfilter
[root@k8s-master01 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
[root@k8s-master01 ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
[root@k8s-master01 ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master01 ~]# sysctl --system
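To confirm the settings actually took effect after running sysctl --system, a small check (both values should report 1):

[root@k8s-master01 ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables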

2.10. Install and enable IPVS

kube-proxy supports three modes for forwarding traffic to Pods: userspace, iptables, and ipvs.
To use ipvs mode, IPVS must be installed.
On all hosts:

[root@k8s-master01 ~]# yum -y install kernel-devel
[root@k8s-master01 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@k8s-master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@k8s-master01 ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@k8s-master01 ~]# lsmod |grep ip_vs
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
[root@k8s-master01 ~]# yum -y install ipset ipvsadm

3. Install the Container Runtime - Docker

The container runtime hosts and manages running containerized applications.

3.1. Install a specific Docker version

On all hosts:

[root@k8s-master01 ~]# yum -y install epel-release.noarch yum-utils
[root@k8s-master01 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master01 ~]# yum -y install device-mapper-persistent-data  lvm2
[root@k8s-master01 ~]# yum list docker-ce --showduplicates | sort -r
[root@k8s-master01 ~]# yum -y install docker-ce-19.03.0
[root@k8s-master01 ~]# systemctl start docker
[root@k8s-master01 ~]# systemctl enable docker

3.2. Configure Docker and a domestic registry mirror

Point Docker at domestic (China) registry mirrors; the officially recommended cgroup driver is "systemd".
On all hosts:

[root@k8s-master01 ~]# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
        "https://7mimmp7p.mirror.aliyuncs.com",
        "https://registry.docker-cn.com",
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn"
        ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
[root@k8s-master01 ~]# systemctl restart docker
[root@k8s-master01 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-21 06:26:38 EDT; 4s ago
[root@k8s-master01 ~]# docker info | grep Cgroup
 Cgroup Driver: systemd

4. Install kubeadm, kubelet, and kubectl

kubeadm: the tool used to bootstrap the cluster.
kubelet: the component that runs on every machine in the cluster and manages Pods and containers.
kubectl: the command-line client for operating the cluster.

4.1. Install kubeadm, kubelet, and kubectl with YUM

The Kubernetes YUM repository used here is the one provided by Alibaba Cloud.
On all hosts:

[root@k8s-master01 ~]# cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master01 ~]# yum install -y kubelet-1.23.0 kubectl-1.23.0 kubeadm-1.23.0 --disableexcludes=kubernetes --nogpgcheck
[root@k8s-master01 ~]# systemctl enable kubelet
[root@k8s-master01 ~]# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF

4.2. Build kubeadm from source to prevent certificate expiry (change validity to 100 years)

In Kubernetes, clients talk to the API server using X.509 certificates, and the components authenticate each other with certificates as well. The certificates created by the stock kubeadm are valid for only one year; once they expire the cluster can become unusable, which is serious.
So here we patch the Kubernetes source and build our own kubeadm; when this kubeadm initializes the control plane nodes, the Kubernetes certificates it generates are valid for 100 years.
Note: rename the YUM-installed kubeadm to kubeadm-yum and keep it, because it is still needed later to configure kubelet; the kubeadm built from source cannot correctly configure the YUM-installed kubelet.
On all node hosts:

[root@k8s-master01 ~]# which kubeadm
/usr/bin/kubeadm
[root@k8s-master01 ~]# mv /usr/bin/kubeadm /usr/bin/kubeadm-yum

k8s-master01 - install Go:

[root@k8s-master01 ~]# wget https://go.dev/dl/go1.17.8.linux-amd64.tar.gz
[root@k8s-master01 ~]# tar xzvf go1.17.8.linux-amd64.tar.gz -C /usr/local
[root@k8s-master01 ~]# vim /etc/profile
export PATH=$PATH:/usr/local/go/bin
export GO111MODULE=auto
export GOPROXY=https://goproxy.cn
[root@k8s-master01 ~]# source /etc/profile
[root@k8s-master01 ~]# go version
go version go1.17.8 linux/amd64

k8s-master01 - clone the official source from GitHub:

[root@k8s-master01 ~]# yum -y install git
[root@k8s-master01 ~]# git clone https://github.91chi.fun/https://github.com/kubernetes/kubernetes.git
[root@k8s-master01 ~]# cd kubernetes
[root@k8s-master01 kubernetes]# git tag -l
...
v1.23.0
...
[root@k8s-master01 kubernetes]# git checkout -b v1.23.0 v1.23.0

k8s-master01 - change the certificate validity in the source:

[root@k8s-master01 kubernetes]# vim cmd/kubeadm/app/constants/constants.go
const (
...
        // CertificateValidity defines the validity for all the signed certificates generated by kubeadm
        // CertificateValidity = time.Hour * 24 * 365
        CertificateValidity = time.Hour * 24 * 365 * 100
...
}
[root@k8s-master01 kubernetes]# vim staging/src/k8s.io/client-go/util/cert/cert.go
...
// NewSelfSignedCACert creates a CA certificate
func NewSelfSignedCACert(cfg Config, key crypto.Signer) (*x509.Certificate, error) {
        now := time.Now()
        tmpl := x509.Certificate{
                SerialNumber: new(big.Int).SetInt64(0),
                Subject: pkix.Name{
                        CommonName:   cfg.CommonName,
                        Organization: cfg.Organization,
                },
                DNSNames:              []string{cfg.CommonName},
                NotBefore:             now.UTC(),
                //NotAfter:              now.Add(duration365d * 10).UTC(),
                NotAfter:              now.Add(duration365d * 100).UTC(),
                KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
                BasicConstraintsValid: true,
                IsCA:                  true,
        }
}
...

k8s-master01 - build the new kubeadm binary; the output goes to the _output/bin/ directory:

[root@k8s-master01 kubernetes]# make WHAT=cmd/kubeadm GOFLAGS=-v

k8s-master01 - copy kubeadm into /usr/bin on all node hosts:

[root@k8s-master01 kubernetes]# cd _output/bin/ && cp -rf kubeadm /usr/bin/kubeadm
[root@k8s-master01 bin]# scp kubeadm root@k8s-master02:/usr/bin/kubeadm
[root@k8s-master01 bin]# scp kubeadm root@k8s-master03:/usr/bin/kubeadm
[root@k8s-master01 bin]# scp kubeadm root@k8s-node01:/usr/bin/kubeadm
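After copying, a quick check on each host confirms what is now at /usr/bin/kubeadm (kubeadm version is a standard subcommand; a locally built binary typically carries local build metadata in its version string, which makes it easy to tell apart from the packaged release):

[root@k8s-master01 bin]# kubeadm version -o short
[root@k8s-master02 ~]# kubeadm version -o short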

5. Create the Load Balancer - HAProxy + Keepalived

Reference: https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#options-for-software-load-balancing
With multiple control planes there are also multiple kube-apiservers; tools such as Nginx + Keepalived or HAProxy + Keepalived can provide load balancing and high availability for them.
HAProxy + Keepalived is the recommended combination here, because HAProxy provides high-performance layer-4 load balancing and it is what most people choose.
(figure: HAProxy + Keepalived in front of the kube-apiservers)

5.1. Install HAProxy and Keepalived

HAProxy load-balances the backend API servers and health-checks them, so requests are never forwarded to an unavailable API server.
On all master hosts:

[root@k8s-master01 ~]# yum -y install haproxy keepalived

5.2. Configure and start HAProxy

Because the backend api-servers have not been deployed yet, the started HAProxy service can only accept and handle requests normally once the Kubernetes initialization is complete.
On all master hosts:

[root@k8s-master01 ~]# vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2 emerg info

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# apiserver frontend which proxys to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:9443
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    mode tcp
    balance     roundrobin
    server k8s-master01 192.168.124.128:6443 check
    server k8s-master02 192.168.124.130:6443 check
    server k8s-master03 192.168.124.131:6443 check
[root@k8s-master01 ~]# haproxy -f /etc/haproxy/haproxy.cfg -c
Configuration file is valid
[root@k8s-master01 ~]# systemctl start haproxy
[root@k8s-master01 ~]# systemctl enable haproxy
[root@k8s-master01 ~]# netstat -lnupt |grep 9443
tcp        0      0 0.0.0.0:9443            0.0.0.0:*               LISTEN      44965/haproxy  
[root@k8s-master01 ~]# curl localhost:9443
curl: (52) Empty reply from server

5.3. Configure and restart rsyslog

HAProxy logs through rsyslog; the logs help us observe and analyze problems later.
On all master hosts:

[root@k8s-master01 ~]# vim /etc/rsyslog.conf
local2.*                       /var/log/haproxy.log
[root@k8s-master01 ~]# systemctl restart rsyslog
[root@k8s-master01 ~]# systemctl status rsyslog
● rsyslog.service - System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2022-04-12 16:39:50 CST; 2h 11min ago

5.4. Configure Keepalived

Configure Keepalived to make HAProxy highly available: when primary load balancer A becomes unavailable, backup load balancers B and C continue to provide service.
A script-based health check (vrrp_script) is configured; when the check fails, the weight drops by 2, i.e. the priority drops by 2, which triggers a master/backup failover.
k8s-master01 (MASTER):

[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.124.128
}
vrrp_script check_haproxy {
  script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
  interval 3
  weight -2
  fall 3
  rise 3
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 50
    priority 100
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.124.100
    }
    track_script {
        check_haproxy
    }
}

k8s-master02(BACKUP):

[root@k8s-master02 ~]# vim /etc/keepalived/keepalived.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.124.130
}
vrrp_script check_haproxy {
  script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
  interval 3
  weight -2
  fall 3
  rise 3
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 99
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.124.100
    }
    track_script {
        check_haproxy
    }
}

k8s-master03(BACKUP):

[root@k8s-master03 ~]# vim /etc/keepalived/keepalived.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.124.131
}
vrrp_script check_haproxy {
  script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
  interval 3
  weight -2
  fall 3
  rise 3
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 98
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.124.100
    }
    track_script {
        check_haproxy
    }
}
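Before relying on automatic failover, the vrrp_script command can be run by hand on each master to confirm it behaves as expected: exit code 0 while HAProxy is listening on 9443, and 1 after it is stopped (a hedged manual test; it is exactly the command Keepalived runs every 3 seconds):

[root@k8s-master01 ~]# bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'; echo "exit code: $?"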

5.5. Start Keepalived and enable it at boot

On all master hosts:

[root@k8s-master01 ~]# systemctl start keepalived
[root@k8s-master01 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-21 11:48:06 EDT; 4min 0s ago
 Main PID: 48653 (keepalived)
[root@k8s-master01 ~]# systemctl enable keepalived

5.6. Check that the VIP is on the MASTER host

[root@k8s-master01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:40:e3:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.128/24 brd 192.168.124.255 scope global dynamic ens33
       valid_lft 1057sec preferred_lft 1057sec
    inet 192.168.124.100/32 scope global ens33
       valid_lft forever preferred_lft forever

5.7. Test: automatic failover to a backup when the master fails

Failover on master failure:
Stop the HAProxy service on the MASTER host. The check script then lowers the priority by 2, a master/backup switch happens, and the VIP floats to the BACKUP host with the next-highest priority, which takes over as the new MASTER.
Below you can see that the VIP has floated to k8s-master02.

[root@k8s-master01 ~]# systemctl stop haproxy
[root@k8s-master01 ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:40:e3:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.128/24 brd 192.168.124.255 scope global dynamic ens33
       valid_lft 1451sec preferred_lft 1451sec
[root@k8s-master02 ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c4:65:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.130/24 brd 192.168.124.255 scope global dynamic ens33
       valid_lft 1320sec preferred_lft 1320sec
    inet 192.168.124.100/32 scope global ens33
       valid_lft forever preferred_lft forever

Recovery after the failure:
When the HAProxy service on the original MASTER host recovers, the check script raises the priority by 2 and another switch happens; the VIP floats back to the recovered, higher-priority host, which resumes serving as MASTER.

[root@k8s-master01 ~]# systemctl start haproxy
[root@k8s-master01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:40:e3:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.128/24 brd 192.168.124.255 scope global dynamic ens33
       valid_lft 1175sec preferred_lft 1175sec
    inet 192.168.124.100/32 scope global ens33
       valid_lft forever preferred_lft forever

6. Deploy and Build the Kubernetes Cluster

6.1. Prepare the images

The following commands show the list of images kubeadm-v1.23.0 needs to deploy kubernetes-v1.23.0 and the default image source it uses.
On all hosts:

[root@k8s-master01 ~]# kubeadm config print init-defaults |grep imageRepository
imageRepository: k8s.gcr.io
[root@k8s-master01 ~]# kubeadm config images list --kubernetes-version 1.23.0
k8s.gcr.io/kube-apiserver:v1.23.0
k8s.gcr.io/kube-controller-manager:v1.23.0
k8s.gcr.io/kube-scheduler:v1.23.0
k8s.gcr.io/kube-proxy:v1.23.0
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

Access to k8s.gcr.io may require getting around the firewall, so the images can be downloaded from a domestic registry instead (for example Alibaba Cloud's proxy registry: registry.aliyuncs.com/google_containers).
If you need them on many more hosts, consider running a private registry with Harbor or Docker Registry.
On all hosts - pull the images from the registry:

[root@k8s-master01 ~]# kubeadm config images pull --kubernetes-version=v1.23.0 --image-repository=registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

On all hosts - list the local images:

[root@k8s-master01 ~]# docker images |grep 'registry.aliyuncs.com/google_containers'
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.0   e6bf5ddd4098 4 months ago  
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.0   37c6aeb3663b 4 months ago  
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.0   e03484a90585 4 months ago  
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.0   56c5af1d00b5 4 months ago  
registry.aliyuncs.com/google_containers/etcd                      3.5.1-0   25f8c7f3da61 5 months ago  
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7 6 months ago  
registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12 7 months ag
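If you do set up a private registry as suggested above, the pulled images can be re-tagged and pushed to it so other hosts pull from your own registry. A minimal sketch, where registry.example.local:5000 is a hypothetical registry address (replace it with your Harbor/Registry endpoint; a registry without TLS may also need to be listed under insecure-registries in daemon.json):

[root@k8s-master01 ~]# REG=registry.example.local:5000   # hypothetical private registry
[root@k8s-master01 ~]# for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep 'registry.aliyuncs.com/google_containers'); do \
      new=$REG/${img#registry.aliyuncs.com/}; \
      docker tag $img $new && docker push $new; \
  done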

6.2. Prepare the kubeadm-init.yaml manifest

kubeadm configuration reference: https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/
k8s-master01:

[root@k8s-master01 ~]# kubeadm config print init-defaults > kubeadm-init.yaml
[root@k8s-master01 ~]# vim kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: "0" # 設置引導令牌的永不過期
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.124.128 # local IP address for the API server to listen on
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master01 # node name
  taints: null
---
controlPlaneEndpoint: "192.168.124.100:9443" # control plane endpoint: "load balancer VIP or DNS:load balancer port"
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # image registry to use when deploying the cluster
kind: ClusterConfiguration
kubernetesVersion: 1.23.0 # Kubernetes version to deploy
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12  # Service subnet (CIDR)
  podSubnet: 10.244.0.0/16  # Pod subnet (CIDR)
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs # set the kube-proxy mode to ipvs
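Before running the real initialization in the next step, the manifest can be exercised with a dry run, which renders the manifests and surfaces most configuration mistakes without changing anything on the node (a hedged pre-check; --dry-run is a standard kubeadm init flag):

[root@k8s-master01 ~]# kubeadm init --config kubeadm-init.yaml --dry-run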

6.3. Create the first (initial) control plane node with kubeadm and kubeadm-init.yaml

While initializing the control plane, kubeadm generates the configuration files for the various Kubernetes components under /etc/kubernetes, which we can consult later.
Note: the kubeadm built from source cannot correctly configure the YUM-installed kubelet service during node initialization, so the YUM-installed kubeadm-yum must configure the kubelet service first!
Save the messages and follow-up commands printed when initialization completes!
k8s-master01:
Configure kubelet first using the YUM-installed kubeadm:

[root@k8s-master01 ~]# kubeadm-yum init phase kubelet-start --config kubeadm-init.yaml

Initialize the control plane node using the kubeadm built from source:

[root@k8s-master01 ~]# kubeadm init --config kubeadm-init.yaml --upload-certs
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.124.100:9443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:64c918139d7d344b64b0720244077b60ea10f5572717f92113c08fe9c56be3c9 \
	--control-plane --certificate-key 5d87ca735c040ba6b04de388f2857530bbd9de094cbd43810904afe9a6aec50d

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.124.100:9443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:64c918139d7d344b64b0720244077b60ea10f5572717f92113c08fe9c56be3c9 

6.4. Copy the correct kubelet service configuration to the other hosts

k8s-master01:

[root@k8s-master01 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
[root@k8s-master01 ~]# scp -r /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf root@k8s-master02:/usr/lib/systemd/system/kubelet.service.d
[root@k8s-master01 ~]# scp -r /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf root@k8s-master03:/usr/lib/systemd/system/kubelet.service.d
[root@k8s-master01 ~]# scp -r /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf root@k8s-node01:/usr/lib/systemd/system/kubelet.service.d

Reload the service configuration on the other hosts:

[root@k8s-master02 ~]# systemctl daemon-reload

6.5. Join the remaining nodes to the cluster

Join the other control plane nodes to the cluster:

[root@k8s-master02 ~]# kubeadm join 192.168.124.100:9443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:64c918139d7d344b64b0720244077b60ea10f5572717f92113c08fe9c56be3c9 --control-plane --certificate-key 5d87ca735c040ba6b04de388f2857530bbd9de094cbd43810904afe9a6aec50d

Join the worker node to the cluster:

[root@k8s-node01 ~]# kubeadm join 192.168.124.100:9443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:64c918139d7d344b64b0720244077b60ea10f5572717f92113c08fe9c56be3c9 

6.6. Inspect etcd

You can see that etcd is running as a cluster.
On any master node:

[root@k8s-master03 ~]# ps aux |grep etcd
root       1971  5.4  4.5 11283128 84128 ?      Ssl  16:33   1:00 etcd --advertise-client-urls=https://192.168.124.131:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.124.131:2380 --initial-cluster=k8s-master03=https://192.168.124.131:2380,k8s-master01=https://192.168.124.128:2380,k8s-master02=https://192.168.124.130:2380 --initial-cluster-state=existing --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.124.131:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.124.131:2380 --name=k8s-master03 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
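The membership can also be listed with etcdctl. A hedged example that runs etcdctl inside the etcd static Pod (kubeadm names these Pods etcd-<node name>, and the certificate paths below are the ones the static Pod mounts from the host; it assumes kubectl is already pointed at the cluster, e.g. with export KUBECONFIG=/etc/kubernetes/admin.conf as in section 6.8):

[root@k8s-master01 ~]# kubectl -n kube-system exec etcd-k8s-master01 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      member list -w table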

6.7. Check the validity of the Kubernetes certificates

You can see that the certificates of the cluster deployed with the patched kubeadm are valid for 100 years.
Kubernetes certificates are normally stored under "/etc/kubernetes/pki".
On any master node:

[root@k8s-master01 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 18, 2122 04:38 UTC   99y                                     no      
apiserver                  Mar 18, 2122 04:38 UTC   99y             ca                      no      
apiserver-etcd-client      Mar 18, 2122 04:38 UTC   99y             etcd-ca                 no      
apiserver-kubelet-client   Mar 18, 2122 04:38 UTC   99y             ca                      no      
controller-manager.conf    Mar 18, 2122 04:38 UTC   99y                                     no      
etcd-healthcheck-client    Mar 18, 2122 04:38 UTC   99y             etcd-ca                 no      
etcd-peer                  Mar 18, 2122 04:38 UTC   99y             etcd-ca                 no      
etcd-server                Mar 18, 2122 04:38 UTC   99y             etcd-ca                 no      
front-proxy-client         Mar 18, 2122 04:38 UTC   99y             front-proxy-ca          no      
scheduler.conf             Mar 18, 2122 04:38 UTC   99y                                     no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 18, 2122 04:38 UTC   99y             no      
etcd-ca                 Mar 18, 2122 04:38 UTC   99y             no      
front-proxy-ca          Mar 18, 2122 04:38 UTC   99y             no  

6.8. Configure the kubectl client to connect to the cluster

When a node is deployed, the kubeconfig file that kubectl uses to log in is generated at "/etc/kubernetes/admin.conf".
On all master hosts:

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

6.9. List the nodes in the cluster

All nodes are in the "NotReady" state; a Pod network add-on must be installed in the cluster before it can start working normally.

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE    VERSION
k8s-master01   NotReady   control-plane,master   145m   v1.23.0
k8s-master02   NotReady   control-plane,master   144m   v1.23.0
k8s-master03   NotReady   control-plane,master   143m   v1.23.0
k8s-node01     NotReady   <none>                 76m    v1.23.0
[root@k8s-master01 ~]# kubectl describe nodes k8s-master01
Name:               k8s-master01
...
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                      
KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

7. Install Core Add-on - Pod Network Plugin - Calico

Calico is an open-source virtual networking solution that supports basic Pod networking as well as network policies.
Kubernetes has a resource type "NetworkPolicy" for describing Pod network policies; to use it, the Pod network plugin must support the network policy feature.
On any one master host:

7.1. Configure NetworkManager

If the host system uses NetworkManager to manage the network, NetworkManager must be configured to leave the Calico interfaces alone.
NetworkManager manipulates the routing table for interfaces in the default network namespace, which can interfere with Calico's ability to route correctly.
On all hosts:

[root@k8s-master01 ~]# cat > /etc/NetworkManager/conf.d/calico.conf <<EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:wireguard.cali
EOF

7.2. Download calico.yaml

[root@k8s-master01 ~]# wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate

7.3. Modify calico.yaml

The default Calico manifest pulls its images from the foreign docker.io registry. Since we configured Docker registry mirrors above, remove the docker.io prefix so the images are downloaded through the domestic mirror site.

[root@k8s-master01 ~]# cat calico.yaml |grep 'image:'
          image: docker.io/calico/cni:v3.23.0
          image: docker.io/calico/cni:v3.23.0
          image: docker.io/calico/node:v3.23.0
          image: docker.io/calico/kube-controllers:v3.23.0
[root@k8s-master01 ~]# sed -i 's#docker.io/##g' calico.yaml
[root@k8s-master01 ~]# cat calico.yaml |grep 'image:'
          image: calico/cni:v3.23.0
          image: calico/cni:v3.23.0
          image: calico/node:v3.23.0
          image: calico/kube-controllers:v3.23.0

7.4. Apply calico.yaml

[root@k8s-master01 ~]# kubectl apply -f calico.yaml

The Calico Pods are created and running in the "kube-system" namespace:

[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-77d9858799-c267f   1/1     Running   0              92s
calico-node-6jw5q                          1/1     Running   0              92s
calico-node-krrn6                          1/1     Running   0              92s
calico-node-mgk2g                          1/1     Running   0              92s
calico-node-wr2pv                          1/1     Running   0              92s
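Since Calico enforces NetworkPolicy, a quick way to see the feature in action is a default-deny ingress policy. A minimal sketch (the "demo" namespace is hypothetical; adapt as needed):

[root@k8s-master01 ~]# kubectl create namespace demo
[root@k8s-master01 ~]# cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}   # selects every Pod in the namespace
  policyTypes:
  - Ingress         # no ingress rules are listed, so all inbound traffic is denied
EOF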

8. Install Core Add-on - Ingress Controller - Ingress-NGINX

Ingress is one of the standard Kubernetes resource types; it describes a layer-7 implementation for a Service, providing HTTP reverse proxying, which web projects use all the time.
The "Ingress" functionality is provided by an Ingress controller (plugin); ingress-nginx is a commonly used Ingress controller.
References:
https://github.com/kubernetes/ingress-nginx
https://kubernetes.github.io/ingress-nginx/deploy/

8.1. Check compatible versions

Ingress-NGINX version	k8s supported version	        Alpine Version	Nginx Version
v1.1.3	                1.23, 1.22, 1.21, 1.20, 1.19	3.14.4	        1.19.10†
v1.1.2	                1.23, 1.22, 1.21, 1.20, 1.19	3.14.2	        1.19.9†
v1.1.1	                1.23, 1.22, 1.21, 1.20, 1.19	3.14.2	        1.19.9†

8.2. Find domestic mirror images

Note: the image sources need to be changed to domestic clone mirrors here, otherwise the images may fail to download.
You can search Docker Hub for mirrored copies of the corresponding image versions.

8.3. Install the Ingress-NGINX Controller

[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml -O ingress-nginx.yaml
[root@k8s-master01 ~]# vim ingress-nginx.yaml
#image: k8s.gcr.io/ingress-nginx/controllerv1.1.2@...
image: willdockerhub/ingress-nginx-controller:v1.1.2
#image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@...
image: liangjw/kube-webhook-certgen:v1.1.1
[root@k8s-master01 ~]# kubectl apply -f ingress-nginx.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

8.4. Check the running status

[root@k8s-master01 ~]# kubectl get pods --namespace=ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-6xk5t        0/1     Completed   0          11m
ingress-nginx-admission-patch-sp6w2         0/1     Completed   0          11m
ingress-nginx-controller-7bc7476f95-gdxkz   1/1     Running     0          11m

8.5. Attach an external load balancer to the Ingress controller

External hosts reach the Ingress controller Pods through a Service. By default, installing ingress-nginx-controller from the .yaml creates a LoadBalancer-type Service, which an external load balancer can attach to in order to forward requests to the Ingress controller.
A LoadBalancer Service is built on top of NodePort, so it likewise opens a mapped port on every node host, which an external load balancer can target.

[root@k8s-master01 ~]# kubectl get service --namespace=ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.103.77.111   <pending>     80:30408/TCP,443:32686/TCP   20m
ingress-nginx-controller-admission   ClusterIP      10.98.133.60    <none>        443/TCP                      20m
[root@k8s-master01 ~]# netstat -lnupt  |grep -E '30408|32686'
tcp        1      0 0.0.0.0:30408           0.0.0.0:*               LISTEN      41631/kube-proxy    
tcp        0      0 0.0.0.0:32686           0.0.0.0:*               LISTEN      41631/kube-proxy
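An external load balancer can then be pointed at those NodePorts. A hedged sketch of what the extra HAProxy configuration on the load-balancer hosts might look like, using the NodePort values from this particular run (30408 for HTTP; yours will differ, and an analogous frontend/backend pair would handle 443 via 32686):

frontend ingress-http
    bind *:80
    mode tcp
    option tcplog
    default_backend ingress-http

backend ingress-http
    mode tcp
    balance roundrobin
    server k8s-master01 192.168.124.128:30408 check
    server k8s-master02 192.168.124.130:30408 check
    server k8s-master03 192.168.124.131:30408 check
    server k8s-node01   192.168.124.132:30408 check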

9. Install Common Add-on - Metrics Server

Metrics Server is a commonly used Kubernetes add-on. Much like the top command, it shows CPU and memory usage for the Nodes and Pods in a cluster.
Metrics Server collects metrics every 15 seconds, runs inside the cluster, and scales to support clusters of up to 5000 nodes.
Since version 0.5, Metrics Server defaults to resource requests of 100m CPU and 200MiB memory, which keeps performance good on clusters of 100+ nodes.
Reference: https://github.com/kubernetes-sigs/metrics-server

9.1. Check compatibility with Kubernetes

Metrics Server	Metrics API group/version	Supported Kubernetes version
0.6.x	       metrics.k8s.io/v1beta1	        1.19+
0.5.x	       metrics.k8s.io/v1beta1	        *1.8+
0.4.x	       metrics.k8s.io/v1beta1	        *1.8+
0.3.x	       metrics.k8s.io/v1beta1	        1.8-1.21

9.2. Find a domestic clone image

The official installation manifest components.yaml uses the k8s.gcr.io registry by default; without a way around the firewall, the Pod may fail to pull the Metrics Server image.

9.3. Install Metrics Server

By default Metrics Server validates the CA certificate presented by kubelet when it starts, which can make it fail to start, so add the "--kubelet-insecure-tls" flag to disable that certificate check.

[root@k8s-master01 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server.yaml
[root@k8s-master01 ~]# vim metrics-server.yaml
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: bitnami/metrics-server:0.6.1
[root@k8s-master01 ~]# kubectl apply -f metrics-server.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@k8s-master01 ~]# kubectl get pods --namespace=kube-system |grep -E 'NAME|metrics-server'
NAME                                       READY   STATUS    RESTARTS       AGE
metrics-server-599b4c96ff-njg8b            1/1     Running   0              76s

9.4. View resource usage of the nodes in the cluster

[root@k8s-master01 ~]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   331m         8%     1177Mi          68%       
k8s-master02   419m         10%    1216Mi          70%       
k8s-master03   344m         8%     1155Mi          67%       
k8s-node01     246m         6%     997Mi           57%    

9.5. View resource usage of the Pods in a given namespace

[root@k8s-master01 ~]# kubectl top pod --namespace=kube-system
NAME                                       CPU(cores)   MEMORY(bytes)   
calico-kube-controllers-56fcbf9d6b-phf49   5m           29Mi            
calico-node-8frvw                          98m          120Mi           
calico-node-mzpmv                          71m          121Mi           
...   

10. Install Common Add-on - Dashboard

Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It lets users manage and troubleshoot the applications running in the cluster, as well as the cluster itself.
Dashboard is a Kubernetes add-on; the API server exposes a URL as its access entry point: /api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy
You can also reach the Dashboard directly through its Service.
References:
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/README.md#login-not-available

10.1. Install Dashboard

Installing Dashboard from the manifest creates a ClusterIP-type Service, which is only reachable from hosts inside the cluster, so make a small change here and switch the Service to the NodePort type so that external hosts can access it too.

[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml -O dashboard.yaml
[root@k8s-master01 ~]# vim dashboard.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
[root@k8s-master01 ~]# kubectl apply -f dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master01 ~]# kubectl get pod --namespace=kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-799d786dbf-xx9j7   1/1     Running   0          3m16s
kubernetes-dashboard-fb8648fd9-rgc2z         1/1     Running   0          3m17s

10.2. Access the Dashboard

[root@k8s-master01 ~]# kubectl get service --namespace=kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.97.23.158    <none>        8000/TCP        4m6s
kubernetes-dashboard        NodePort    10.103.40.153   <none>        443:32358/TCP   4m7s
[root@k8s-master01 ~]# netstat -lnupt |grep 32358
tcp        0      0 0.0.0.0:32358           0.0.0.0:*               LISTEN      41631/kube-proxy  

In a browser, open: https://<any node IP>:<NodePort>/#/login (32358 in this example)

10.3. Choose an authentication method for logging in to the Dashboard

Logging in to the Dashboard requires authentication.
The Dashboard service runs in a Pod; for that Pod to access and retrieve cluster information it needs a ServiceAccount to authenticate itself.
To manage the Kubernetes cluster, the Dashboard currently supports two authentication methods: Token and Kubeconfig.
Token
Create a service account "dashboard-admin" bound to the cluster role "cluster-admin", then log in with dashboard-admin's Token. You can also create a cluster role with just the permissions you need and bind it to the service account, so it can manage only specific resources in the cluster.

# Create a service account "dashboard-admin" dedicated to the Dashboard
[root@k8s-master01 ~]# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
serviceaccount/dashboard-admin created
# Bind the service account "dashboard-admin" to the cluster role "cluster-admin", which has superuser permissions,
# so that dashboard-admin gets superuser permissions
[root@k8s-master01 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
# A Token is generated automatically for the service account; it is a Secret object
# Retrieve the Token of the "dashboard-admin" service account for Dashboard authentication as follows
[root@k8s-master01 ~]# kubectl get secrets -n kubernetes-dashboard |grep dashboard-admin-token
dashboard-admin-token-2bxfl        kubernetes.io/service-account-token   3      66s
[root@k8s-master01 ~]# kubectl describe secrets/dashboard-admin-token-2bxfl -n kubernetes-dashboard
Name:         dashboard-admin-token-2bxfl
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 492a031e-db41-4a65-a8d4-af0e240e7f9d

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1103 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImFXTzZFUElaS2RoTUpScHFwNzJSNUN5eU1lcFNSZEZqNWNNbi1VbFV2Zk0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tMmJ4ZmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDkyYTAzMWUtZGI0MS00YTY1LWE4ZDQtYWYwZTI0MGU3ZjlkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.l5VEIPd9nIsJuXMh86rjFHhkIoZmg5nlDw7Bixn0b3-KT1r6o7WRegq8DJyVk_iiIfRnrrz5jjuOOkCKwXwvI1NCfVdsuBKXFwFZ1Crc-BwHjIxWbGuZfEGxSbN8du4T4xcUuNU-7HuZQcGDY23uy68aPqWSm8UoIcOFwUgVcYkKlOuW76tIXxG_upxWpWZz74aMDUIkjar7sdWXzMr1m5G43TLE9Z_lKCgoV-hc4Fo9_Er-TIAPqDG6-sfZZZ9Raldvn3j380QDYahUKaGKabnOFDXbODKOQ1VKRizgiRTOqt-z9YRPTcyxQzfheKC8DTb2X8D-E4x6azulenNgqw

Kubeconfig
The Token is a long, complex string, so authenticating with it is inconvenient; Dashboard therefore also supports logging in with a Kubeconfig file.
Based on the service account created for the Token above, create a Kubeconfig file.

# Show cluster information
[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.124.100:9443
# Create the kubeconfig file and set the cluster entry
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes --embed-certs=true --server="https://192.168.124.100:9443" --certificate-authority=/etc/kubernetes/pki/ca.crt --kubeconfig=dashboard-admin.kubeconfig
# Set the credentials in the kubeconfig file
# The service account Token stored in the Secret is base64-encoded; decode it with "base64 -d"
# before writing it into the kubeconfig
[root@k8s-master01 ~]# Token=$(kubectl get secrets/dashboard-admin-token-2bxfl -n kubernetes-dashboard -o jsonpath={.data.token} |base64 -d)
[root@k8s-master01 ~]# kubectl config set-credentials dashboard-admin --token=${Token} --kubeconfig=./dashboard-admin.kubeconfig 
# Set the context in the kubeconfig file
[root@k8s-master01 ~]# kubectl config set-context dashboard-admin --cluster=kubernetes  --user=dashboard-admin --kubeconfig=./dashboard-admin.kubeconfig 
# Set the current context to use in the kubeconfig file
[root@k8s-master01 ~]# kubectl config use-context dashboard-admin --cluster=kubernetes  --user=dashboard-admin --kubeconfig=./dashboard-admin.kubeconfig
# This results in the following file
[root@k8s-master01 ~]# cat dashboard-admin.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBRENDQWVpZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJeU1EUXhNVEEwTXpnME1Gb1lEekl4TWpJd016RTRNRFF6T0RRd1dqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjR0RDRmU2ZmcHU1WS9KUGJrQWgvdG0xS1lSeWQ5YU9MVk9xTDQyc1M5YmxiZGh0WU9QSHYvWEpVb1k1ZSs5MXgKUE9NbnZnWmhiR29uditHQWVFazRKeUl4MTNiTm1XUk1DZ1QyYnJIWlhvcm5QeGE0ZlVsNHg5K2swVEc5ejdIMAo0cjF5MFkzWXNXaGJIeHBwL0hvQzNRR2JVWVJyMm03NVgxTWUvdFFCL25FcUNybUZxNkRveEU3REIxMkRnemE4CjBrM3FwZllGZHBOcnZBakdIcUlSZ0ZxT24ybDVkb0c3bGVhbkIrY2wxQWltUnZCMDdQdlVKdVhrK1N5NUhmdnMKNzYyYXJRYklNMUlISkJ0ZXBaQzVjYi9pNGZhcWNrTXJaeTZvanlnN2JPcjBuMlpQcHV5SnR5QjhLMnJDZCtYZApTeXlrZG44S0MxRlRSR0p6dkdpaVRRSURBUUFCbzFrd1Z6QU9CZ05WSFE4QkFmOEVCQU1DQXFRd0R3WURWUjBUCkFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVucEhxdGJzZ01CcSt4Q1MzTVErWnk4akFpeFV3RlFZRFZSMFIKQkE0d0RJSUthM1ZpWlhKdVpYUmxjekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBRHhpR3c2bk5NV1hRMnVlRgppK2R2Nittc1FUT0JCWWJENkhIblVETUtnK0loaEcwclA5MkFRRjFWcUZaN1ZDSTgyWmJ5VnVWSmVvMjlkdjZpClBDeFJzWERxdHl0TG1CMkFPRUxXOFdqSCtheTZ5a3JYbGIwK1NIZ1Q3Q1NJRHhRdG9TeE8rK094WjhBb1JGMmUKTy94U1YxM0E0eG45RytmUEJETkVnWUJHbWd6L1RjSjZhYnljZnNNaGNwZ1kwKzJKZlJDemZBeFNiMld6TzBqaApucFRONUg2dG1ST3RlQ2h3anRWVDYrUXBUSzdkN0hjNmZlZ0w0S1pQZDEwZ0hyRFV1eWtpY01UNkpWNXNJSjArCmw5eWt2V1R2M2hEN0NJSmpJWnUySjdod0FGeW1hSmxzekZuZEpNZUFEL21pcDBMQk40OUdER2M2UFROdUw0WHEKeUxrYUhRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.124.100:9443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin
current-context: dashboard-admin
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImFXTzZFUElaS2RoTUpScHFwNzJSNUN5eU1lcFNSZEZqNWNNbi1VbFV2Zk0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tMmJ4ZmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDkyYTAzMWUtZGI0MS00YTY1LWE4ZDQtYWYwZTI0MGU3ZjlkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.l5VEIPd9nIsJuXMh86rjFHhkIoZmg5nlDw7Bixn0b3-KT1r6o7WRegq8DJyVk_iiIfRnrrz5jjuOOkCKwXwvI1NCfVdsuBKXFwFZ1Crc-BwHjIxWbGuZfEGxSbN8du4T4xcUuNU-7HuZQcGDY23uy68aPqWSm8UoIcOFwUgVcYkKlOuW76tIXxG_upxWpWZz74aMDUIkjar7sdWXzMr1m5G43TLE9Z_lKCgoV-hc4Fo9_Er-TIAPqDG6-sfZZZ9Raldvn3j380QDYahUKaGKabnOFDXbODKOQ1VKRizgiRTOqt-z9YRPTcyxQzfheKC8DTb2X8D-E4x6azulenNgqw

10.4. Select the Kubeconfig file to log in to the Dashboard


Appendix

Check Kubernetes/Docker compatibility

See: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md

Reset a node

If "kubeadm init" or "kubeadm join" fails while deploying a node, the node can be reset with the following commands.
Note: a reset returns the node to its pre-deployment state. If the cluster is already working normally there is no need to reset; doing so can cause an unrecoverable cluster failure!

[root@k8s-master01 ~]# kubeadm reset -f
[root@k8s-master01 ~]# ipvsadm --clear
[root@k8s-master01 ~]# iptables -F && iptables -X && iptables -Z

Common inspection commands

For more operations, study Kubernetes resource and cluster management in full.
List the tokens:

[root@k8s-master01 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   <forever>   <never>   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
ek6xtl.s3dk4vjxzp83bcx3   1h          2022-04-06T13:30:39Z   <none>                   Proxy for managing TTL for the kubeadm-certs secret        <none>

Check the certificate expiry times in the cluster:

[root@k8s-master01 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 18, 2122 04:02 UTC   99y                                     no      
apiserver                  Mar 18, 2122 04:02 UTC   99y             ca                      no      
apiserver-etcd-client      Mar 18, 2122 04:02 UTC   99y             etcd-ca                 no      
apiserver-kubelet-client   Mar 18, 2122 04:02 UTC   99y             ca                      no      
controller-manager.conf    Mar 18, 2122 04:02 UTC   99y                                     no      
etcd-healthcheck-client    Mar 18, 2122 04:02 UTC   99y             etcd-ca                 no      
etcd-peer                  Mar 18, 2122 04:02 UTC   99y             etcd-ca                 no      
etcd-server                Mar 18, 2122 04:02 UTC   99y             etcd-ca                 no      
front-proxy-client         Mar 18, 2122 04:02 UTC   99y             front-proxy-ca          no      
scheduler.conf             Mar 18, 2122 04:02 UTC   99y                                     no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 18, 2122 04:02 UTC   99y             no      
etcd-ca                 Mar 18, 2122 04:02 UTC   99y             no      
front-proxy-ca          Mar 18, 2122 04:02 UTC   99y             no    

Check the node status:

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   40h   v1.23.0
k8s-master02   Ready    control-plane,master   40h   v1.23.0
k8s-master03   Ready    control-plane,master   40h   v1.23.0
k8s-node01     Ready    <none>                 39h   v1.23.0

Show the default configuration kubeadm uses to initialize a control plane:

[root@k8s-master01 ~]# kubeadm config print init-defaults

List the container images kubeadm uses to deploy a Kubernetes cluster:

[root@k8s-master01 ~]# kubeadm config images list

Can new Pods be scheduled onto master nodes?

Yes. By default a master node is created with a taint ("taints"); to run Pods on a master node, just remove the taint (not a recommended practice).

[root@k8s-master01 ~]# kubectl describe nodes/k8s-master01
Name:               k8s-master01
...
Taints:             node-role.kubernetes.io/master:NoSchedule
...
[root@k8s-master01 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
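If you later want to restore the default behaviour, the taint can be re-added to a node (a hedged example for a single node; the key and effect match what kubectl describe showed above):

[root@k8s-master01 ~]# kubectl taint nodes k8s-master01 node-role.kubernetes.io/master=:NoSchedule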

How many nodes can the cluster support?

Reference: https://kubernetes.io/docs/setup/best-practices/cluster-large/
A Kubernetes cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, with the worker nodes managed by the control plane nodes.
Kubernetes v1.23.x theoretically supports clusters of up to 5000 nodes, with:

  • no more than 110 Pods per node;
  • no more than 150000 Pods in the cluster in total;
  • no more than 300000 containers in the cluster in total.

These figures are simply the conclusions of the official practice.
Worker nodes are managed by one or more control plane nodes; how many worker nodes a control plane node can manage depends on the CPU, memory, disk I/O and disk space of the host it runs on, so good monitoring of the hosts and the related components is very important.
Other people's experience:
A control plane node with 1 core and 2 GB of RAM can manage roughly 5 worker nodes.
A control plane node with 32 cores and 120 GB of RAM can manage roughly 500 worker nodes.
These figures are for reference only.

Manually generating a token and certificate key for new nodes joining the cluster

Join a new master node to the cluster:

# 1. Generate a new token
[root@k8s-master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.124.100:9443 --token 8mbm4q.fisfbupt3zv5wwfb --discovery-token-ca-cert-hash sha256:1d3555f2c419ee78a560700130ce08c084c71ca4b8b3b48d159769b217923145 

# 2. Generate a certificate key
[root@k8s-master01 ~]# kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
183fabc2947677ab3f4ef1fd026b4906268cdc4987f3daa4ac96862f2e88c98f

# 3. Join the new master node to the cluster
[root@k8s-master02 ~]# kubeadm join 192.168.124.100:9443 --token 8mbm4q.fisfbupt3zv5wwfb --discovery-token-ca-cert-hash sha256:1d3555f2c419ee78a560700130ce08c084c71ca4b8b3b48d159769b217923145  --control-plane --certificate-key 183fabc2947677ab3f4ef1fd026b4906268cdc4987f3daa4ac96862f2e88c98f

Join a new worker node to the cluster:

# 1. Generate a new token and print the join command
[root@k8s-master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.124.100:9443 --token 8mbm4q.fisfbupt3zv5wwfb --discovery-token-ca-cert-hash sha256:1d3555f2c419ee78a560700130ce08c084c71ca4b8b3b48d159769b217923145 

# 2. Join the new worker node to the cluster
[root@k8s-node01 ~]# kubeadm join 192.168.124.100:9443 --token 8mbm4q.fisfbupt3zv5wwfb --discovery-token-ca-cert-hash sha256:1d3555f2c419ee78a560700130ce08c084c71ca4b8b3b48d159769b217923145 

