I. Basic roles of the components on the Kubernetes Master and Node
A Kubernetes cluster consists of one Master node and a number of Node (worker) nodes.
Master
The Master is the main (control-plane) node of Kubernetes.
The Master components can run on any node in the cluster, but for simplicity they are usually all started on one virtual machine, and user containers are not run on that machine.
All control commands for the cluster are sent to the Master components and executed on the Master node; by default kubectl does not work on the other Node nodes, because they are not given an admin kubeconfig.
The Master node consists mainly of four modules: etcd, the api server, the controller manager and the scheduler.
The main functions of these four components can be summarized as follows:
api server: exposes the Kubernetes API as a RESTful service. The other Master components implement their functions by calling the REST interfaces provided by the api server; for example, the controllers watch the state of every resource in real time through the api server.
etcd: a highly available key-value database used by Kubernetes to store all of the cluster's network configuration and the state of every resource object, in other words the state of the whole cluster. All data changes go through the api server. Two kinds of services in a Kubernetes system use etcd for coordination and configuration storage:
1) the flannel network plugin (other network plugins also use etcd to store their network configuration);
2) Kubernetes itself, including the state and metadata of all resource objects.
scheduler: watches for newly created Pods and uses its scheduling algorithm to pick the most suitable Node for each Pod. It first finds all Nodes that satisfy the Pod's requirements and then runs the scheduling logic over them. After a successful decision it binds the Pod to the target Node and writes that information into etcd. Once bound, the kubelet on that Node takes over the rest of the Pod's lifecycle. Seen as a black box, the scheduler's input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node, i.e. the decision to run this Pod on that Node. Kubernetes provides a default scheduling algorithm but also leaves the interface open, so users can plug in their own scheduler if needed.
controller manager: maintains the state of the cluster, e.g. failure detection, auto scaling and rolling updates. Each resource generally has a corresponding controller; these controllers watch the state of their resources in real time through the api server, and the controller manager is what runs and manages them. When a failure changes the state of a resource, the controller tries to drive the system from the "current state" back to the "desired state", so that every resource managed by a controller stays in its desired state. For example, when we create a Pod through the api server, the api server's job is done once the Pod has been created; keeping the Pod's state consistent with what we expect afterwards is the controller manager's responsibility.
The controller manager includes the kube-controller-manager and the cloud-controller-manager.
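On a kubeadm-built cluster (like the one deployed in part IV below), these control-plane components all run as static Pods in the kube-system namespace. A minimal sketch for inspecting them, assuming kubectl is configured with the cluster's admin kubeconfig:
# list the control-plane Pods running on the master
kubectl -n kube-system get pods -o wide | egrep 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'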
The architecture of the Kubernetes Master:
Node
Nodes run the Pods assigned to them by the Master; when a Node goes down, its Pods are automatically moved to other Nodes.
Every Node has the Node components installed: kubelet, kube-proxy and a container runtime.
kubelet watches the Pods assigned to its node and manages their lifecycle. It works closely with the Master, maintaining and managing all containers on the Node, and provides the basic functions of cluster management. In other words, a Node interacts with the Master components through the kubelet; you can think of the kubelet as the Master's agent on each Node. Essentially it is responsible for making the actual running state of the Pods match the desired state.
kube-proxy is the key component that implements Service communication and load balancing; it forwards requests sent to a Service to the backend Pods.
Container runtime: the environment in which the containers run. At the time this was written Kubernetes supported the Docker and rkt runtimes (this article uses Docker; newer versions have removed rkt and support other CRI runtimes).
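On each Node the same components can be checked directly. A short sketch, assuming Docker is the runtime as in the deployment below:
systemctl status kubelet                                    # the node agent
kubectl -n kube-system get pods -o wide | grep kube-proxy   # one kube-proxy Pod per node
docker ps                                                   # containers started by the runtime on this node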
The architecture of a Node:
User containers are normally not run on the Master node; the worker Nodes are where the workloads actually run.
The overall architecture:
The abstracted architecture:
Besides the core components, there are also some recommended add-ons:
- kube-dns provides DNS service for the whole cluster (replaced by CoreDNS in current versions)
- Ingress Controller provides an external entry point for Services
- Heapster provides resource monitoring
- Dashboard provides a GUI
- Federation provides clusters spanning availability zones
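A quick way to confirm the DNS add-on is serving the cluster; a sketch that assumes CoreDNS (which kubeadm installs by default) and that the busybox image's nslookup can reach the cluster DNS:
kubectl -n kube-system get deployment coredns
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup kubernetes.default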
II. How a request reaches a Pod-backed Service in a Kubernetes environment
Your request first passes through the firewall to the load balancer, from the load balancer to a NodePort, the NodePort forwards it to the Service, and the Service forwards it on to the backend Pods, which it selects by their labels; a minimal sketch of this path follows.
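A minimal sketch of this path using a NodePort Service; the deployment name demo-nginx and the <node-ip>/<nodeport> placeholders are hypothetical, and the Service selects the backend Pods through the app=demo-nginx label that kubectl create deployment adds automatically:
kubectl create deployment demo-nginx --image=nginx --replicas=2
kubectl expose deployment demo-nginx --port=80 --type=NodePort
kubectl get svc demo-nginx          # note the allocated NodePort, e.g. 80:3xxxx/TCP
curl http://<node-ip>:<nodeport>/   # this is the address an external load balancer would forward to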
III. The workflow of the kubeadm init command
The kubeadm init command bootstraps a Kubernetes control-plane node by performing the following steps:
- Runs a series of pre-flight checks to validate the system state before making any changes. Some checks only trigger warnings; others are considered errors and will cause kubeadm to exit unless the problem is corrected or the user specifies --ignore-preflight-errors=<list-of-errors>.
- Generates a self-signed CA to set up identities for each component in the cluster. The user can provide their own CA certificate and/or key by dropping them into the certificate directory configured via --cert-dir (defaults to /etc/kubernetes/pki). The API server certificate will have additional SAN entries for any --apiserver-cert-extra-sans arguments, lower-cased if necessary.
- Writes kubeconfig files into /etc/kubernetes/ for the kubelet, the controller manager and the scheduler to use when connecting to the API server, each with its own identity, plus an additional kubeconfig file named admin.conf for administrative use.
- Generates static Pod manifests for the API server, controller manager and scheduler. If an external etcd service is not provided, an additional static Pod manifest is generated for etcd.
  Static Pod manifests are written to /etc/kubernetes/manifests; the kubelet watches this directory and creates the Pods at startup. Once the control-plane Pods are up and running, the kubeadm init workflow continues.
- Applies labels and taints to the control-plane node so that no other workloads are scheduled on it.
- Generates the token that additional nodes can later use to register themselves with the control plane. As described in the kubeadm token documentation, the user can optionally provide the token via --token.
- Makes all the configuration required to allow nodes to join the cluster with the mechanisms described in the Bootstrap Tokens and TLS Bootstrap documents:
  - creates a ConfigMap providing all the information required for joining nodes, and sets up the related RBAC access rules;
  - lets Bootstrap Tokens access the CSR signing API;
  - configures auto-approval for new CSR requests.
  See kubeadm join for more information.
- Installs a DNS server (CoreDNS) and the kube-proxy add-on components via the API server. In Kubernetes 1.11 and later, CoreDNS is the default DNS server. Note that although the DNS server is deployed, it will not be scheduled until a CNI is installed.
  Warning: kube-dns usage with kubeadm is deprecated as of v1.18 and was removed in v1.21.
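After kubeadm init has finished, the artifacts described above can be inspected on the control-plane node; a short sketch:
ls /etc/kubernetes/pki          # CA and component certificates
ls /etc/kubernetes/*.conf       # admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf
ls /etc/kubernetes/manifests    # static Pod manifests for apiserver, controller-manager, scheduler, etcd
kubeadm token list              # bootstrap token(s) generated for joining nodes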
IV. Deploying k8s (with kubeadm)
1. Prepare the basic environment
Note:
Disable SELinux
Disable the firewall; typical commands are shown below.
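Typical commands for this step (a sketch; the first three lines assume a CentOS host with SELinux and firewalld, the last assumes an Ubuntu host with ufw):
# CentOS: switch SELinux to permissive now, disable it permanently, then stop firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
systemctl disable --now firewalld
# Ubuntu: disable ufw
ufw disable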
2. Install and configure keepalived
Install and configure keepalived on node 1:
root@k8s-ha1:~# cat install_keepalived_for_ha1.sh
#!/bin/bash
#
#******************************************************************************
#Author: zhanghui
#QQ: 19661891
#Date: 2021-03-31
#FileName: install_keepalived.sh
#URL: www.cnblogs.com/neteagles
#Description: install_keepalived for centos 7/8 & ubuntu 18.04/20.04
#Copyright (C): 2021 All rights reserved
#******************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KEEPALIVED_URL=https://keepalived.org/software/
KEEPALIVED_FILE=keepalived-2.2.2.tar.gz
KEEPALIVED_INSTALL_DIR=/apps/keepalived
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
NET_NAME=`ip a |awk -F"[: ]" '/^2/{print $3}'`
STATE=MASTER
PRIORITY=100
VIP=10.0.0.188
os(){
if grep -Eqi "CentOS" /etc/issue || grep -Eq "CentOS" /etc/*-release;then
rpm -q redhat-lsb-core &> /dev/null || { ${COLOR}"安裝lsb_release工具"${END};yum -y install redhat-lsb-core &> /dev/null; }
fi
OS_ID=`lsb_release -is`
OS_RELEASE_VERSION=`lsb_release -rs |awk -F'.' '{print $1}'`
}
check_file (){
cd ${SRC_DIR}
if [ ${OS_ID} == "CentOS" ] &> /dev/null;then
rpm -q wget &> /dev/null || yum -y install wget &> /dev/null
fi
if [ ! -e ${KEEPALIVED_FILE} ];then
${COLOR}"缺少${KEEPALIVED_FILE}文件"${END}
${COLOR}'開始下載KEEPALIVED源碼包'${END}
wget ${KEEPALIVED_URL}${KEEPALIVED_FILE} || { ${COLOR}"KEEPALIVED源碼包下載失敗"${END}; exit; }
else
${COLOR}"相關文件已准備好"${END}
fi
}
install_keepalived(){
${COLOR}"開始安裝KEEPALIVED"${END}
${COLOR}"開始安裝KEEPALIVED依賴包"${END}
if [[ ${OS_RELEASE_VERSION} == 8 ]] &> /dev/null;then
cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://mirrors.aliyun.com/centos/8/PowerTools/x86_64/os/
https://mirrors.huaweicloud.com/centos/8/PowerTools/x86_64/os/
https://mirrors.cloud.tencent.com/centos/8/PowerTools/x86_64/os/
https://mirrors.tuna.tsinghua.edu.cn/centos/8/PowerTools/x86_64/os/
http://mirrors.163.com/centos/8/PowerTools/x86_64/os/
http://mirrors.sohu.com/centos/8/PowerTools/x86_64/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
yum -y install make gcc ipvsadm autoconf automake openssl-devel libnl3-devel iptables-devel ipset-devel file-devel net-snmp-devel glib2-devel pcre2-devel libnftnl-devel libmnl-devel systemd-devel &> /dev/null
elif [[ ${OS_RELEASE_VERSION} == 7 ]] &> /dev/null;then
yum -y install make gcc libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel automake iproute &> /dev/null
elif [[ ${OS_RELEASE_VERSION} == 20 ]] &> /dev/null;then
apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev
else
apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf iptables-dev libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev &> /dev/null
fi
tar xf ${KEEPALIVED_FILE}
KEEPALIVED_DIR=`echo ${KEEPALIVED_FILE} | sed -nr 's/^(.*[0-9]).*/\1/p'`
cd ${KEEPALIVED_DIR}
./configure --prefix=${KEEPALIVED_INSTALL_DIR} --disable-fwmark
make -j $CPUS && make install
[ $? -eq 0 ] && $COLOR"KEEPALIVED編譯安裝成功"$END || { $COLOR"KEEPALIVED編譯安裝失敗,退出!"$END;exit; }
[ -d /etc/keepalived ] || mkdir -p /etc/keepalived &> /dev/null
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state ${STATE}
interface eth0
garp_master_delay 10
smtp_alert
virtual_router_id 51
priority ${PRIORITY}
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
${VIP} dev eth0 label eth0:1
}
}
EOF
cp ./keepalived/keepalived.service /lib/systemd/system/
echo "PATH=${KEEPALIVED_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/keepalived.sh
systemctl daemon-reload
systemctl enable --now keepalived &> /dev/null
systemctl is-active keepalived &> /dev/null || { ${COLOR}"KEEPALIVED 啟動失敗,退出!"${END} ; exit; }
${COLOR}"KEEPALIVED安裝完成"${END}
}
main(){
os
check_file
install_keepalived
}
main
root@k8s-ha1:/usr/local/src# bash install_keepalived_for_ha1.sh
[C:\~]$ ping 10.0.0.188
Pinging 10.0.0.188 with 32 bytes of data:
Reply from 10.0.0.188: bytes=32 time<1ms TTL=64
Reply from 10.0.0.188: bytes=32 time<1ms TTL=64
Reply from 10.0.0.188: bytes=32 time<1ms TTL=64
Reply from 10.0.0.188: bytes=32 time<1ms TTL=64
Ping statistics for 10.0.0.188:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
root@k8s-ha1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:b9:92:79 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.104/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.188/32 scope global eth0:0 #the VIP address can be seen here
valid_lft forever preferred_lft forever
Install and configure keepalived on node 2 (the same script, with STATE=BACKUP and PRIORITY=80):
root@k8s-ha2:~# cat install_keepalived_for_ha2.sh
#!/bin/bash
#
#******************************************************************************
#Author: zhanghui
#QQ: 19661891
#Date: 2021-03-31
#FileName: install_keepalived.sh
#URL: www.cnblogs.com/neteagles
#Description: install_keepalived for centos 7/8 & ubuntu 18.04/20.04
#Copyright (C): 2021 All rights reserved
#******************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KEEPALIVED_URL=https://keepalived.org/software/
KEEPALIVED_FILE=keepalived-2.2.2.tar.gz
KEEPALIVED_INSTALL_DIR=/apps/keepalived
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
NET_NAME=`ip a |awk -F"[: ]" '/^2/{print $3}'`
STATE=BACKUP
PRIORITY=80
VIP=10.0.0.188
os(){
if grep -Eqi "CentOS" /etc/issue || grep -Eq "CentOS" /etc/*-release;then
rpm -q redhat-lsb-core &> /dev/null || { ${COLOR}"安裝lsb_release工具"${END};yum -y install redhat-lsb-core &> /dev/null; }
fi
OS_ID=`lsb_release -is`
OS_RELEASE_VERSION=`lsb_release -rs |awk -F'.' '{print $1}'`
}
check_file (){
cd ${SRC_DIR}
if [ ${OS_ID} == "CentOS" ] &> /dev/null;then
rpm -q wget &> /dev/null || yum -y install wget &> /dev/null
fi
if [ ! -e ${KEEPALIVED_FILE} ];then
${COLOR}"缺少${KEEPALIVED_FILE}文件"${END}
${COLOR}'開始下載KEEPALIVED源碼包'${END}
wget ${KEEPALIVED_URL}${KEEPALIVED_FILE} || { ${COLOR}"KEEPALIVED源碼包下載失敗"${END}; exit; }
else
${COLOR}"相關文件已准備好"${END}
fi
}
install_keepalived(){
${COLOR}"開始安裝KEEPALIVED"${END}
${COLOR}"開始安裝KEEPALIVED依賴包"${END}
if [[ ${OS_RELEASE_VERSION} == 8 ]] &> /dev/null;then
cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://mirrors.aliyun.com/centos/8/PowerTools/x86_64/os/
https://mirrors.huaweicloud.com/centos/8/PowerTools/x86_64/os/
https://mirrors.cloud.tencent.com/centos/8/PowerTools/x86_64/os/
https://mirrors.tuna.tsinghua.edu.cn/centos/8/PowerTools/x86_64/os/
http://mirrors.163.com/centos/8/PowerTools/x86_64/os/
http://mirrors.sohu.com/centos/8/PowerTools/x86_64/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
yum -y install make gcc ipvsadm autoconf automake openssl-devel libnl3-devel iptables-devel ipset-devel file-devel net-snmp-devel glib2-devel pcre2-devel libnftnl-devel libmnl-devel systemd-devel &> /dev/null
elif [[ ${OS_RELEASE_VERSION} == 7 ]] &> /dev/null;then
yum -y install make gcc libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel automake iproute &> /dev/null
elif [[ ${OS_RELEASE_VERSION} == 20 ]] &> /dev/null;then
apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev
else
apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf iptables-dev libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev &> /dev/null
fi
tar xf ${KEEPALIVED_FILE}
KEEPALIVED_DIR=`echo ${KEEPALIVED_FILE} | sed -nr 's/^(.*[0-9]).*/\1/p'`
cd ${KEEPALIVED_DIR}
./configure --prefix=${KEEPALIVED_INSTALL_DIR} --disable-fwmark
make -j $CPUS && make install
[ $? -eq 0 ] && $COLOR"KEEPALIVED編譯安裝成功"$END || { $COLOR"KEEPALIVED編譯安裝失敗,退出!"$END;exit; }
[ -d /etc/keepalived ] || mkdir -p /etc/keepalived &> /dev/null
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state ${STATE}
interface eth0
garp_master_delay 10
smtp_alert
virtual_router_id 51
priority ${PRIORITY}
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
${VIP} dev eth0 label eth0:1
}
}
EOF
cp ./keepalived/keepalived.service /lib/systemd/system/
echo "PATH=${KEEPALIVED_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/keepalived.sh
systemctl daemon-reload
systemctl enable --now keepalived &> /dev/null
systemctl is-active keepalived &> /dev/null || { ${COLOR}"KEEPALIVED 啟動失敗,退出!"${END} ; exit; }
${COLOR}"KEEPALIVED安裝完成"${END}
}
main(){
os
check_file
install_keepalived
}
main
root@k8s-ha2:/usr/local/src# bash install_keepalived_for_ha2.sh
root@k8s-ha2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:1e:39 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.105/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed7:1e39/64 scope link
valid_lft forever preferred_lft forever
Verify keepalived high availability:
root@k8s-ha1:~# systemctl stop keepalived
root@k8s-ha2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:03:34:7d brd ff:ff:ff:ff:ff:ff
inet 10.0.0.105/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.188/32 scope global eth0:1 #the VIP has floated over to ha2
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe03:347d/64 scope link
valid_lft forever preferred_lft forever
root@k8s-ha1:~# systemctl start keepalived
root@k8s-ha1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:26:08:f4 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.104/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.188/32 scope global eth0:1 #the VIP floats back to ha1
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe26:8f4/64 scope link
valid_lft forever preferred_lft forever
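A simple way to watch the VIP move during this failover test (a sketch):
# on the standby node, watch for the VIP to appear or disappear
watch -n1 'ip a show dev eth0 | grep 10.0.0.188'
# from any other host, confirm the VIP keeps answering while keepalived is stopped and started
ping 10.0.0.188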
3. Install and configure haproxy
Install and configure haproxy on node 1:
root@k8s-ha1:~# cat install_haproxy.sh
#!/bin/bash
#
#******************************************************************************
#Author: zhanghui
#QQ: 19661891
#Date: 2021-04-03
#FileName: install_haproxy.sh
#URL: www.cnblogs.com/neteagles
#Description: install_haproxy for centos 7/8 & ubuntu 18.04/20.04
#Copyright (C): 2021 All rights reserved
#******************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
LUA_FILE=lua-5.4.3.tar.gz
HAPROXY_FILE=haproxy-2.2.12.tar.gz
HAPROXY_INSTALL_DIR=/apps/haproxy
STATS_AUTH_USER=admin
STATS_AUTH_PASSWORD=123456
VIP=10.0.0.188
MASTER1=10.0.0.101
MASTER2=10.0.0.102
MASTER3=10.0.0.103
os(){
if grep -Eqi "CentOS" /etc/issue || grep -Eq "CentOS" /etc/*-release;then
rpm -q redhat-lsb-core &> /dev/null || { ${COLOR}"安裝lsb_release工具"${END};yum -y install redhat-lsb-core &> /dev/null; }
fi
OS_ID=`lsb_release -is`
}
check_file (){
cd ${SRC_DIR}
${COLOR}'檢查HAPROXY相關源碼包'${END}
if [ ! -e ${LUA_FILE} ];then
${COLOR}"缺少${LUA_FILE}文件"${END}
exit
elif [ ! -e ${HAPROXY_FILE} ];then
${COLOR}"缺少${HAPROXY_FILE}文件"${END}
exit
else
${COLOR}"相關文件已准備好"${END}
fi
}
install_haproxy(){
${COLOR}"開始安裝HAPROXY"${END}
${COLOR}"開始安裝HAPROXY依賴包"${END}
if [ ${OS_ID} == "CentOS" ] &> /dev/null;then
yum -y install gcc make gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel libtermcap-devel ncurses-devel libevent-devel readline-devel &> /dev/null
else
apt update &> /dev/null;apt -y install gcc make openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev libreadline-dev libsystemd-dev &> /dev/null
fi
tar xf ${LUA_FILE}
LUA_DIR=`echo ${LUA_FILE} | sed -nr 's/^(.*[0-9]).*/\1/p'`
cd ${LUA_DIR}
make all test
cd ${SRC_DIR}
tar xf ${HAPROXY_FILE}
HAPROXY_DIR=`echo ${HAPROXY_FILE} | sed -nr 's/^(.*[0-9]).*/\1/p'`
cd ${HAPROXY_DIR}
make -j ${CPUS} ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 USE_LUA=1 LUA_INC=/usr/local/src/${LUA_DIR}/src/ LUA_LIB=/usr/local/src/${LUA_DIR}/src/ PREFIX=${HAPROXY_INSTALL_DIR}
make install PREFIX=${HAPROXY_INSTALL_DIR}
[ $? -eq 0 ] && $COLOR"HAPROXY編譯安裝成功"$END || { $COLOR"HAPROXY編譯安裝失敗,退出!"$END;exit; }
cat > /lib/systemd/system/haproxy.service <<-EOF
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid
ExecReload=/bin/kill -USR2 \$MAINPID
[Install]
WantedBy=multi-user.target
EOF
[ -L /usr/sbin/haproxy ] || ln -s ../../apps/haproxy/sbin/haproxy /usr/sbin/ &> /dev/null
[ -d /etc/haproxy ] || mkdir /etc/haproxy &> /dev/null
[ -d /var/lib/haproxy/ ] || mkdir -p /var/lib/haproxy/ &> /dev/null
cat > /etc/haproxy/haproxy.cfg <<-EOF
global
maxconn 100000
chroot /apps/haproxy
stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
uid 99
gid 99
daemon
#nbproc 4
#cpu-map 1 0
#cpu-map 2 1
#cpu-map 3 2
#cpu-map 4 3
pidfile /var/lib/haproxy/haproxy.pid
log 127.0.0.1 local3 info
defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms
listen stats
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri /haproxy-status
stats auth ${STATS_AUTH_USER}:${STATS_AUTH_PASSWORD}
listen kubernetes-6443
bind ${VIP}:6443
mode tcp
log global
server ${MASTER1} ${MASTER1}:6443 check inter 3000 fall 2 rise 5
server ${MASTER2} ${MASTER2}:6443 check inter 3000 fall 2 rise 5
server ${MASTER3} ${MASTER3}:6443 check inter 3000 fall 2 rise 5
EOF
cat >> /etc/sysctl.conf <<-EOF
net.ipv4.ip_nonlocal_bind = 1
EOF
sysctl -p &> /dev/null
echo "PATH=${HAPROXY_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/haproxy.sh
systemctl daemon-reload
systemctl enable --now haproxy &> /dev/null
systemctl is-active haproxy &> /dev/null || { ${COLOR}"HAPROXY 啟動失敗,退出!"${END} ; exit; }
${COLOR}"HAPROXY安裝完成"${END}
}
main(){
os
check_file
install_haproxy
}
main
root@k8s-ha1:/usr/local/src# bash install_haproxy.sh
root@k8s-ha1:~# ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 10.0.0.188:6443 0.0.0.0:*
LISTEN 0 128 0.0.0.0:9999 0.0.0.0:*
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
Install and configure haproxy on node 2:
root@k8s-ha2:/usr/local/src# bash install_haproxy.sh
root@k8s-ha2:~# ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 20480 0.0.0.0:9999 0.0.0.0:*
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:*
LISTEN 0 20480 10.0.0.188:6443 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
http://10.0.0.188:9999/haproxy-status
Username: admin  Password: 123456
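The stats page and the 6443 frontend can also be checked from the command line (a sketch; the apiserver backends will only report as UP after the masters are initialized in step 8):
curl -u admin:123456 http://10.0.0.188:9999/haproxy-status
ss -ntl | grep 6443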
4. Install harbor
root@k8s-harbor:/usr/local/src# cat install_docker_compose_harbor1.8.6.sh
#!/bin/bash
#
#****************************************************************************************
#Author: zhanghui
#QQ: 19661891
#Date: 2021-04-30
#FileName: install_docker__compose_harbor1.8.6.sh
#URL: www.cnblogs.com/neteagles
#Description: install_docker__compose_harbor1.8.6 for centos 7/8 & ubuntu 18.04/20.04
#Copyright (C): 2021 All rights reserved
#****************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
URL='https://download.docker.com/linux/static/stable/x86_64/'
DOCKER_FILE=docker-19.03.9.tgz
DOCKER_COMPOSE_FILE=docker-compose-Linux-x86_64-
DOCKER_COMPOSE_VERSION=1.27.4
HARBOR_FILE=harbor-offline-installer-v
HARBOR_VERSION=1.8.6
TAR=.tgz
HARBOR_INSTALL_DIR=/apps
IPADDR=`hostname -I|awk '{print $1}'`
HOSTNAME=harbor.neteagles.vip
HARBOR_ADMIN_PASSWORD=123456
os(){
if grep -Eqi "CentOS" /etc/issue || grep -Eq "CentOS" /etc/*-release;then
rpm -q redhat-lsb-core &> /dev/null || { ${COLOR}"安裝lsb_release工具"${END};yum -y install redhat-lsb-core &> /dev/null; }
fi
OS_ID=`lsb_release -is`
}
check_file (){
cd ${SRC_DIR}
rpm -q wget &> /dev/null || yum -y install wget &> /dev/null
if [ ! -e ${DOCKER_FILE} ];then
${COLOR}"缺少${DOCKER_FILE}文件"${END}
${COLOR}'開始下載DOCKER二進制源碼包'${END}
wget ${URL}${DOCKER_FILE} || { ${COLOR}"DOCKER二進制安裝包下載失敗"${END}; exit; }
elif [ ! -e ${DOCKER_COMPOSE_FILE}${DOCKER_COMPOSE_VERSION} ];then
${COLOR}"缺少${DOCKER_COMPOSE_FILE}${DOCKER_COMPOSE_VERSION}文件"${END}
exit
elif [ ! -e ${HARBOR_FILE}${HARBOR_VERSION}${TAR} ];then
${COLOR}"缺少${HARBOR_FILE}${HARBOR_VERSION}${TAR}文件"${END}
exit
else
${COLOR}"相關文件已准備好"${END}
fi
}
install_docker(){
tar xf ${DOCKER_FILE}
mv docker/* /usr/bin/
cat > /lib/systemd/system/docker.service <<-EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H unix://var/run/docker.sock
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}
EOF
echo 'alias rmi="docker images -qa|xargs docker rmi -f"' >> ~/.bashrc
echo 'alias rmc="docker ps -qa|xargs docker rm -f"' >> ~/.bashrc
systemctl daemon-reload
systemctl enable --now docker &> /dev/null
systemctl is-active docker &> /dev/null && ${COLOR}"Docker 服務啟動成功"${END} || { ${COLOR}"Docker 啟動失敗"${END};exit; }
docker version && ${COLOR}"Docker 安裝成功"${END} || ${COLOR}"Docker 安裝失敗"${END}
}
install_docker_compose(){
${COLOR}"開始安裝 Docker compose....."${END}
sleep 1
mv ${SRC_DIR}/${DOCKER_COMPOSE_FILE}${DOCKER_COMPOSE_VERSION} /usr/bin/docker-compose
chmod +x /usr/bin/docker-compose
docker-compose --version && ${COLOR}"Docker Compose 安裝完成"${END} || ${COLOR}"Docker compose 安裝失敗"${END}
}
install_harbor(){
${COLOR}"開始安裝 Harbor....."${END}
sleep 1
[ -d ${HARBOR_INSTALL_DIR} ] || mkdir ${HARBOR_INSTALL_DIR}
tar -xvf ${SRC_DIR}/${HARBOR_FILE}${HARBOR_VERSION}${TAR} -C ${HARBOR_INSTALL_DIR}/
sed -i.bak -e 's/^hostname: .*/hostname: '''${HOSTNAME}'''/' -e 's/^harbor_admin_password: .*/harbor_admin_password: '''${HARBOR_ADMIN_PASSWORD}'''/' -e 's/^https:/#https:/' -e 's/ port: 443/ #port: 443/' -e 's@ certificate: /your/certificate/path@ #certificate: /your/certificate/path@' -e 's@ private_key: /your/private/key/path@ #private_key: /your/private/key/path@' ${HARBOR_INSTALL_DIR}/harbor/harbor.yml
if [ ${OS_ID} == "CentOS" ] &> /dev/null;then
yum -y install python &> /dev/null || { ${COLOR}"安裝軟件包失敗,請檢查網絡配置"${END}; exit; }
else
apt -y install python &> /dev/null || { ${COLOR}"安裝軟件包失敗,請檢查網絡配置"${END}; exit; }
fi
${HARBOR_INSTALL_DIR}/harbor/install.sh && ${COLOR}"Harbor 安裝完成"${END} || ${COLOR}"Harbor 安裝失敗"${END}
cat > /lib/systemd/system/harbor.service <<-EOF
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor
[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml down
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable harbor &>/dev/null && ${COLOR}"Harbor已配置為開機自動啟動"${END}
}
set_swap_limit(){
${COLOR}'設置Docker的"WARNING: No swap limit support"警告'${END}
chmod u+w /etc/default/grub
sed -i.bak 's/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX=" net.ifnames=0 cgroup_enable=memory swapaccount=1"/' /etc/default/grub
chmod u-w /etc/default/grub ;update-grub
${COLOR}"10秒后,機器會自動重啟"${END}
sleep 10
reboot
}
main(){
os
check_file
dpkg -s docker-ce &> /dev/null && ${COLOR}"Docker已安裝"${END} || install_docker
docker-compose --version &> /dev/null && ${COLOR}"Docker Compose已安裝"${END} || install_docker_compose
install_harbor
set_swap_limit
}
main
root@k8s-harbor:/usr/local/src# bash install_docker_compose_harbor1.8.6.sh
On Windows, add the following line to the C:\Windows\System32\drivers\etc\hosts file:
10.0.0.106 harbor.neteagles.vip
http://harbor.neteagles.vip/
Username: admin  Password: 123456
5. Install docker
Install docker on master1, master2, master3, node1, node2 and node3:
root@k8s-master1:/usr/local/src# cat install_docker_binary.sh
#!/bin/bash
#
#****************************************************************************************
#Author: zhanghui
#QQ: 19661891
#Date: 2021-04-30
#FileName: install_docker_binary.sh
#URL: www.cnblogs.com/neteagles
#Description: install_docker_binary for centos 7/8 & ubuntu 18.04/20.04
#Copyright (C): 2021 All rights reserved
#****************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
URL='https://download.docker.com/linux/static/stable/x86_64/'
DOCKER_FILE=docker-19.03.9.tgz
check_file (){
cd ${SRC_DIR}
rpm -q wget &> /dev/null || yum -y install wget &> /dev/null
if [ ! -e ${DOCKER_FILE} ];then
${COLOR}"缺少${DOCKER_FILE}文件"${END}
${COLOR}'開始下載DOCKER二進制安裝包'${END}
wget ${URL}${DOCKER_FILE} || { ${COLOR}"DOCKER二進制安裝包下載失敗"${END}; exit; }
else
${COLOR}"相關文件已准備好"${END}
fi
}
install(){
tar xf ${DOCKER_FILE}
mv docker/* /usr/bin/
cat > /lib/systemd/system/docker.service <<-EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H unix://var/run/docker.sock
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl enable --now docker &> /dev/null
systemctl is-active docker &> /dev/null && ${COLOR}"Docker 服務啟動成功"${END} || { ${COLOR}"Docker 啟動失敗"${END};exit; }
docker version && ${COLOR}"Docker 安裝成功"${END} || ${COLOR}"Docker 安裝失敗"${END}
}
set_alias(){
echo 'alias rmi="docker images -qa|xargs docker rmi -f"' >> ~/.bashrc
echo 'alias rmc="docker ps -qa|xargs docker rm -f"' >> ~/.bashrc
}
set_swap_limit(){
${COLOR}'設置Docker的"WARNING: No swap limit support"警告'${END}
chmod u+w /etc/default/grub
sed -i.bak 's/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX=" net.ifnames=0 cgroup_enable=memory swapaccount=1"/' /etc/default/grub
chmod u-w /etc/default/grub ;update-grub
${COLOR}"10秒后,機器會自動重啟"${END}
sleep 10
reboot
}
main(){
check_file
install
set_alias
set_swap_limit
}
main
6. Install kubeadm, kubelet and kubectl on all master nodes
root@k8s-master1:~# cat install_kubeadm_for_master.sh
#!/bin/bash
#
#********************************************************************
#Author: zhanghui
#QQ: 19661891
#Date: 2021-05-07
#FileName: install_kubeadm_for_master.sh
#URL: www.cnblogs.com/neteagles
#Description: The test script
#Copyright (C): 2021 All rights reserved
#********************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
VERSION=1.20.5-00
os(){
if grep -Eqi "CentOS" /etc/issue || grep -Eq "CentOS" /etc/*-release;then
rpm -q redhat-lsb-core &> /dev/null || { ${COLOR}"安裝lsb_release工具"${END};yum -y install redhat-lsb-core &> /dev/null; }
fi
OS_ID=`lsb_release -is`
OS_RELEASE=`lsb_release -rs`
}
install_kubeadm(){
${COLOR}"開始安裝Kubeadm依賴包"${END}
apt update &> /dev/null && apt install -y apt-transport-https &> /dev/null
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - &> /dev/null
echo "deb https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
apt update &> /dev/null
${COLOR}"Kubeadm有以下版本"${END}
apt-cache madison kubeadm
${COLOR}"10秒后即將安裝:Kubeadm-"${VERSION}"版本......"${END}
${COLOR}"如果想安裝其它Kubeadm版本,請按Ctrl+c鍵退出,修改版本再執行"${END}
sleep 10
${COLOR}"開始安裝Kubeadm"${END}
apt -y install kubelet=${VERSION} kubeadm=${VERSION} kubectl=${VERSION} &> /dev/null
${COLOR}"Kubeadm安裝完成"${END}
}
images_download(){
${COLOR}"開始下載Kubeadm鏡像"${END}
KUBE_IMAGE_VERSION=${VERSION%-*}    # strip the apt revision (-00); the image tag is e.g. v1.20.5, not v1.20.5-00
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v${KUBE_IMAGE_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v${KUBE_IMAGE_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v${KUBE_IMAGE_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v${KUBE_IMAGE_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
${COLOR}"Kubeadm鏡像下載完成"${END}
}
set_swap(){
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
${COLOR}"${OS_ID} ${OS_RELEASE} 禁用swap成功!"${END}
}
set_kernel(){
cat > /etc/sysctl.conf <<-EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p &> /dev/null
${COLOR}"${OS_ID} ${OS_RELEASE} 優化內核參數成功!"${END}
}
set_limits(){
cat >> /etc/security/limits.conf <<-EOF
root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
EOF
${COLOR}"${OS_ID} ${OS_RELEASE} 優化資源限制參數成功!"${END}
}
main(){
os
install_kubeadm
images_download
set_swap
set_kernel
set_limits
}
main
#reboot the system after the installation finishes
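A quick sanity check after the reboot (a sketch):
kubeadm version
kubelet --version
kubectl version --client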
7. Install kubeadm and kubelet on all node (worker) nodes
#!/bin/bash
#
#********************************************************************
#Author: zhanghui
#QQ: 19661891
#Date: 2021-05-07
#FileName: install_kubeadm_for_node.sh
#URL: www.cnblogs.com/neteagles
#Description: The test script
#Copyright (C): 2021 All rights reserved
#********************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
VERSION=1.20.5-00
os(){
if grep -Eqi "CentOS" /etc/issue || grep -Eq "CentOS" /etc/*-release;then
rpm -q redhat-lsb-core &> /dev/null || { ${COLOR}"安裝lsb_release工具"${END};yum -y install redhat-lsb-core &> /dev/null; }
fi
OS_ID=`lsb_release -is`
OS_RELEASE=`lsb_release -rs`
}
install_kubeadm(){
${COLOR}"開始安裝Kubeadm依賴包"${END}
apt update &> /dev/null && apt install -y apt-transport-https &> /dev/null
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - &> /dev/null
echo "deb https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
apt update &> /dev/null
${COLOR}"Kubeadm有以下版本"${END}
apt-cache madison kubeadm
${COLOR}"10秒后即將安裝:Kubeadm-"${VERSION}"版本......"${END}
${COLOR}"如果想安裝其它Kubeadm版本,請按Ctrl+c鍵退出,修改版本再執行"${END}
sleep 10
${COLOR}"開始安裝Kubeadm"${END}
apt -y install kubelet=${VERSION} kubeadm=${VERSION} &> /dev/null
${COLOR}"Kubeadm安裝完成"${END}
}
set_swap(){
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
${COLOR}"${OS_ID} ${OS_RELEASE} 禁用swap成功!"${END}
}
set_kernel(){
cat > /etc/sysctl.conf <<-EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p &> /dev/null
${COLOR}"${OS_ID} ${OS_RELEASE} 優化內核參數成功!"${END}
}
set_limits(){
cat >> /etc/security/limits.conf <<-EOF
root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
EOF
${COLOR}"${OS_ID} ${OS_RELEASE} 優化資源限制參數成功!"${END}
}
main(){
os
install_kubeadm
set_swap
set_kernel
set_limits
}
main
#reboot the system after the installation finishes
8. Initialize the highly available master
root@k8s-master1:~# kubeadm init --apiserver-advertise-address=10.0.0.101 --control-plane-endpoint=10.0.0.188 --apiserver-bind-port=6443 --kubernetes-version=v1.20.5 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=neteagles.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.neteagles.local] and IPs [10.200.0.1 10.0.0.101 10.0.0.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1.example.local localhost] and IPs [10.0.0.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1.example.local localhost] and IPs [10.0.0.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 77.536995 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1.example.local as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9enh01.q35dsb6rdin4hbuj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
#the output reminds us that a pod network add-on still needs to be installed
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
#the command below joins additional masters, but the certificates must be uploaded (a certificate key generated) before it can be used
kubeadm join 10.0.0.188:6443 --token 9enh01.q35dsb6rdin4hbuj \
--discovery-token-ca-cert-hash sha256:e3bb6181f455ad018d2e13b4881ab35fc85332f4e020149aed7cac21023e2760 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
#the command below joins worker nodes
kubeadm join 10.0.0.188:6443 --token 9enh01.q35dsb6rdin4hbuj \
--discovery-token-ca-cert-hash sha256:e3bb6181f455ad018d2e13b4881ab35fc85332f4e020149aed7cac21023e2760
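The bootstrap token above expires after 24 hours by default. If it expires or the output is lost, a fresh worker join command can be regenerated on any master (a sketch); for an additional control-plane node, also regenerate the certificate key as in step 10 and append --control-plane --certificate-key <key>:
kubeadm token create --print-join-command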
9. Install the calico network add-on
root@k8s-master1:~# mkdir -p $HOME/.kube
root@k8s-master1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
#deploy the calico network add-on
root@k8s-master1:~# wget https://docs.projectcalico.org/v3.14/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
root@k8s-master1:~# vim calico.yaml
#change the following lines
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
#to
- name: CALICO_IPV4POOL_CIDR
value: "10.100.0.0/16"
:wq
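The same change can also be made non-interactively; a hedged one-liner that assumes the commented lines in calico.yaml match the block above exactly:
sed -i -e 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@' -e 's@#   value: "192.168.0.0/16"@  value: "10.100.0.0/16"@' calico.yaml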
root@k8s-master1:~# kubectl apply -f calico.yaml
configmap/calico-config created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
10. Generate a certificate key on the current master for adding new control-plane nodes
root@k8s-master1:~# kubeadm init phase upload-certs --upload-certs
I0607 19:41:57.980206 21674 version.go:254] remote version is much newer: v1.21.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
ac019c228f4b7dc7d2358092eef57dbab00ab480670657f43d1438b97ef7a9b7
11. Add the master nodes
#add master2
root@k8s-master2:~# kubeadm join 10.0.0.188:6443 --token 9enh01.q35dsb6rdin4hbuj \
--discovery-token-ca-cert-hash sha256:e3bb6181f455ad018d2e13b4881ab35fc85332f4e020149aed7cac21023e2760 \
--control-plane --certificate-key ac019c228f4b7dc7d2358092eef57dbab00ab480670657f43d1438b97ef7a9b7
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master2.example.local localhost] and IPs [10.0.0.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master2.example.local localhost] and IPs [10.0.0.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master2.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.neteagles.local] and IPs [10.200.0.1 10.0.0.102 10.0.0.188]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master2.example.local as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master2.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
#add master3
root@k8s-master3:~# kubeadm join 10.0.0.188:6443 --token 9enh01.q35dsb6rdin4hbuj \
--discovery-token-ca-cert-hash sha256:e3bb6181f455ad018d2e13b4881ab35fc85332f4e020149aed7cac21023e2760 \
--control-plane --certificate-key ac019c228f4b7dc7d2358092eef57dbab00ab480670657f43d1438b97ef7a9b7
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master3.example.local localhost] and IPs [10.0.0.103 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master3.example.local localhost] and IPs [10.0.0.103 127.0.0.1 ::1]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master3.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.neteagles.local] and IPs [10.200.0.1 10.0.0.103 10.0.0.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master3.example.local as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master3.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
12. Add the node (worker) nodes
#add node1
root@k8s-node1:~# kubeadm join 10.0.0.188:6443 --token 9enh01.q35dsb6rdin4hbuj \
--discovery-token-ca-cert-hash sha256:e3bb6181f455ad018d2e13b4881ab35fc85332f4e020149aed7cac21023e2760
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
#add node2
root@k8s-node2:~# kubeadm join 10.0.0.188:6443 --token 9enh01.q35dsb6rdin4hbuj \
--discovery-token-ca-cert-hash sha256:e3bb6181f455ad018d2e13b4881ab35fc85332f4e020149aed7cac21023e2760
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
#add node3
root@k8s-node3:~# kubeadm join 10.0.0.188:6443 --token 9enh01.q35dsb6rdin4hbuj \
--discovery-token-ca-cert-hash sha256:e3bb6181f455ad018d2e13b4881ab35fc85332f4e020149aed7cac21023e2760
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
13. Verify the current node status
root@k8s-master1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1.example.local Ready control-plane,master 37m v1.20.5
k8s-master2.example.local Ready control-plane,master 15m v1.20.5
k8s-master3.example.local Ready control-plane,master 14m v1.20.5
k8s-node1.example.local Ready <none> 12m v1.20.5
k8s-node2.example.local Ready <none> 11m v1.20.5
k8s-node3.example.local Ready <none> 108s v1.20.5
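Before moving on it is also worth confirming that the calico and CoreDNS Pods are all Running (a sketch):
kubectl get pods -A -o wide | egrep 'calico|coredns'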
14. Verify the current certificate (CSR) status
root@k8s-master1:~# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-4fg8m 18m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:9enh01 Approved,Issued
csr-h8xk4 14m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:9enh01 Approved,Issued
csr-jbznh 15m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:9enh01 Approved,Issued
csr-qdvr2 4m20s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:9enh01 Approved,Issued
csr-tzf4p 17m kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:9enh01 Approved,Issued
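All of the CSRs above were approved automatically by the bootstrap-token RBAC rules that kubeadm configures. Should a kubelet CSR ever remain Pending, it can be approved manually; a minimal sketch (csr-xxxxx is a placeholder name):
# show requests that are still pending
kubectl get csr | grep -i pending
# approve one explicitly
kubectl certificate approve csr-xxxxx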
15. Deploy the dashboard
# Run the following script on every master and node so that Docker trusts the Harbor registry
#!/bin/bash
#
#********************************************************************
#Author: zhanghui
#QQ: 19661891
#Date: 2021-05-10
#FileName: set_harbor_hostname_service.sh
#URL: www.cnblogs.com/neteagles
#Description: The test script
#Copyright (C): 2021 All rights reserved
#********************************************************************
HOSTNAME=harbor.neteagles.vip
echo "10.0.0.106 ${HOSTNAME}" >> /etc/hosts
sed -i.bak 's@ExecStart=.*@ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock --insecure-registry '${HOSTNAME}'@' /lib/systemd/system/docker.service
systemctl daemon-reload
systemctl restart docker
root@k8s-master1:~# docker pull kubernetesui/dashboard:v2.2.0
v2.2.0: Pulling from kubernetesui/dashboard
7cccffac5ec6: Pull complete
7f06704bb864: Pull complete
Digest: sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9
Status: Downloaded newer image for kubernetesui/dashboard:v2.2.0
docker.io/kubernetesui/dashboard:v2.2.0
Log in to the Harbor web UI at harbor.neteagles.vip and create a project named hf there first.
root@k8s-master1:~# docker login harbor.neteagles.vip
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
root@k8s-master1:~# docker tag kubernetesui/dashboard:v2.2.0 harbor.neteagles.vip/hf/dashboard:v2.2.0
root@k8s-master1:~# docker push harbor.neteagles.vip/hf/dashboard:v2.2.0
The push refers to repository [harbor.neteagles.vip/hf/dashboard]
8ba672b77b05: Pushed
77842fd4992b: Pushed
v2.2.0: digest: sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 size: 736
root@k8s-master1:~# docker pull kubernetesui/metrics-scraper:v1.0.6
v1.0.6: Pulling from kubernetesui/metrics-scraper
47a33a630fb7: Pull complete
62498b3018cb: Pull complete
Digest: sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7
Status: Downloaded newer image for kubernetesui/metrics-scraper:v1.0.6
docker.io/kubernetesui/metrics-scraper:v1.0.6
root@k8s-master1:~# docker tag kubernetesui/metrics-scraper:v1.0.6 harbor.neteagles.vip/hf/metrics-scraper:v1.0.6
root@k8s-master1:~# docker push harbor.neteagles.vip/hf/metrics-scraper:v1.0.6
The push refers to repository [harbor.neteagles.vip/hf/metrics-scraper]
a652c34ae13a: Pushed
6de384dd3099: Pushed
v1.0.6: digest: sha256:c09adb7f46e1a9b5b0bde058713c5cb47e9e7f647d38a37027cd94ef558f0612 size: 736
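The pull/tag/push steps above can be collected into one helper script in the same style as the other scripts in this document. This is only a convenience sketch and assumes the Harbor project is still harbor.neteagles.vip/hf:
#!/bin/bash
# mirror the dashboard images into the local Harbor registry
HARBOR=harbor.neteagles.vip/hf
for IMAGE in kubernetesui/dashboard:v2.2.0 kubernetesui/metrics-scraper:v1.0.6; do
    docker pull ${IMAGE}
    docker tag ${IMAGE} ${HARBOR}/${IMAGE#kubernetesui/}    # e.g. harbor.neteagles.vip/hf/dashboard:v2.2.0
    docker push ${HARBOR}/${IMAGE#kubernetesui/}
done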
root@k8s-master1:~# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
root@k8s-master1:~# vim recommended.yaml
# Change the following lines
image: kubernetesui/dashboard:v2.2.0
image: kubernetesui/metrics-scraper:v1.0.6
# to
image: harbor.neteagles.vip/hf/dashboard:v2.2.0
image: harbor.neteagles.vip/hf/metrics-scraper:v1.0.6
:wq
root@k8s-master1:~# vim recommended.yaml
 39 spec:
 40   type: NodePort                  # add this line
 41   ports:
 42     - port: 443
 43       targetPort: 8443
 44       nodePort: 30005             # add this line
 45   selector:
 46     k8s-app: kubernetes-dashboard
:wq
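Editing recommended.yaml by hand is what this walkthrough does. An equivalent alternative (a sketch, not used below) is to apply the stock manifest first and then patch the Service to a NodePort, reusing the same port 30005:
# optional: patch the Service instead of editing the manifest
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30005}]}}'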
root@k8s-master1:~# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
root@k8s-master1:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6dfcd885bf-grqw9 1/1 Running 2 63m
kube-system calico-node-9rc68 1/1 Running 1 46m
kube-system calico-node-9s5xq 1/1 Running 1 50m
kube-system calico-node-9x8sz 1/1 Running 1 47m
kube-system calico-node-nhh9v 1/1 Running 2 63m
kube-system calico-node-qn7gv 1/1 Running 1 49m
kube-system calico-node-r6ndl 1/1 Running 1 36m
kube-system coredns-54d67798b7-9dz9c 1/1 Running 2 72m
kube-system coredns-54d67798b7-d4rpn 1/1 Running 2 72m
kube-system etcd-k8s-master1.example.local 1/1 Running 2 72m
kube-system etcd-k8s-master2.example.local 1/1 Running 1 50m
kube-system etcd-k8s-master3.example.local 1/1 Running 1 48m
kube-system kube-apiserver-k8s-master1.example.local 1/1 Running 3 72m
kube-system kube-apiserver-k8s-master2.example.local 1/1 Running 1 50m
kube-system kube-apiserver-k8s-master3.example.local 1/1 Running 1 48m
kube-system kube-controller-manager-k8s-master1.example.local 1/1 Running 3 72m
kube-system kube-controller-manager-k8s-master2.example.local 1/1 Running 1 50m
kube-system kube-controller-manager-k8s-master3.example.local 1/1 Running 1 48m
kube-system kube-proxy-27pt2 1/1 Running 1 50m
kube-system kube-proxy-5tx7f 1/1 Running 1 47m
kube-system kube-proxy-frzf4 1/1 Running 1 46m
kube-system kube-proxy-qxzff 1/1 Running 2 49m
kube-system kube-proxy-stvzj 1/1 Running 1 36m
kube-system kube-proxy-w7pjr 1/1 Running 3 72m
kube-system kube-scheduler-k8s-master1.example.local 1/1 Running 4 72m
kube-system kube-scheduler-k8s-master2.example.local 1/1 Running 1 50m
kube-system kube-scheduler-k8s-master3.example.local 1/1 Running 1 48m
kubernetes-dashboard dashboard-metrics-scraper-ccf6d4787-vft8f 1/1 Running 0 36s
kubernetes-dashboard kubernetes-dashboard-76ffb47755-n4pvv 1/1 Running 0 36s
root@k8s-master1:~# ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:44257 0.0.0.0:*
LISTEN 0 128 127.0.0.1:10248 0.0.0.0:*
LISTEN 0 128 127.0.0.1:10249 0.0.0.0:*
LISTEN 0 128 127.0.0.1:9099 0.0.0.0:*
LISTEN 0 128 127.0.0.1:2379 0.0.0.0:*
LISTEN 0 128 10.0.0.101:2379 0.0.0.0:*
LISTEN 0 128 10.0.0.101:2380 0.0.0.0:*
LISTEN 0 128 127.0.0.1:2381 0.0.0.0:*
LISTEN 0 128 127.0.0.1:10257 0.0.0.0:*
LISTEN 0 128 127.0.0.1:10259 0.0.0.0:*
LISTEN 0 8 0.0.0.0:179 0.0.0.0:*
LISTEN 0 128 0.0.0.0:30005 0.0.0.0:*    # port 30005 is listening on every node
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 127.0.0.1:6010 0.0.0.0:*
LISTEN 0 128 *:10250 *:*
LISTEN 0 128 *:6443 *:*
LISTEN 0 128 *:10256 *:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::1]:6010 [::]:*
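Besides ss, the NodePort can also be confirmed through the API server itself:
# check the dashboard Service type/ports and where the pods were scheduled
kubectl -n kubernetes-dashboard get svc
kubectl -n kubernetes-dashboard get pod -o wide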
16. Access the dashboard
Open https://10.0.0.108:30005 in a browser (any node IP works, since the NodePort is open on every node); the dashboard must be accessed over HTTPS.
Obtain a login token:
root@k8s-master1:~# mkdir dashboard-2.2.0
root@k8s-master1:~# mv recommended.yaml dashboard-2.2.0/
root@k8s-master1:~# cd dashboard-2.2.0/
root@k8s-master1:~/dashboard-2.2.0# vim admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
:wq
root@k8s-master1:~/dashboard-2.2.0# kubectl apply -f admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
root@k8s-master1:~/dashboard-2.2.0# kubectl get secret -A |grep admin
kubernetes-dashboard admin-user-token-sfsnw kubernetes.io/service-account-token 3 38s
root@k8s-master1:~/dashboard-2.2.0# kubectl describe secret admin-user-token-sfsnw -n kubernetes-dashboard
Name: admin-user-token-sfsnw
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 54ff5070-778b-490e-83f8-4e27f63919f6
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkVtZE1yT2puOGtDbmxpRTk5ejF5YTJ1TDVNbTJJUjhtWUs1Wm9feWQyTDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXNmc253Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NGZmNTA3MC03NzhiLTQ5MGUtODNmOC00ZTI3ZjYzOTE5ZjYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.UteOOUZJ-K6dgkL-BQaxQUfTboLOlEKLqbWDVJwLD6-5WZo6tC4wq1Ovn-Zu5xwIYXr3lm1i9_OrS0rkWVsv5Iof1bXarMfLuD-PbdCAwX43zOTm91nKvPEI1MEbnVmcz5AlHLM6Q4t4SklHQ8v6iH5pGKESTcXRCBTm3WzuHBCf0sUzhLos4nQY10LvaKAs3BU31INY04IxvvV7wph8sX5bomWFjBjYgpYNkXhWx5BL8QvELvASRfzDSFIX79_IrjD6Jk4H2CQRb_ndaj34oUNtwu5V6254fbF0jntQGMEcqSWon-uKlBtgmJefxlD9ptYspvBXv1aSNg7WmRwg-Q
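The token can also be extracted non-interactively, which is convenient for scripting. A sketch that assumes the ServiceAccount is still named admin-user (on this 1.20 cluster it still carries an auto-created token secret):
# look up the secret bound to the admin-user ServiceAccount and decode its token
SECRET=$(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}')
kubectl -n kubernetes-dashboard get secret ${SECRET} -o jsonpath='{.data.token}' | base64 -d; echo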
Log in to the dashboard with a kubeconfig file
root@k8s-master1:~/dashboard-2.2.0# cd
root@k8s-master1:~# cp .kube/config /opt/kubeconfig
root@k8s-master1:~# vim /opt/kubeconfig
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkVtZE1yT2puOGtDbmxpRTk5ejF5YTJ1TDVNbTJJUjhtWUs1Wm9feWQyTDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXNmc253Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NGZmNTA3MC03NzhiLTQ5MGUtODNmOC00ZTI3ZjYzOTE5ZjYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.UteOOUZJ-K6dgkL-BQaxQUfTboLOlEKLqbWDVJwLD6-5WZo6tC4wq1Ovn-Zu5xwIYXr3lm1i9_OrS0rkWVsv5Iof1bXarMfLuD-PbdCAwX43zOTm91nKvPEI1MEbnVmcz5AlHLM6Q4t4SklHQ8v6iH5pGKESTcXRCBTm3WzuHBCf0sUzhLos4nQY10LvaKAs3BU31INY04IxvvV7wph8sX5bomWFjBjYgpYNkXhWx5BL8QvELvASRfzDSFIX79_IrjD6Jk4H2CQRb_ndaj34oUNtwu5V6254fbF0jntQGMEcqSWon-uKlBtgmJefxlD9ptYspvBXv1aSNg7WmRwg-Q
:wq
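Instead of pasting the long token into /opt/kubeconfig by hand, kubectl can write it into the user entry. A sketch assuming the user in the copied file is kubernetes-admin, the name kubeadm writes into admin.conf by default:
# inject the admin-user token into the copied kubeconfig
TOKEN=$(kubectl -n kubernetes-dashboard get secret admin-user-token-sfsnw -o jsonpath='{.data.token}' | base64 -d)
kubectl config --kubeconfig=/opt/kubeconfig set-credentials kubernetes-admin --token="${TOKEN}"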
17. Upgrade Kubernetes
# Install the target kubeadm version on every master
root@k8s-master1:~# apt-cache madison kubeadm
kubeadm | 1.21.1-00 | https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.21.0-00 | https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.7-00 | https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.6-00 | https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.5-00 | https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.4-00 | https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.2-00 | https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.1-00 | https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.0-00 | https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
...
root@k8s-master1:~# apt -y install kubeadm=1.20.7-00
root@k8s-master2:~# apt -y install kubeadm=1.20.7-00
root@k8s-master3:~# apt -y install kubeadm=1.20.7-00
root@k8s-master1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:38:16Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
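Because a routine apt upgrade could otherwise move these packages past the intended version, it is common (and purely optional here) to pin them once the target version is installed:
# optional: hold the Kubernetes packages at the installed version
apt-mark hold kubeadm kubelet kubectl
# release the hold again before the next deliberate upgrade
apt-mark unhold kubeadm kubelet kubectl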
# Review the upgrade plan
root@k8s-master1:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.5
[upgrade/versions] kubeadm version: v1.20.6
I0607 21:20:03.532587 130121 version.go:254] remote version is much newer: v1.21.1; falling back to: stable-1.20
[upgrade/versions] Latest stable version: v1.20.7
[upgrade/versions] Latest stable version: v1.20.7
[upgrade/versions] Latest version in the v1.20 series: v1.20.7
[upgrade/versions] Latest version in the v1.20 series: v1.20.7
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
kubelet 6 x v1.20.5 v1.20.7
Upgrade to the latest version in the v1.20 series:
COMPONENT CURRENT AVAILABLE
kube-apiserver v1.20.5 v1.20.7
kube-controller-manager v1.20.5 v1.20.7
kube-scheduler v1.20.5 v1.20.7
kube-proxy v1.20.5 v1.20.7
CoreDNS 1.7.0 1.7.0
etcd 3.4.13-0 3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.20.7
Note: Before you can perform this upgrade, you have to update kubeadm to v1.20.7.
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
# Pre-pull the images needed for the upgrade
root@k8s-master1:~# cat images-download.sh
#!/bin/bash
#
#********************************************************************
#Author: zhanghui
#QQ: 19661891
#Date: 2021-05-10
#FileName: images-download.sh
#URL: www.neteagles.cn
#Description: The test script
#Copyright (C): 2021 All rights reserved
#********************************************************************
VERSION=1.20.7
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v${VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v${VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v${VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v${VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
root@k8s-master1:~# bash images-download.sh
root@k8s-master2:~# bash images-download.sh
root@k8s-master3:~# bash images-download.sh
# Run the version upgrade
root@k8s-master1:~# kubeadm upgrade apply v1.20.7
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.20.7"
[upgrade/versions] Cluster version: v1.20.5
[upgrade/versions] kubeadm version: v1.20.7
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.20.7"...
Static pod: kube-apiserver-k8s-master1.example.local hash: ad26a328138d748832b0a43ba8495500
Static pod: kube-controller-manager-k8s-master1.example.local hash: 78b06d242e63b887f542659bb1eec04c
Static pod: kube-scheduler-k8s-master1.example.local hash: 90ef142019e6b9a233debe4dccdac1db
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master1.example.local hash: b491aa302a9b4293e984283616ebb67c
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests339310465"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-06-07-21-29-18/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master1.example.local hash: ad26a328138d748832b0a43ba8495500
(the same line repeats while kubeadm waits for the kube-apiserver static pod hash to change)
Static pod: kube-apiserver-k8s-master1.example.local hash: 4c34c20ce20afa32efc5776d0a28c5f8
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-06-07-21-29-18/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master1.example.local hash: 78b06d242e63b887f542659bb1eec04c
(the same line repeats while kubeadm waits for the kube-controller-manager static pod hash to change)
Static pod: kube-controller-manager-k8s-master1.example.local hash: 6e70903f49a8670803d28b973f6bed84
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-06-07-21-29-18/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master1.example.local hash: 90ef142019e6b9a233debe4dccdac1db
(the same line repeats while kubeadm waits for the kube-scheduler static pod hash to change)
Static pod: kube-scheduler-k8s-master1.example.local hash: 4f9a80c2c87c34612bce1a27cf2963ec
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.7". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@k8s-master2:~# kubeadm upgrade apply v1.20.7
root@k8s-master3:~# kubeadm upgrade apply v1.20.7
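Running kubeadm upgrade apply on all three masters works, as shown above. The upstream procedure instead runs kubeadm upgrade node on the remaining control-plane nodes, because the first apply has already uploaded the new cluster configuration; a hedged alternative for master2 and master3:
# alternative on k8s-master2 / k8s-master3 after the first 'upgrade apply'
kubeadm upgrade node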
root@k8s-master1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1.example.local Ready control-plane,master 125m v1.20.5
k8s-master2.example.local Ready control-plane,master 103m v1.20.5
k8s-master3.example.local Ready control-plane,master 102m v1.20.5
k8s-node1.example.local Ready <none> 100m v1.20.5
k8s-node2.example.local Ready <none> 99m v1.20.5
k8s-node3.example.local Ready <none> 89m v1.20.5
# Upgrade kubelet on the masters
root@k8s-master1:~# apt -y install kubelet=1.20.7-00
root@k8s-master2:~# apt -y install kubelet=1.20.7-00
root@k8s-master3:~# apt -y install kubelet=1.20.7-00
root@k8s-master1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1.example.local Ready control-plane,master 130m v1.20.7
k8s-master2.example.local Ready control-plane,master 108m v1.20.7
k8s-master3.example.local Ready control-plane,master 107m v1.20.7
k8s-node1.example.local Ready <none> 105m v1.20.5
k8s-node2.example.local Ready <none> 104m v1.20.5
k8s-node3.example.local Ready <none> 94m v1.20.5
# Upgrade kubectl on the masters
root@k8s-master1:~# apt -y install kubectl=1.20.7-00
root@k8s-master2:~# apt -y install kubectl=1.20.7-00
root@k8s-master3:~# apt -y install kubectl=1.20.7-00
root@k8s-master1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:32:49Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
# Upgrade the k8s worker node packages
root@k8s-node1:~# apt -y install kubeadm=1.20.7-00
root@k8s-node2:~# apt -y install kubeadm=1.20.7-00
root@k8s-node3:~# apt -y install kubeadm=1.20.7-00
root@k8s-node1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:38:16Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-node1:~# apt -y install kubelet=1.20.7-00
root@k8s-node2:~# apt -y install kubelet=1.20.7-00
root@k8s-node3:~# apt -y install kubelet=1.20.7-00
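Strictly speaking, the upstream procedure drains each worker before upgrading its kubelet and uncordons it afterwards; skipping that is fine for an idle lab cluster like this one. A per-node sketch, using k8s-node1 as the example:
# from a master: evict workloads from the node
kubectl drain k8s-node1.example.local --ignore-daemonsets
# on the worker itself: upgrade the packages and restart the kubelet
apt -y install kubeadm=1.20.7-00 kubelet=1.20.7-00
systemctl daemon-reload && systemctl restart kubelet
# back on the master: return the node to service
kubectl uncordon k8s-node1.example.local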
root@k8s-master1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1.example.local Ready control-plane,master 135m v1.20.7
k8s-master2.example.local Ready control-plane,master 113m v1.20.7
k8s-master3.example.local Ready control-plane,master 112m v1.20.7
k8s-node1.example.local Ready <none> 110m v1.20.7
k8s-node2.example.local Ready <none> 109m v1.20.7
k8s-node3.example.local Ready <none> 99m v1.20.7