Kubernetes Architecture
Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. It has a large and rapidly growing ecosystem, and Kubernetes services, support, and tooling are widely available.
1. Introduction
Kubernetes (K8S for short) is a leading distributed solution built on container technology. It is an open-source container cluster management system from Google whose design was inspired by Borg, Google's internal container management system, and it inherits more than a decade of Google's experience running container clusters. It provides containerized applications with a complete set of capabilities, including deployment, resource scheduling, service discovery, and dynamic scaling, which greatly simplifies the management of large container clusters.
Kubernetes is a complete platform for supporting distributed systems. It offers comprehensive cluster management capabilities: multi-level security and admission control, multi-tenancy support, transparent service registration and discovery, a built-in intelligent load balancer, powerful fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduler, and fine-grained resource quota management.
For cluster management, Kubernetes divides the machines in a cluster into a Master node and a group of worker Nodes. The Master runs a set of cluster-management processes: kube-apiserver, kube-controller-manager, and kube-scheduler. These processes provide resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction for the whole cluster, all fully automatically. Nodes are the worker machines that run the actual applications; the smallest unit of execution Kubernetes manages on a Node is the Pod. Each Node runs the kubelet and kube-proxy service processes, which create, start, monitor, restart, and destroy Pods and implement a software-mode load balancer.
A Kubernetes cluster solves the two classic pain points of traditional IT systems: scaling services out and upgrading them. If software is not particularly complex and does not need to handle much peak traffic, deploying a backend is as simple as installing a few dependencies on a virtual machine, compiling the project, and running it. But as software grows more complex, a complete backend is no longer a single monolithic service but many services with different responsibilities; the intricate topology between services, together with performance demands a single machine cannot meet, makes deployment and operations very complicated, so deploying and operating large clusters has become a pressing need.
The rise of Kubernetes has not only dominated the container orchestration market, it has changed how operations are done: it blurs the boundary between development and operations while making the DevOps role clearer. Any software engineer can use Kubernetes to define service topology, the number of online nodes, and resource usage, and can quickly perform what used to be complex operations such as horizontal scaling and blue-green deployments.
2. Architecture
Kubernetes follows a very traditional client-server architecture. Clients communicate with a Kubernetes cluster through its RESTful interface or with kubectl; in practice there is little difference between the two, since kubectl is just a wrapper around the RESTful API that Kubernetes exposes. Every Kubernetes cluster consists of a set of Master nodes and a series of Worker nodes, where the Master nodes store the cluster state and allocate and schedule resources for Kubernetes objects.
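As a quick illustration of that equivalence, the same query can go through kubectl or straight to the REST API. This is only a hedged sketch: the VIP address and certificate file names match the deployment built later in this article and are otherwise assumptions.
# Via kubectl
kubectl --kubeconfig=admin.kubeconfig get pods -n kube-system
# The equivalent raw RESTful call against the API server
curl --cacert ca.pem --cert admin.pem --key admin-key.pem \
  https://172.16.0.66:6443/api/v1/namespaces/kube-system/pods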
Master
The Master receives client requests, arranges container execution, and runs the control loops that migrate the cluster state toward the target state. It is made up of three components:
- API Server: handles requests from users. Its main job is to expose the RESTful interface, serving both read requests that inspect cluster state and write requests that change it; it is also the only component that communicates with the etcd cluster.
- Controller Manager: runs a series of controller processes that continuously reconcile the objects in the cluster against the state the user desires; when the state of a service changes, a controller notices the change and starts migrating it toward the target state.
- Scheduler: selects the Worker node on which each Pod in Kubernetes should be deployed. It picks the node that best satisfies the Pod's requirements and runs every time a Pod needs to be scheduled.
Node
The Node side is comparatively simple and consists of two main parts, kubelet and kube-proxy. kubelet is the primary service on a node: it periodically receives new or modified Pod specifications from the API Server, ensures that the Pods and their containers on the node run properly and that the node converges toward the target state, and reports the host's health back to the Master. kube-proxy manages the host's subnet and exposes services to the outside world; it works by forwarding requests across multiple isolated networks to the correct Pod or container.
Kubernetes architecture diagram
In this system architecture diagram, the services are divided into those that run on worker nodes and those that make up the cluster-level control plane.
Kubernetes is composed of the following core components:
- etcd: stores the state of the entire cluster
- apiserver: the single entry point for resource operations; provides authentication, authorization, access control, API registration, and discovery
- controller manager: maintains the cluster state, handling fault detection, auto scaling, rolling updates, and so on
- scheduler: schedules resources, placing Pods onto the appropriate machines according to the configured scheduling policies
- kubelet: maintains the container lifecycle and also manages volumes (CVI) and networking (CNI)
- Container runtime: manages images and actually runs Pods and containers (CRI)
- kube-proxy: provides in-cluster service discovery and load balancing for Services
Besides the core components, there are some recommended add-ons:
- kube-dns: provides DNS services for the whole cluster
- Ingress Controller: provides an external entry point for services
- Heapster: provides resource monitoring
- Dashboard: provides a GUI
- Federation: provides clusters spanning availability zones
- Fluentd-elasticsearch: provides cluster log collection, storage, and querying
3. Installation and Deployment
There are two ways to install Kubernetes. The first is the binary method: customizable, but complex and error-prone to deploy. The second is the kubeadm tool: simple to deploy, but not customizable.
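For contrast, the kubeadm path boils down to a few commands. This is only a sketch and is not used in the rest of this article; the pod CIDR and the join parameters are placeholders:
# On the first master
kubeadm init --pod-network-cidr=10.244.0.0/16
# Configure kubectl for the current user
mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config
# On every worker, using the token printed by kubeadm init
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>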
Environment initialization
Before starting, every machine used by the cluster needs to be initialized.
System environment
Software | Version |
---|---|
CentOS | CentOS Linux release 7.7.1908 (Core) |
Docker | 19.03.12 |
kubernetes | v1.18.8 |
etcd | 3.3.24 |
flannel | v0.11.0 |
cfssl | |
kernel-lt | 4.4.233 |
kernel-lt-devel | 4.4.233 |
Software plan
Host | Software |
---|---|
kubernetes-master-01 | kube-apiserver,kube-controller-manager,kube-scheduler,etcd,docker |
kubernetes-master-02 | kube-apiserver,kube-controller-manager,kube-scheduler,etcd,docker |
kubernetes-node-01 | kubelet,kube-proxy,etcd,docker |
kubernetes-node-02 | kubelet,kube-proxy,docker |
kubernetes-lb | harbor,docker |
Synchronize time
yum install -y ntpdate && ntpdate ntp.aliyun.com
crontab -e
*/5 * * * * ntpdate ntp.aliyun.com &> /dev/null
Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Disable the swap partition
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
Set hostnames
# Master 01 node
echo "kubernetes-master-01" > /etc/hostname
# Master 02 node
echo "kubernetes-master-02" > /etc/hostname
# Master 03 node
echo "kubernetes-master-03" > /etc/hostname
# Node 01 node
echo "kubernetes-node-01" > /etc/hostname
# Node 02 node
echo "kubernetes-node-02" > /etc/hostname
# Load balancer node
echo "kubernetes-master-vip" > /etc/hostname
Cluster plan
Hostname | Spec | IP | Internal IP |
---|---|---|---|
kubernetes-master-01 | 2C2G | 172.16.0.20 | 172.16.1.20 |
kubernetes-master-02 | 2C2G | 172.16.0.21 | 172.16.1.21 |
kubernetes-node-01 | 2C2G | 172.16.0.22 | 172.16.1.22 |
kubernetes-node-02 | 2C2G | 172.16.0.23 | 172.16.1.23 |
kubernetes-lb | 2C2G | 172.16.0.24 | 172.16.1.24 |
Configure hosts resolution
cat >> /etc/hosts <<EOF
172.16.0.20 kubernetes-master-01
172.16.0.21 kubernetes-master-02
172.16.0.22 kubernetes-node-01
172.16.0.23 kubernetes-node-02
172.16.0.24 kubernetes-lb
EOF
Passwordless SSH between cluster nodes
ssh-keygen -t rsa
for i in kubernetes-master-01 kubernetes-master-02 kubernetes-node-01 kubernetes-node-02 kubernetes-lb ; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i ; done
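Optionally verify that key-based login works from the control host before moving on (a simple sanity check using the hostnames planned above):
for i in kubernetes-master-01 kubernetes-master-02 kubernetes-node-01 kubernetes-node-02 kubernetes-lb; do
  ssh root@$i hostname
done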
Install Docker on all nodes
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl start docker
Configure the Docker registry mirror and insecure registry
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://jv632p22.mirror.aliyuncs.com"],
"insecure-registries": ["172.16.0.24:180"]
}
EOF
systemctl restart docker
Binary installation
Binary installation makes the cluster easier to customize; individual components can be tuned according to their load.
Certificates
Kubernetes consists of many components that communicate with each other over HTTP/gRPC to cooperatively deploy and manage the applications in the cluster. The master node in particular controls the entire cluster, so its security is paramount. The most secure and most widely used mechanism available today is digital certificates, and that is exactly the authentication method Kubernetes uses.
Install the cfssl certificate tools
Here we use cfssl to generate the certificates. cfssl is an open-source certificate management toolkit written in Go; writing the CA, usages, and validity periods into JSON files up front makes certificate generation more efficient and automatable. The companion tool cfssljson takes the JSON output of cfssl and writes the certificate, key, CSR, and bundle to files.
- Download
# Download
cd /usr/local/bin
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 cfssljson
mv cfssl_linux-amd64 cfssl
# Make them executable
chmod +x cfssl*
Create the root certificate on master-vip
Looking at the architecture as a whole, the most critical parts of the cluster environment are etcd and the API server.
The root certificate is the foundation of trust between the CA and its users: a user's digital certificate is only valid if it chains to a trusted root certificate.
Technically, a certificate contains three parts: the user's information, the user's public key, and the certificate signature.
The CA is responsible for approving, issuing, archiving, and revoking digital certificates. A certificate issued by a CA carries the CA's digital signature, so no one except the CA itself can alter it without detection.
- Create the JSON config for certificate requests
mkdir -p /usr/local/bin/cert/ca && cd /usr/local/bin/cert/ca
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
EOF
- default: the default policy; here it sets the default certificate validity to one year
- profiles: usage-scenario profiles; here there is only kubernetes, but several profiles with different expiry times and usages can be defined, and one profile is selected later when signing a certificate
- signing: the certificate can be used to sign other certificates (this produces the ca.pem certificate)
- server auth: clients may use this CA to verify certificates presented by servers
- client auth: servers may use this CA to verify certificates presented by clients
- Create the root CA certificate signing request (CSR) file
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names":[{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai"
}]
}
EOF
- C: country
- ST: state or province
- L: city
- O: organization
- OU: organizational unit
- Generate the certificate
[root@kubernetes-master-vip ca]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/08/28 23:51:50 [INFO] generating a new CA key and certificate from CSR
2020/08/28 23:51:50 [INFO] generate received request
2020/08/28 23:51:50 [INFO] received CSR
2020/08/28 23:51:50 [INFO] generating key: rsa-2048
2020/08/28 23:51:50 [INFO] encoded CSR
2020/08/28 23:51:50 [INFO] signed certificate with serial number 66427391707536599498414068348802775591392574059
[root@kubernetes-master-vip ca]# ll
total 20
-rw-r--r-- 1 root root 282 Aug 28 23:41 ca-config.json
-rw-r--r-- 1 root root 1013 Aug 28 23:51 ca.csr
-rw-r--r-- 1 root root 196 Aug 28 23:41 ca-csr.json
-rw------- 1 root root 1675 Aug 28 23:51 ca-key.pem
-rw-r--r-- 1 root root 1334 Aug 28 23:51 ca.pem
- gencert: generate a new key and signed certificate
- -initca: initialize a new CA certificate
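To sanity-check the result, the generated CA can be inspected with openssl (assuming openssl is installed); the subject should match the CSR above:
openssl x509 -in ca.pem -noout -subject -dates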
Deploy the Etcd cluster
Etcd is a Raft-based distributed key-value store developed by the CoreOS team. It is commonly used for service discovery, shared configuration, and concurrency control (leader election, distributed locks, and so on). Kubernetes uses Etcd to store cluster state and data.
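As background, etcd is a plain key-value store at heart. Once the cluster below is up, a write and a read look like this (a sketch using etcdctl v3 and the certificates generated in this section; the key name is illustrative):
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/server.pem \
  --key=/etc/etcd/ssl/server-key.pem \
  --endpoints="https://172.16.0.20:2379" \
  put /demo/greeting "hello"
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/server.pem \
  --key=/etc/etcd/ssl/server-key.pem \
  --endpoints="https://172.16.0.20:2379" \
  get /demo/greeting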
Etcd node plan
Etcd name | IP |
---|---|
etcd-01 | 172.16.0.20 |
etcd-02 | 172.16.0.21 |
etcd-03 | 172.16.0.22 |
Create the Etcd certificate
The IPs in the hosts field are the internal cluster-communication IPs of all etcd nodes; list one IP for every etcd node you have.
mkdir -p /root/cert/etcd && cd /root/cert/etcd
cat > server-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"172.16.0.20",
"172.16.0.21",
"172.16.0.22"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai"
}
]
}
EOF
Generate the certificate
[root@kubernetes-master-vip etcd]# cfssl gencert -ca=/usr/local/bin/cert/ca/ca.pem -ca-key=/usr/local/bin/cert/ca/ca-key.pem -config=/usr/local/bin/cert/ca/ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2020/08/29 00:02:20 [INFO] generate received request
2020/08/29 00:02:20 [INFO] received CSR
2020/08/29 00:02:20 [INFO] generating key: rsa-2048
2020/08/29 00:02:20 [INFO] encoded CSR
2020/08/29 00:02:20 [INFO] signed certificate with serial number 71348009702526539124716993806163559962125770315
[root@kubernetes-master-vip etcd]# ll
total 16
-rw-r--r-- 1 root root 1074 Aug 29 00:02 server.csr
-rw-r--r-- 1 root root 352 Aug 28 23:59 server-csr.json
-rw------- 1 root root 1675 Aug 29 00:02 server-key.pem
-rw-r--r-- 1 root root 1460 Aug 29 00:02 server.pem
- gencert: generate a new key and signed certificate
- -initca: initialize a new CA
- -ca: the CA certificate to sign with
- -ca-key: the CA private key file
- -config: the JSON file with the certificate-request configuration
- -profile: selects a profile from the config; the certificate is generated according to that profile section
Distribute the certificates to the etcd servers
for ip in kubernetes-master-01 kubernetes-master-02 kubernetes-node-01
do
ssh root@${ip} "mkdir -p /etc/etcd/ssl"
scp /usr/local/bin/cert/ca/ca*.pem root@${ip}:/etc/etcd/ssl
scp ./server*.pem root@${ip}:/etc/etcd/ssl
done
Deploy etcd
tar xf etcd-v3.3.5-linux-amd64.tar.gz
for i in kubernetes-master-02 kubernetes-master-01 kubernetes-node-01
do
scp ./etcd-v3.3.5-linux-amd64/etcd* root@$i:/usr/local/bin/
done
[root@kubernetes-master-01 etcd-v3.3.5-linux-amd64]# etcd --version
etcd Version: 3.3.5
Git SHA: 70c872620
Go Version: go1.9.6
Go OS/Arch: linux/amd64
Manage Etcd with systemd
mkdir -p /etc/kubernetes/conf/etcd
ETCD_NAME=`hostname`
INTERNAL_IP=`hostname -i`
INITIAL_CLUSTER=kubernetes-master-01=https://172.16.0.20:2380,kubernetes-master-02=https://172.16.0.21:2380,kubernetes-node-01=https://172.16.0.22:2380
cat << EOF | sudo tee /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/ssl/server.pem \\
--key-file=/etc/etcd/ssl/server-key.pem \\
--peer-cert-file=/etc/etcd/ssl/server.pem \\
--peer-key-file=/etc/etcd/ssl/server-key.pem \\
--trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster \\
--initial-cluster ${INITIAL_CLUSTER} \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Configuration explained
Option | Description |
---|---|
name | node name |
data-dir | directory where the node stores its data |
listen-peer-urls | address used to communicate with the other cluster members |
listen-client-urls | local listening address used to serve clients |
initial-advertise-peer-urls | this node's peer URLs, advertised to the rest of the cluster |
advertise-client-urls | this node's client URLs, advertised to the rest of the cluster |
initial-cluster | all member nodes of the cluster |
initial-cluster-token | the cluster token; must be identical across the whole cluster |
initial-cluster-state | initial cluster state, defaults to new |
--cert-file | path of the TLS certificate for client-server communication |
--key-file | path of the TLS key for client-server communication |
--peer-cert-file | path of the TLS certificate for peer communication |
--peer-key-file | path of the TLS key for peer communication |
--trusted-ca-file | CA certificate that signed the client certificates, used to verify them |
--peer-trusted-ca-file | CA certificate that signed the peer certificates |
Test the Etcd cluster
# Start etcd
systemctl start etcd.service
[root@kubernetes-node-01 ~]# netstat -ltnp
tcp 0 0 172.16.1.22:2379 0.0.0.0:* LISTEN 11508/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 11508/etcd
tcp 0 0 172.16.1.22:2380 0.0.0.0:* LISTEN 11508/etcd
ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/server.pem \
--key=/etc/etcd/ssl/server-key.pem \
--endpoints="https://172.16.0.20:2379,https://172.16.0.21:2379,https://172.16.0.22:2379" \
endpoint status --write-out='table'
ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/server.pem \
--key=/etc/etcd/ssl/server-key.pem \
--endpoints="https://172.16.0.20:2379,https://172.16.0.21:2379,https://172.16.0.22:2379" \
member list --write-out='table'
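A quick health probe of all members uses the same flags:
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/server.pem \
  --key=/etc/etcd/ssl/server-key.pem \
  --endpoints="https://172.16.0.20:2379,https://172.16.0.21:2379,https://172.16.0.22:2379" \
  endpoint health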
Create the cluster certificates
The Master runs the most important parts of the cluster; it has the most components and the most complex deployment.
Master node plan
Hostname (role) | IP | Public IP |
---|---|---|
Kubernetes-master-01 | 172.16.1.20 | 10.0.0.20 |
Kubernetes-master-02 | 172.16.1.21 | 10.0.0.21 |
Issue the kube-apiserver certificate
- Create the kube-apiserver certificate signing configuration
mkdir /root/cert/kube && cd /root/cert/kube
cat > server-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"172.16.0.20",
"172.16.0.21",
"172.16.0.22",
"172.16.0.23",
"172.16.0.24",
"172.16.0.66",
"10.96.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
hosts: the localhost address, the IPs of the master deployment nodes, the IPs of the etcd nodes, the VIP used by the load balancer (172.16.0.66), the first valid address of the service IP range (10.96.0.1), and the default in-cluster service names of Kubernetes.
- Generate the certificate
[root@kubernetes-master-vip kube]# cfssl gencert -ca=/usr/local/bin/cert/ca/ca.pem -ca-key=/usr/local/bin/cert/ca/ca-key.pem -config=/usr/local/bin/cert/ca/ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2020/08/29 12:29:41 [INFO] generate received request
2020/08/29 12:29:41 [INFO] received CSR
2020/08/29 12:29:41 [INFO] generating key: rsa-2048
2020/08/29 12:29:41 [INFO] encoded CSR
2020/08/29 12:29:41 [INFO] signed certificate with serial number 701177072439793091180552568331885323625122463841
Issue the kube-controller-manager certificate
- Create the kube-controller-manager certificate signing configuration
cat > kube-controller-manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"hosts": [
"127.0.0.1",
"172.16.0.20",
"172.16.0.21",
"172.16.0.22",
"172.16.0.23",
"172.16.0.24",
"172.16.0.66"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:kube-controller-manager",
"OU": "System"
}
]
}
EOF
- Generate the certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -ca=/usr/local/bin/cert/ca/ca.pem -ca-key=/usr/local/bin/cert/ca/ca-key.pem -config=/usr/local/bin/cert/ca/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2020/08/29 12:40:21 [INFO] generate received request
2020/08/29 12:40:21 [INFO] received CSR
2020/08/29 12:40:21 [INFO] generating key: rsa-2048
2020/08/29 12:40:22 [INFO] encoded CSR
2020/08/29 12:40:22 [INFO] signed certificate with serial number 464924254532468215049650676040995556458619239240
Issue the kube-scheduler certificate
- Create the kube-scheduler certificate signing configuration
cat > kube-scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"172.16.0.20",
"172.16.0.21",
"172.16.0.22",
"172.16.0.23",
"172.16.0.24",
"172.16.0.66"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:kube-scheduler",
"OU": "System"
}
]
}
EOF
- Generate the certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -ca=/usr/local/bin/cert/ca/ca.pem -ca-key=/usr/local/bin/cert/ca/ca-key.pem -config=/usr/local/bin/cert/ca/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2020/08/29 12:42:29 [INFO] generate received request
2020/08/29 12:42:29 [INFO] received CSR
2020/08/29 12:42:29 [INFO] generating key: rsa-2048
2020/08/29 12:42:29 [INFO] encoded CSR
2020/08/29 12:42:29 [INFO] signed certificate with serial number 420546069405900774170348492061478728854870171400
Issue the kube-proxy certificate
- Create the kube-proxy certificate signing configuration
cat > kube-proxy-csr.json << EOF
{
"CN":"system:kube-proxy",
"hosts":[],
"key":{
"algo":"rsa",
"size":2048
},
"names":[
{
"C":"CN",
"L":"BeiJing",
"ST":"BeiJing",
"O":"system:kube-proxy",
"OU":"System"
}
]
}
EOF
- Generate the certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -ca=/usr/local/bin/cert/ca/ca.pem -ca-key=/usr/local/bin/cert/ca/ca-key.pem -config=/usr/local/bin/cert/ca/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2020/08/29 12:45:11 [INFO] generate received request
2020/08/29 12:45:11 [INFO] received CSR
2020/08/29 12:45:11 [INFO] generating key: rsa-2048
2020/08/29 12:45:11 [INFO] encoded CSR
2020/08/29 12:45:11 [INFO] signed certificate with serial number 39717174368771783903269928946823692124470234079
2020/08/29 12:45:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Issue the cluster operator (admin) certificate
To let the cluster's client tools access the cluster securely, we create a certificate for them that carries full cluster permissions.
- Create the certificate signing configuration
cat > admin-csr.json << EOF
{
"CN":"admin",
"key":{
"algo":"rsa",
"size":2048
},
"names":[
{
"C":"CN",
"L":"BeiJing",
"ST":"BeiJing",
"O":"system:masters",
"OU":"System"
}
]
}
EOF
- Generate the certificate
[root@kubernetes-master-01 k8s]# cfssl gencert -ca=/usr/local/bin/cert/ca/ca.pem -ca-key=/usr/local/bin/cert/ca/ca-key.pem -config=/usr/local/bin/cert/ca/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2020/08/29 12:50:46 [INFO] generate received request
2020/08/29 12:50:46 [INFO] received CSR
2020/08/29 12:50:46 [INFO] generating key: rsa-2048
2020/08/29 12:50:46 [INFO] encoded CSR
2020/08/29 12:50:46 [INFO] signed certificate with serial number 247283053743606613190381870364866954196747322330
2020/08/29 12:50:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Distribute the certificates
Certificates required on the Master nodes: ca, kube-apiserver, kube-controller-manager, kube-scheduler, the admin certificate, and the Etcd certificates.
Certificates required on the Node nodes: ca, the admin certificate, and the kube-proxy certificate.
VIP node: the admin certificate.
- Distribute the Master node certificates
cd /root/cert/kube
for i in kubernetes-master-01 kubernetes-master-02; do
ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
scp /usr/local/bin/cert/ca/ca*.pem root@$i:/etc/kubernetes/ssl
scp ./* root@$i:/etc/kubernetes/ssl
done
- Distribute the Node certificates
cd /root/cert/kube
for i in kubernetes-node-01 kubernetes-node-02; do
ssh root@$i "mkdir -p /etc/kubernetes/ssl"
scp /usr/local/bin/cert/ca/ca*.pem root@$i:/etc/kubernetes/ssl
scp -pr ./{admin*pem,kube-proxy*pem} root@$i:/etc/kubernetes/ssl
done
Deploy the master nodes
Kubernetes is hosted on GitHub, and all the packages we need can be downloaded from there.
Download the binary components
# Download the server package
wget https://dl.k8s.io/v1.19.0/kubernetes-server-linux-amd64.tar.gz
# Download the client package
wget https://dl.k8s.io/v1.19.0/kubernetes-client-linux-amd64.tar.gz
# Download the node package
wget https://dl.k8s.io/v1.19.0/kubernetes-node-linux-amd64.tar.gz
[root@kubernetes-master-01 ~]# ll
-rw-r--r-- 1 root root 13237066 Aug 29 02:51 kubernetes-client-linux-amd64.tar.gz
-rw-r--r-- 1 root root 97933232 Aug 29 02:51 kubernetes-node-linux-amd64.tar.gz
-rw-r--r-- 1 root root 363943527 Aug 29 02:51 kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz && mv kubernetes kub-server
tar xf kubernetes-node-linux-amd64.tar.gz && mv kubernetes kub-node
tar xf kubernetes-client-linux-amd64.tar.gz && mv kubernetes kub-client
Distribute the components
cd /root/kub-server/server/bin
[root@kubernetes-master-01 bin]# ll
-rwxr-xr-x. 1 root root 115245056 Sep 6 02:58 kube-apiserver
-rwxr-xr-x. 1 root root 107249664 Sep 6 02:58 kube-controller-manager
-rwxr-xr-x. 1 root root 43003904 Sep 6 02:58 kubectl
-rwxr-xr-x. 1 root root 42123264 Sep 6 02:58 kube-scheduler
[root@kubernetes-master-01 bin]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@172.16.0.21:/usr/local/bin/
[root@kubernetes-master-01 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
kube-apiserver 100% 115MB 94.7MB/s 00:01
kube-controller-manager 100% 105MB 87.8MB/s 00:01
kube-scheduler 100% 41MB 88.2MB/s 00:00
kubectl 100% 42MB 95.7MB/s 00:00
kube-apiserver 100% 115MB 118.4MB/s 00:00
kube-controller-manager 100% 105MB 107.3MB/s 00:00
kube-scheduler 100% 41MB 119.9MB/s 00:00
kubectl 100% 42MB 86.0MB/s 00:00
kube-apiserver 100% 115MB 120.2MB/s 00:00
kube-controller-manager 100% 105MB 108.1MB/s 00:00
kube-scheduler 100% 41MB 102.4MB/s 00:00
kubectl 100% 42MB 124.3MB/s 00:00
Configure TLS bootstrapping
TLS bootstrapping is a mechanism that simplifies the administrator's job of configuring mutually authenticated, encrypted communication between kubelet and the apiserver. Once a cluster has TLS authentication enabled, every node's kubelet needs a valid certificate signed by the CA used by the apiserver before it can talk to the apiserver; signing a certificate by hand for every node would be tedious, highly error-prone, and destabilizing for the cluster.
With TLS bootstrapping, the kubelet on a node first connects to the apiserver as a predefined low-privilege user and then requests a certificate from the apiserver, which dynamically signs and issues it to the Node, automating certificate signing.
- Generate the token required for TLS bootstrapping
cd /etc/kubernetes/ssl
TLS_BOOTSTRAPPING_TOKEN=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
cat > token.csv << EOF
${TLS_BOOTSTRAPPING_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
[root@kubernetes-master-01 bin]# cat token.csv
30a41fd80b11b0f8be851dbfdb7ebfd3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Create the cluster configuration files
In Kubernetes we need kubeconfig files that describe the cluster, users, namespaces, and authentication information.
Create the kubelet-bootstrap.kubeconfig file
export KUBE_APISERVER="https://172.16.0.66:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubelet-bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TLS_BOOTSTRAPPING_TOKEN} \
--kubeconfig=kubelet-bootstrap.kubeconfig
# Set context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=kubelet-bootstrap.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
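To sanity-check what was written, view the file with kubectl (certificate data is redacted unless --raw is passed):
kubectl config view --kubeconfig=kubelet-bootstrap.kubeconfig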
Create the kube-controller-manager.kubeconfig file
export KUBE_APISERVER="https://172.16.0.66:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-controller-manager.kubeconfig
# Set client authentication parameters
kubectl config set-credentials "kube-controller-manager" \
--client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
--client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
# Set context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
--cluster=kubernetes \
--user="kube-controller-manager" \
--kubeconfig=kube-controller-manager.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
- --certificate-authority: the root certificate used to verify the kube-apiserver certificate
- --client-certificate / --client-key: the kube-controller-manager certificate and private key just generated, used when connecting to kube-apiserver
- --embed-certs=true: embed the contents of ca.pem and the kube-controller-manager certificate into the generated kubeconfig file (without this flag, only the certificate file paths are written)
Create the kube-scheduler.kubeconfig file
export KUBE_APISERVER="https://172.16.0.66:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-scheduler.kubeconfig
# Set client authentication parameters
kubectl config set-credentials "kube-scheduler" \
--client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
--client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
# Set context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
--cluster=kubernetes \
--user="kube-scheduler" \
--kubeconfig=kube-scheduler.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
Create the kube-proxy.kubeconfig file
export KUBE_APISERVER="https://172.16.0.66:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
# Set client authentication parameters
kubectl config set-credentials "kube-proxy" \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
# Set context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
--cluster=kubernetes \
--user="kube-proxy" \
--kubeconfig=kube-proxy.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Create the admin.kubeconfig file
export KUBE_APISERVER="https://172.16.0.66:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=admin.kubeconfig
# Set client authentication parameters
kubectl config set-credentials "admin" \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--client-key=/etc/kubernetes/ssl/admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
# Set context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
--cluster=kubernetes \
--user="admin" \
--kubeconfig=admin.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=admin.kubeconfig
Distribute the cluster configuration files to the Master nodes
cd /etc/kubernetes/ssl/
token.csv
kube-scheduler.kubeconfig
kube-controller-manager.kubeconfig
admin.conf # renamed: mv admin.kubeconfig admin.conf
kube-proxy.kubeconfig
kubelet-bootstrap.kubeconfig
mkdir -p /etc/kubernetes/cfg
[root@kubernetes-master-01 ssl]# for i in kubernetes-master-01 kubernetes-master-02 ;
do
scp token.csv kube-scheduler.kubeconfig kube-controller-manager.kubeconfig admin.conf kube-proxy.kubeconfig kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg;
done
token.csv 100% 84 662.0KB/s 00:00
kube-scheduler.kubeconfig 100% 6159 47.1MB/s 00:00
kube-controller-manager.kubeconfig 100% 6209 49.4MB/s 00:00
admin.conf 100% 6021 51.0MB/s 00:00
kube-proxy.kubeconfig 100% 6059 52.7MB/s 00:00
kubelet-bootstrap.kubeconfig 100% 1985 25.0MB/s 00:00
token.csv 100% 84 350.5KB/s 00:00
kube-scheduler.kubeconfig 100% 6159 20.0MB/s 00:00
kube-controller-manager.kubeconfig 100% 6209 20.7MB/s 00:00
admin.conf 100% 6021 23.4MB/s 00:00
kube-proxy.kubeconfig 100% 6059 20.0MB/s 00:00
kubelet-bootstrap.kubeconfig 100% 1985 4.4MB/s 00:00
token.csv 100% 84 411.0KB/s 00:00
kube-scheduler.kubeconfig 100% 6159 19.6MB/s 00:00
kube-controller-manager.kubeconfig 100% 6209 21.4MB/s 00:00
admin.conf 100% 6021 19.9MB/s 00:00
kube-proxy.kubeconfig 100% 6059 20.1MB/s 00:00
kubelet-bootstrap.kubeconfig 100% 1985 9.8MB/s 00:00
[root@kubernetes-master-01 ~]#
Distribute the cluster configuration files to the Node nodes
[root@kubernetes-master-01 ~]# for i in kubernetes-node-01 kubernetes-node-02;
do
ssh root@$i "mkdir -p /etc/kubernetes/cfg";
scp kube-proxy.kubeconfig kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg;
done
kube-proxy.kubeconfig 100% 6059 18.9MB/s 00:00
kubelet-bootstrap.kubeconfig 100% 1985 8.1MB/s 00:00
kube-proxy.kubeconfig 100% 6059 16.2MB/s 00:00
kubelet-bootstrap.kubeconfig 100% 1985 9.9MB/s 00:00
[root@kubernetes-master-01 ~]#
Deploy kube-apiserver
- Create the kube-apiserver service configuration file (run this on each master node rather than copying the file verbatim; note that the apiserver advertise IP differs per node)
KUBE_APISERVER_IP=`hostname -i`
cat > /etc/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--advertise-address=${KUBE_APISERVER_IP} \\
--default-not-ready-toleration-seconds=360 \\
--default-unreachable-toleration-seconds=360 \\
--max-mutating-requests-inflight=2000 \\
--max-requests-inflight=4000 \\
--default-watch-cache-size=200 \\
--delete-collection-workers=2 \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.96.0.0/16 \\
--service-node-port-range=10-52767 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/etc/kubernetes/cfg/token.csv \\
--kubelet-client-certificate=/etc/kubernetes/ssl/server.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/etc/kubernetes/ssl/server.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log \\
--etcd-servers=https://172.16.0.20:2379,https://172.16.0.21:2379,https://172.16.0.22:2379 \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/server.pem \\
--etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
Option | Description |
---|---|
--logtostderr=false | log to files rather than to the standard error console |
--v=2 | log verbosity level |
--advertise-address | IP address on which to advertise the apiserver to cluster members |
--etcd-servers | list of etcd servers to connect to |
--etcd-cafile | SSL CA file used for etcd communication |
--etcd-certfile | SSL certificate file used for etcd communication |
--etcd-keyfile | SSL key file used for etcd communication |
--service-cluster-ip-range | CIDR range from which Service cluster IPs are assigned |
--bind-address | IP address to listen on for --secure-port; if empty, all interfaces are used (0.0.0.0) |
--secure-port=6443 | port for HTTPS with authentication and authorization, default 6443 |
--allow-privileged | whether privileged containers are allowed |
--service-node-port-range | port range available to NodePort Services |
--default-not-ready-toleration-seconds | toleration seconds for the notReady condition |
--default-unreachable-toleration-seconds | toleration seconds for the unreachable condition |
--max-mutating-requests-inflight=2000 | maximum number of mutating requests in flight at a given time; 0 means no limit (default 200) |
--default-watch-cache-size=200 | default watch cache size; 0 disables the watch cache for resources without a default watch size |
--delete-collection-workers=2 | number of workers for DeleteCollection calls, used to speed up namespace cleanup (default 1) |
--enable-admission-plugins | admission-control plugins to enable |
--authorization-mode | ordered, comma-separated list of plugins that authorize requests on the secure port |
--enable-bootstrap-token-auth | allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrap authentication |
--token-auth-file | the bootstrap token file |
--kubelet-certificate-authority | path to a certificate authority file |
--kubelet-client-certificate | client certificate file for TLS |
--kubelet-client-key | client certificate key file for TLS |
--tls-private-key-file | file containing the x509 private key matching --tls-cert-file |
--service-account-key-file | file containing a PEM-encoded x509 RSA or ECDSA private or public key |
--audit-log-maxage | maximum number of days to retain old audit log files, based on the timestamp in the file name |
--audit-log-maxbackup | maximum number of old audit log files to retain |
--audit-log-maxsize | maximum size in megabytes of an audit log file before it is rotated |
--audit-log-path | if set, all requests to the apiserver are logged to this file; '-' means standard output |
- Create the kube-apiserver systemd unit
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
- Distribute the kube-apiserver unit file
for i in kubernetes-master-02 kubernetes-master-03;
do
scp /usr/lib/systemd/system/kube-apiserver.service root@$i:/usr/lib/systemd/system/kube-apiserver.service
done
- Start
systemctl daemon-reload ; systemctl enable --now kube-apiserver ; systemctl status kube-apiserver
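If the service came up, the health endpoint should answer. In v1.18 /healthz is readable without a client certificate thanks to the default system:public-info-viewer binding; -k merely skips CA verification for this quick check:
curl -k https://127.0.0.1:6443/healthz
# should print: ok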
- kube-apiserver high-availability deployment
There are many load balancers to choose from; here we use the officially recommended combination of haproxy and keepalived.
- Install haproxy and keepalived (on the master nodes)
yum install -y keepalived haproxy
- Configure the haproxy service
cat > /etc/haproxy/haproxy.cfg <<EOF
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s
defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s
frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor
listen stats
  bind *:8006
  mode http
  stats enable
  stats hide-version
  stats uri /stats
  stats refresh 30s
  stats realm Haproxy\ Statistics
  stats auth admin:admin
frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master
backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server kubernetes-master-01 172.16.0.20:6443 check inter 2000 fall 2 rise 2 weight 100
  server kubernetes-master-02 172.16.0.21:6443 check inter 2000 fall 2 rise 2 weight 100
EOF
- Distribute the configuration to the other nodes
for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03; do
  ssh root@$i "mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bak"
  scp haproxy.cfg root@$i:/etc/haproxy/haproxy.cfg
done
- Configure the keepalived service
yum install -y keepalived
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_kubernetes {
    script "/etc/keepalived/check_kubernetes.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 172.16.0.20
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        172.16.0.66
    }
#    track_script {
#        chk_kubernetes
#    }
}
EOF
- Distribute the keepalived configuration file
for i in kubernetes-master-02; do
  ssh root@$i "mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak"
  scp /etc/keepalived/keepalived.conf root@$i:/etc/keepalived/keepalived.conf
done
- Configure the kubernetes-master-02 node
sed -i 's#state MASTER#state BACKUP#g' /etc/keepalived/keepalived.conf
sed -i 's#172.16.0.20#172.16.0.21#g' /etc/keepalived/keepalived.conf
sed -i 's#priority 100#priority 90#g' /etc/keepalived/keepalived.conf
- Configure the health check script
cat > /etc/keepalived/check_kubernetes.sh <<'EOF'
#!/bin/bash
function check_kubernetes() {
    for ((i=0;i<5;i++));do
        apiserver_pid_id=$(pgrep kube-apiserver)
        if [[ ! -z $apiserver_pid_id ]];then
            return
        else
            sleep 2
        fi
        apiserver_pid_id=0
    done
}
# 1: running  0: stopped
check_kubernetes
if [[ $apiserver_pid_id -eq 0 ]];then
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_kubernetes.sh
- Start the keepalived and haproxy services
systemctl enable --now keepalived haproxy
- Authorize the TLS Bootstrapping user
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
Deploy the kube-controller-manager service
The Controller Manager is the management and control center inside the cluster. It is responsible for Nodes, Pod replicas, service endpoints (Endpoints), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a Node unexpectedly goes down, the Controller Manager promptly detects it and executes an automated repair flow, keeping the cluster in its expected working state. Running several controller managers concurrently would cause consistency problems, so kube-controller-manager high availability can only be active-standby; the Kubernetes cluster implements leader election with a lease lock, enabled by adding --leader-elect=true to the startup arguments.
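Once several controller managers are running, you can check which instance currently holds the leader lease; in v1.18 the lock is kept as an annotation on an Endpoints object in kube-system (a quick check, not a required deployment step):
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane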
- Create the kube-controller-manager configuration file
cat > /etc/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--leader-elect=true \\
--cluster-name=kubernetes \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/12 \\
--service-cluster-ip-range=10.96.0.0/16 \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--kubeconfig=/etc/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=10s \\
--horizontal-pod-autoscaler-use-rest-clients=true"
EOF
The configuration options are explained below:
Option | Meaning |
---|---|
--leader-elect | enable leader election for high availability |
--master | connect to the apiserver through the local insecure port 8080 |
--bind-address | address to listen on |
--allocate-node-cidrs | whether Pod CIDRs should be allocated and configured on the nodes |
--cluster-cidr | with --cluster-cidr set at startup, the Controller Manager prevents CIDR conflicts between different nodes |
--service-cluster-ip-range | CIDR range of the cluster's Services |
--cluster-signing-cert-file | certificate used to sign all cluster-scoped certificates (the root certificate) |
--cluster-signing-key-file | key used to sign cluster certificates |
--root-ca-file | if set, this root certificate is included in service account token secrets; it must be a valid PEM-encoded CA bundle |
--service-account-private-key-file | file containing the PEM-encoded RSA or ECDSA private key used to sign service account tokens |
--experimental-cluster-signing-duration | validity period of signed certificates |
- Configure the systemd unit
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
- Distribute the configuration
scp /etc/kubernetes/cfg/kube-controller-manager.conf root@10.0.0.21:/etc/kubernetes/cfg
scp /usr/lib/systemd/system/kube-controller-manager.service root@10.0.0.21:/usr/lib/systemd/system/kube-controller-manager.service
ssh root@10.0.0.21 "systemctl daemon-reload"
systemctl daemon-reload ; systemctl enable --now kube-controller-manager ; systemctl status kube-controller-manager.service
Deploy the kube-scheduler service
kube-scheduler is the default scheduler of a Kubernetes cluster and part of the control plane. For every newly created or not-yet-scheduled Pod, kube-scheduler filters all nodes and selects an optimal Node to run it. Scheduling is policy-rich, topology-aware, and workload-specific, and it significantly affects availability, performance, and capacity: the scheduler must weigh individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and more. Workload-specific requirements are exposed through the API where necessary.
- Create the kube-scheduler configuration file
cat > /etc/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--kubeconfig=/etc/kubernetes/cfg/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
- Create the systemd unit
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
- Distribute the configuration files
for ip in kubernetes-master-02; do
  scp /usr/lib/systemd/system/kube-scheduler.service root@${ip}:/usr/lib/systemd/system
  scp /etc/kubernetes/cfg/kube-scheduler.conf root@${ip}:/etc/kubernetes/cfg
done
- Start
systemctl daemon-reload ; systemctl enable --now kube-scheduler ; systemctl status kube-scheduler.service
Check the status of the cluster's Master components
At this point, all master node components are installed.
[root@kubernetes-master-01 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
Deploy the Node nodes
Node nodes mainly provide the runtime environment for applications; their most important components are kubelet and kube-proxy.
Distribute the binaries
cd /root/kub-node/node/bin/
[root@kubernetes-master-01 bin]# for i in kubernetes-master-01 kubernetes-master-02 kubernetes-node-02 kubernetes-node-01; do scp kubelet kube-proxy root@$i:/usr/local/bin/; done
kubelet 100% 108MB 120.2MB/s 00:00
kube-proxy 100% 37MB 98.1MB/s 00:00
kubelet 100% 108MB 117.4MB/s 00:00
for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-02 kubernetes-node-01; do echo $i; ssh root@$i "ls -lh /usr/local/bin"; done
Configure the kubelet service
- Create the kubelet.conf configuration file
mkdir /var/log/kubernetes
KUBE_HOSTNAME=`hostname`
cat > /etc/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--hostname-override=${KUBE_HOSTNAME} \\
--container-runtime=docker \\
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig \\
--config=/etc/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/etc/kubernetes/ssl \\
--image-pull-progress-deadline=15m \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2"
EOF
Configuration explained
Option | Meaning |
---|---|
--hostname-override | the hostname the node shows in the cluster; if kubelet sets --hostname-override, kube-proxy must set it as well, otherwise the Node will not be found |
--container-runtime | the container runtime engine |
--kubeconfig | the kubeconfig kubelet uses as a client; this file is generated automatically by kube-controller-manager during bootstrapping |
--bootstrap-kubeconfig | the token-based bootstrap kubeconfig file |
--config | the kubelet configuration file |
--cert-dir | directory where the certificates and keys issued by kube-controller-manager are stored |
--image-pull-progress-deadline | maximum time an image pull may make no progress before it is cancelled, default 1m0s |
--pod-infra-container-image | image used by the network/ipc namespace container of every pod |
- Create the kubelet-config.yml configuration file
cat > /etc/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.16.0.20
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
  - 10.96.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
A few of the more important options:
Option | Meaning |
---|---|
address | address the kubelet service listens on |
port | kubelet service port, default 10250 |
readOnlyPort | read-only kubelet port with no authentication/authorization; 0 disables it, default 10255 |
clusterDNS | list of DNS server IP addresses |
clusterDomain | cluster domain; kubelet configures containers to search this domain in addition to the host's search domains |
- Create the kubelet systemd unit
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
- Distribute the configuration files
for ip in kubernetes-master-01 kubernetes-master-02 kubernetes-node-01 kubernetes-node-02; do
  scp /etc/kubernetes/cfg/{kubelet-config.yml,kubelet.conf} root@${ip}:/etc/kubernetes/cfg
  scp /usr/lib/systemd/system/kubelet.service root@${ip}:/usr/lib/systemd/system
done
- Adjust the per-node configuration
# Adjust the kubernetes-master-02 configuration
sed -i 's#master-01#master-02#g' /etc/kubernetes/cfg/kubelet.conf
sed -i 's#172.16.0.20#172.16.0.21#g' /etc/kubernetes/cfg/kubelet-config.yml
# Adjust the kubernetes-node-01 configuration
sed -i 's#master-01#node-01#g' /etc/kubernetes/cfg/kubelet.conf
sed -i 's#172.16.0.20#172.16.0.22#g' /etc/kubernetes/cfg/kubelet-config.yml
# Adjust the kubernetes-node-02 configuration
sed -i 's#master-01#node-02#g' /etc/kubernetes/cfg/kubelet.conf
sed -i 's#172.16.0.20#172.16.0.23#g' /etc/kubernetes/cfg/kubelet-config.yml
- Start kubelet on master01
# Start on master01
systemctl daemon-reload;systemctl enable --now kubelet;systemctl status kubelet.service
Configure the kube-proxy service
kube-proxy is a core Kubernetes component deployed on every Node node; it implements the communication and load-balancing mechanism behind Kubernetes Services. kube-proxy creates proxy rules for Pods: it obtains all Service information from the apiserver, builds forwarding rules from it, and routes requests addressed to a Service to the backing Pods, implementing the cluster-level virtual forwarding network.
Create the kube-proxy configuration file
cat > /etc/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--config=/etc/kubernetes/cfg/kube-proxy-config.yml"
EOF
Create the kube-proxy-config.yml configuration file
cat > /etc/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.16.0.20
healthzBindAddress: 172.16.0.20:10256
metricsBindAddress: 172.16.0.20:10249
clientConnection:
burst: 200
kubeconfig: /etc/kubernetes/cfg/kube-proxy.kubeconfig
qps: 100
hostnameOverride: kubernetes-master-01
clusterCIDR: 10.96.0.0/16
enableProfiling: true
mode: "ipvs"
kubeProxyIPTablesConfiguration:
masqueradeAll: false
kubeProxyIPVSConfiguration:
scheduler: rr
excludeCIDRs: []
EOF
The meaning of the options above, briefly:
Option | Meaning |
---|---|
clientConnection | parameters for talking to kube-apiserver |
burst: 200 | temporarily allows event recording to exceed the qps setting |
kubeconfig | path of the kubeconfig kube-proxy uses to connect to kube-apiserver |
qps: 100 | QPS when talking to kube-apiserver, default 5 |
bindAddress | address kube-proxy listens on |
healthzBindAddress | IP address and port for the health check service |
metricsBindAddress | IP address and port of the metrics service, default 127.0.0.1:10249 |
clusterCIDR | kube-proxy uses --cluster-cidr to tell in-cluster traffic from external traffic; only with --cluster-cidr or --masquerade-all set does kube-proxy SNAT requests that access Service IPs |
hostnameOverride | must match the kubelet value, otherwise kube-proxy will not find the Node after startup and will not create any ipvs rules |
masqueradeAll | with the pure iptables proxy, SNAT all traffic sent through Service cluster IPs |
mode | use ipvs mode |
scheduler | the ipvs scheduling algorithm when the proxy runs in ipvs mode |
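With mode "ipvs", kube-proxy programs the kernel's IPVS tables. Once kube-proxy is running (see below), the virtual servers can be listed with ipvsadm; this assumes the ipvsadm package and the ip_vs kernel modules are available:
yum install -y ipvsadm
ipvsadm -Ln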
Create the kube-proxy systemd unit
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Distribute the configuration files
for ip in kubernetes-master-01 kubernetes-master-02 kubernetes-node-01 kubernetes-node-02;
do
scp /etc/kubernetes/cfg/{kube-proxy-config.yml,kube-proxy.conf} root@${ip}:/etc/kubernetes/cfg/
scp /usr/lib/systemd/system/kube-proxy.service root@${ip}:/usr/lib/systemd/system/
done
Adjust the per-node configuration files
- Adjust the kubernetes-master-02 node
sed -i 's#172.16.0.20#172.16.0.21#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-01#master-02#g' /etc/kubernetes/cfg/kube-proxy-config.yml
- Adjust the kubernetes-node-01 node
sed -i 's#172.16.0.20#172.16.0.22#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-01#node-01#g' /etc/kubernetes/cfg/kube-proxy-config.yml
- Adjust the kubernetes-node-02 node
sed -i 's#172.16.0.20#172.16.0.23#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-01#node-02#g' /etc/kubernetes/cfg/kube-proxy-config.yml
- Check the configuration
for ip in kubernetes-master-01 kubernetes-master-02 kubernetes-node-01 kubernetes-node-02; do
  echo ''; echo $ip; echo ''
  ssh root@$ip "cat /etc/kubernetes/cfg/kube-proxy-config.yml"
done
Enable and start on boot
systemctl daemon-reload; systemctl enable --now kube-proxy; systemctl status kube-proxy
View the kubelet join requests (CSRs)
systemctl restart kube-proxy kubelet.service kube-apiserver.service
[root@kubernetes-master-01 k8s]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-51i8zZdDrIFh_zGjblcnJHVTVEZF03-MRLmxqW7ubuk 50m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
node-csr-9DyYdqmYto4MW7IcGbTPqVePH9PHQN1nNefZEFcab7s 50m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
node-csr-YzbkiJCgLrXM2whs0h00TDceGaBI3Ntly8Z7HGCYvFw 62m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
Approve the requests
kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
View the nodes
[root@kubernetes-master-01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
kubernetes-master-01 Ready <none> 123m v1.18.8
kubernetes-master-02 Ready <none> 120m v1.18.8
kubernetes-master-03 Ready <none> 118m v1.18.8
kubernetes-node-01 Ready <none> 3s v1.18.8
View the files generated on the Node
[root@kubernetes-node-01 ~]# ll /etc/kubernetes/ssl/
total 36
-rw------- 1 root root 1679 Aug 29 12:50 admin-key.pem
-rw-r--r-- 1 root root 1359 Aug 29 12:50 admin.pem
-rw------- 1 root root 1679 Aug 29 12:08 ca-key.pem
-rw-r--r-- 1 root root 1224 Aug 29 12:08 ca.pem
-rw------- 1 root root 1191 Aug 29 22:49 kubelet-client-2020-08-29-22-49-08.pem
lrwxrwxrwx 1 root root 58 Aug 29 22:49 kubelet-client-current.pem -> /etc/kubernetes/ssl/kubelet-client-2020-08-29-22-49-08.pem
-rw-r--r-- 1 root root 2233 Aug 29 20:02 kubelet.crt
-rw------- 1 root root 1675 Aug 29 20:02 kubelet.key
-rw------- 1 root root 1679 Aug 29 12:45 kube-proxy-key.pem
-rw-r--r-- 1 root root 1379 Aug 29 12:45 kube-proxy.pem
Cluster roles
In a K8S cluster, nodes are divided into master nodes and node (worker) nodes.
Node labels
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-master-01 node-role.kubernetes.io/master-
node/kubernetes-master-01 labeled
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-master-01 node-role.kubernetes.io/master=kubernetes-master-01
node/kubernetes-master-01 labeled
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-master-02 node-role.kubernetes.io/master=kubernetes-master-02
node/kubernetes-master-02 labeled
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-node-01 node-role.kubernetes.io/node=kubernetes-node-01
node/kubernetes-node-01 labeled
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-node-02 node-role.kubernetes.io/node=kubernetes-node-02
node/kubernetes-node-02 labeled
[root@kubernetes-master-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master-01 Ready master 135m v1.18.8
kubernetes-master-02 Ready master 131m v1.18.8
kubernetes-master-03 Ready master 130m v1.18.8
kubernetes-node-01 Ready node 11m v1.18.8
Remove a node label
[root@kubernetes-master-01 ~]# kubectl label nodes kubernetes-master-01 node-role.kubernetes.io/master-
node/kubernetes-master-01 labeled
Taint the master nodes
Master nodes normally should not run Pods, so we add a taint to them to keep Pods from being scheduled there.
[root@kubernetes-master-01 ~]# kubectl taint nodes kubernetes-master-01 node-role.kubernetes.io/master=kubernetes-master-01:NoSchedule --overwrite
node/kubernetes-master-01 modified
[root@kubernetes-master-01 ~]# kubectl taint nodes kubernetes-master-02 node-role.kubernetes.io/master=kubernetes-master-02:NoSchedule --overwrite
node/kubernetes-master-02 modified
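A Pod that genuinely must run on a tainted master needs a matching toleration. This is only a hedged sketch, with the key and effect mirroring the taint added above and a placeholder Pod name:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: on-master-demo
spec:
  # tolerate the NoSchedule taint carried by the master nodes
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  containers:
  - name: nginx
    image: nginx
EOF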
Deploy the network plugin
Kubernetes defines the network model but delegates its implementation to network plugins. The most important job of a CNI network plugin is to let Pod resources communicate across hosts. Common CNI network plugins:
- Flannel
- Calico
- Canal
- Contiv
- OpenContrail
- NSX-T
- Kube-router
Install the network plugin
# flanneld download: https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@kubernetes-master-01 ~]# for i in kubernetes-master-01 kubernetes-master-02 kubernetes-node-01 kubernetes-node-02; do scp flanneld mk-docker-opts.sh root@$i:/usr/local/bin; done
flanneld 100% 34MB 93.6MB/s 00:00
mk-docker-opts.sh 100% 2139 19.4MB/s 00:00
flanneld 100% 34MB 103.3MB/s 00:00
mk-docker-opts.sh 100% 2139 8.5MB/s 00:00
flanneld 100% 34MB 106.5MB/s 00:00
mk-docker-opts.sh 100% 2139 9.7MB/s 00:00
flanneld 100% 34MB 113.2MB/s 00:00
mk-docker-opts.sh 100% 2139 10.5MB/s 00:00
flanneld 100% 34MB 110.3MB/s 00:00
mk-docker-opts.sh 100% 2139 8.7MB/s 00:00
Write the Flanneld configuration into Etcd
etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--endpoints="https://172.16.0.20:2379,https://172.16.0.21:2379,https://172.16.0.22:2379" \
mk /coreos.com/network/config '{"Network":"10.244.0.0/12", "SubnetLen": 21, "Backend": {"Type": "vxlan", "DirectRouting": true}}'
# Check it with get
etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--endpoints="https://172.16.0.20:2379,https://172.16.0.21:2379,https://172.16.0.22:2379" \
get /coreos.com/network/config
Note the Network field: it takes the cluster (Pod) CIDR, not the service CIDR; be careful not to confuse the two.
Start Flannel
Because the flannel service needs the etcd certificates at startup in order to reach the cluster, every node, whether master or node, must have a copy of them.
- Copy the etcd certificates from the master node to the node nodes
for i in kubernetes-node-01 kubernetes-node-02;do
ssh root@$i "mkdir -pv /etc/etcd/ssl"
scp -p /etc/etcd/ssl/*.pem root@$i:/etc/etcd/ssl
done
- Create the flanneld systemd unit
cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld address
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
-etcd-cafile=/etc/etcd/ssl/ca.pem \\
-etcd-certfile=/etc/etcd/ssl/server.pem \\
-etcd-keyfile=/etc/etcd/ssl/server-key.pem \\
-etcd-endpoints=https://172.16.0.20:2379,https://172.16.0.21:2379,https://172.16.0.22:2379 \\
-etcd-prefix=/coreos.com/network \\
-ip-masq
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
Configuration details
Option | Description |
---|---|
-etcd-cafile | SSL CA file used for etcd communication |
-etcd-certfile | SSL certificate file used for etcd communication |
-etcd-keyfile | SSL key file used for etcd communication |
--etcd-endpoints | endpoints of all etcd members |
-etcd-prefix | key prefix used in etcd |
-ip-masq | with -ip-masq=true, flannel performs IP masquerading instead of docker; if docker did the masquerading, traffic leaving through flannel would appear on other hosts with the flannel gateway IP as the source instead of the container IP |
Distribute the unit file
[root@kubernetes-master-01 ~]# for i in kubernetes-master-02 kubernetes-node-01 kubernetes-node-02;do scp /usr/lib/systemd/system/flanneld.service root@$i:/usr/lib/systemd/system; done
flanneld.service 100% 697 2.4MB/s 00:00
flanneld.service 100% 697 1.5MB/s 00:00
flanneld.service 100% 697 3.4MB/s 00:00
flanneld.service 100% 697 2.6MB/s 00:00
Modify the docker service unit
[root@kubernetes-master-01 ~]# sed -i '/ExecStart/s/\(.*\)/#\1/' /usr/lib/systemd/system/docker.service
[root@kubernetes-master-01 ~]# sed -i '/ExecReload/a ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock' /usr/lib/systemd/system/docker.service
[root@kubernetes-master-01 ~]# sed -i '/ExecReload/a EnvironmentFile=-/run/flannel/subnet.env' /usr/lib/systemd/system/docker.service
Distribute it
[root@kubernetes-master-01 ~]# for ip in kubernetes-master-02 kubernetes-node-01 kubernetes-node-02;do scp /usr/lib/systemd/system/docker.service root@${ip}:/usr/lib/systemd/system; done
docker.service 100% 1830 6.0MB/s 00:00
docker.service 100% 1830 4.7MB/s 00:00
docker.service 100% 1830 6.6MB/s 00:00
docker.service 100% 1830 7.3MB/s 00:00
Restart docker and start the flanneld service
[root@kubernetes-master-01 ~]# for i in kubernetes-master-01 kubernetes-master-02 kubernetes-node-01 kubernetes-node-02;do
echo ">>> $i"
ssh root@$i "systemctl daemon-reload"
ssh root@$i "systemctl start flanneld"
ssh root@$i "systemctl restart docker"
done
>>> kubernetes-master-01
>>> kubernetes-master-02
>>> kubernetes-master-03
>>> kubernetes-node-01
>>> kubernetes-node-02
Deploy the CoreDNS add-on
CoreDNS resolves Service names for the Pods in the cluster; Kubernetes builds its service discovery on top of CoreDNS.
- Confirm the cluster DNS IP
CLUSTER_DNS_IP="10.96.0.2"
- Bind cluster permissions for the anonymous user
[root@kubernetes-master-01 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubernetes
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
- Build CoreDNS
# Download CoreDNS
yum install -y git
git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
[root@kubernetes-master-01 kubernetes]# ll
total 60
-rw-r--r--. 1 root root 4891 Sep 11 20:45 CoreDNS-k8s_version.md
-rw-r--r--. 1 root root 4250 Sep 11 20:45 coredns.yaml.sed
-rwxr-xr-x. 1 root root 3399 Sep 11 20:45 deploy.sh
-rw-r--r--. 1 root root 2706 Sep 11 20:45 README.md
-rwxr-xr-x. 1 root root 1336 Sep 11 20:45 rollback.sh
-rw-r--r--. 1 root root 7152 Sep 11 20:45 Scaling_CoreDNS.md
-rw-r--r--. 1 root root 7913 Sep 11 20:45 Upgrading_CoreDNS.md
# Generate the deployment manifest
# Replace the CoreDNS image with registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:1.7.0
vim /root/deployment/kubernetes/coredns.yaml.sed
containers:
- name: coredns
image: registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:1.7.0
[root@kubernetes-master-01 ~]# ./deploy.sh -i 10.96.0.2 -s | kubectl apply -f -
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns unchanged
deployment.apps/coredns unchanged
service/kube-dns created
[root@kubernetes-master-01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-85b4878f78-5xr2z 1/1 Running 0 2m31s
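With the coredns Pod running, in-cluster resolution can be verified from a short-lived busybox Pod (busybox:1.28 is commonly used here because nslookup in later tags is unreliable):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default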
Test the Kubernetes cluster
- Create an Nginx service
[root@kubernetes-master-01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@kubernetes-master-01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@kubernetes-master-01 ~]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13h <none>
nginx NodePort 10.96.48.135 <none> 80:41649/TCP 7m17s app=nginx
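The Service should now answer on the NodePort of any node. The port 41649 above was allocated randomly, so substitute whatever kubectl get svc printed:
curl -I http://172.16.0.20:41649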
[root@kubernetes-master-01 ]# kubectl delete svc nginx
Install Harbor on kubernetes-lb
# Directory layout
# /opt/src     : sources and downloads
# /opt/release : versioned software installs
# /opt/apps    : symlinks to the current version of each package
mkdir /opt/{apps,release,src} && cd /opt/src
wget https://github.com/goharbor/harbor/releases/download/v1.9.4/harbor-offline-installer-v1.9.4.tgz
tar xf harbor-offline-installer-v1.9.4.tgz
mv harbor /opt/release/harbor-v1.9.4
ln -s /opt/release/harbor-v1.9.4 /opt/apps/harbor
ll /opt/apps/
total 0
lrwxrwxrwx 1 root root 26 Jan 5 11:13 harbor -> /opt/release/harbor-v1.9.4
# Edit the configuration file
vim /opt/apps/harbor/harbor.yml
hostname: harbor.od.com
http:
  port: 180
data_volume: /data/harbor
log:
  local:
    location: /data/harbor/logs
# Pull the images and install
yum install -y docker-compose
cd /opt/apps/harbor/
./install.sh
......
✔ ----Harbor has been installed and started successfully.----
# Check the processes
docker-compose ps
# Start harbor automatically at boot
vim /etc/rc.d/rc.local   # append the following
# start harbor
cd /opt/apps/harbor
/usr/bin/docker-compose stop
/usr/bin/docker-compose start
# Install nginx as a reverse proxy in front of harbor
yum install -y nginx
vim /etc/nginx/conf.d/harbor.conf
server {
listen 80;
server_name harbor.od.com;
# avoid failures when uploading large images
client_max_body_size 1000m;
location / {
proxy_pass http://127.0.0.1:180;
}
}
# Start
systemctl start nginx && systemctl enable nginx
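Because harbor.od.com is served by this nginx vhost rather than by public DNS (an assumption about your environment; skip this if od.com is under your real DNS control), every host that pushes or pulls by that name needs a hosts entry:
cat >> /etc/hosts <<EOF
172.16.0.24 harbor.od.com
EOF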
Push images to the registry
# Log in to the registry on all nodes
[root@kubernetes-lb ~]# docker login -u admin 172.16.0.24:180
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
# Push a test image
docker pull nginx
docker tag nginx harbor.od.com/public/nginx:v1
docker push harbor.od.com/public/nginx:v1
The push refers to repository [harbor.od.com/public/nginx]
5f70bf18a086: Pushed
e16a89738269: Pushed
latest: digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 size: 938
# Pull from kubernetes-node-02
[root@kubernetes-node-02 ~]# docker pull 172.16.0.24:180/public/nginx:v1
v1: Pulling from public/nginx
75f829a71a1c: Pull complete
787c6a77fe5a: Pull complete
407db99ebbd9: Pull complete
209075c821e1: Pull complete
0f3af6a43e25: Pull complete
017eac9a7274: Pull complete
1e399df083cf: Pull complete
b982e53244f9: Pull complete
8ad0c1867789: Pull complete
a52b0b0b237e: Pull complete
d835cb23f8e6: Pull complete
Digest: sha256:bbc05717cf78a13dc3d2654e10e117df51610b0747745d2e9033b0402389fe48
Status: Downloaded newer image for 172.16.0.24:180/public/nginx:v1
172.16.0.24:180/public/nginx:v1
[root@kubernetes-node-02 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
172.16.0.24:180/public/nginx                      v1                  02cbdef580f2        39 minutes ago      556MB
registry.cn-hangzhou.aliyuncs.com/k8sos/pause 3.2 80d28bedfe5d 7 months ago 683kB
Push the CoreDNS image to Harbor
[root@kubernetes-master-01 kubernetes]# docker pull registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:1.7.0
1.7.0: Pulling from k8sos/coredns
c6568d217a00: Pull complete
6937ebe10f02: Pull complete
Digest: sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:1.7.0
registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:1.7.0
# Log in to the registry
[root@kubernetes-master-01 kubernetes]# docker login -u admin 172.16.0.24:180
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
# Tag the image for Harbor
[root@kubernetes-master-01 kubernetes]# docker tag registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:1.7.0 172.16.0.24:180/public/coredns:v1
[root@kubernetes-master-01 kubernetes]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
172.16.0.24:180/public/coredns                    v1                  bfe3a36ebd25        2 months ago        45.2MB
registry.cn-hangzhou.aliyuncs.com/k8sos/coredns 1.7.0 bfe3a36ebd25 2 months ago 45.2MB
# Push
[root@kubernetes-master-01 kubernetes]# docker push 172.16.0.24:180/public/coredns:v1
The push refers to repository [172.16.0.24:180/public/coredns]
96d17b0b58a7: Pushed
225df95e717c: Pushed
v1: digest: sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a size: 739
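To double-check that the push landed, Harbor exposes the standard Docker Registry v2 API; the call below assumes the public project allows anonymous reads (otherwise pass -u admin:<password>):
curl http://172.16.0.24:180/v2/public/coredns/tags/list
# expect something like: {"name":"public/coredns","tags":["v1"]}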
Authenticate Kubernetes to the Harbor registry
# Create a docker-registry secret holding the pull credentials
kubectl create secret docker-registry login --namespace=default \
--docker-server=172.16.0.24:180 \
--docker-username=admin \
--docker-password=l12345
# Verify
[root@kubernetes-master-01 ~]# kubectl get secret
NAME TYPE DATA AGE
default-token-tf2hx kubernetes.io/service-account-token 3 2d4h
login kubernetes.io/dockerconfigjson 1 20h
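The secret stores a base64-encoded Docker config; to sanity-check what kubelet will present when pulling, decode it:
kubectl get secret login -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d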
# Point the CoreDNS manifest at the Harbor image
cd /root/deployment/kubernetes
vim coredns.yaml.sed
containers:
- name: coredns
  image: 172.16.0.24:180/public/coredns:v1
# Redeploy CoreDNS
[root@kubernetes-master-01 kubernetes]# ./deploy.sh -i 10.96.0.2 -s | kubectl apply -f -
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns unchanged
deployment.apps/coredns configured
service/kube-dns unchanged
[root@kubernetes-master-01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6998469cf-hj2fn 1/1 Running 0 31s
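To confirm the rollout actually switched to the Harbor copy of the image:
kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}'
# expected: 172.16.0.24:180/public/coredns:v1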
# Deploy Nginx from the Harbor registry
vim nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    appname: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      appname: nginx
  template:
    metadata:
      labels:
        appname: nginx
    spec:
      containers:
      - name: nginx
        image: 172.16.0.24:180/public/nginx:v1   # v1 is the only tag pushed above
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: login
[root@kubernetes-master-01 kubernetes]# kubectl create -f nginx.yaml
[root@kubernetes-master-01 kubernetes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dns-bcd4589f-sc9rj 1/1 Running 0 21h 10.241.120.2 kubernetes-node-02 <none> <none>
nginx-deployment-7d7dffc6c7-jqp29 1/1 Running 0 31s 10.241.120.5 kubernetes-node-02 <none> <none>
nginx-deployment-7d7dffc6c7-lnpwx 1/1 Running 0 31s 10.241.0.2 kubernetes-node-01 <none> <none>
nginx-deployment-7d7dffc6c7-zxjzt 1/1 Running 0 31s 10.241.0.5 kubernetes-node-01 <none> <none>
[root@kubernetes-master-01 kubernetes]# ping 10.241.0.5
PING 10.241.0.5 (10.241.0.5) 56(84) bytes of data.
64 bytes from 10.241.0.5: icmp_seq=1 ttl=63 time=0.371 ms
[root@kubernetes-master-01 kubernetes]# ping 10.241.0.2
PING 10.241.0.2 (10.241.0.2) 56(84) bytes of data.
64 bytes from 10.241.0.2: icmp_seq=1 ttl=63 time=0.403 ms
[root@kubernetes-master-01 kubernetes]# ping 10.241.120.5
PING 10.241.120.5 (10.241.120.5) 56(84) bytes of data.
64 bytes from 10.241.120.5: icmp_seq=1 ttl=63 time=0.403 ms
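The pings only prove pod-to-pod reachability; to load-balance across the three replicas you would normally front them with a Service, reusing the expose pattern from the earlier test:
kubectl expose deployment nginx-deployment --port=80 --type=NodePort
kubectl get svc nginx-deployment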
Deploy the Kubernetes Dashboard web UI
# Fetch the manifest
[root@kubernetes-master-01 kubernetes]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
# Replace the images with the Aliyun mirror copies
[root@kubernetes-master-01 kubernetes]# cat recommended.yaml | grep image
image: kubernetesui/dashboard:v2.0.4
imagePullPolicy: Always
image: kubernetesui/metrics-scraper:v1.0.4
[root@kubernetes-master-01 kubernetes]# sed -i 's#kubernetesui/dashboard#registry.cn-hangzhou.aliyuncs.com/k8sos/dashboard#g' recommended.yaml
[root@kubernetes-master-01 kubernetes]# sed -i 's#kubernetesui/metrics-scraper#registry.cn-hangzhou.aliyuncs.com/k8sos/metrics-scraper#g' recommended.yaml
[root@kubernetes-master-01 kubernetes]# cat recommended.yaml | grep image
image: registry.cn-hangzhou.aliyuncs.com/k8sos/dashboard:v2.0.4
imagePullPolicy: Always
image: registry.cn-hangzhou.aliyuncs.com/k8sos/metrics-scraper:v1.0.4
# Change this Service block:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
# to this, exposing it on NodePort 30001:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
# Create the Dashboard
[root@kubernetes-master-01 kubernetes]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged
# Watch the kubernetes-dashboard pod status
[root@kubernetes-master-01 kubernetes]# kubectl get pods -n kubernetes-dashboard -w
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-7445d59dfd-nhgq7 1/1 Running 1 111m
kubernetes-dashboard-7d8466d688-jwhvx 1/1 Running 2 111m
[root@kubernetes-master-01 kubernetes]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.176.125 <none> 8000/TCP 21h
kubernetes-dashboard        NodePort    10.96.253.150   <none>        443:30001/TCP   21h
[root@kubernetes-master-01 kubernetes]# kubectl get -f recommended.yaml
NAME STATUS AGE
namespace/kubernetes-dashboard Active 10s
NAME SECRETS AGE
serviceaccount/kubernetes-dashboard 1 10s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes-dashboard NodePort 10.105.99.98 <none> 443:30001/TCP 10s
NAME TYPE DATA AGE
secret/kubernetes-dashboard-certs Opaque 0 10s
secret/kubernetes-dashboard-csrf Opaque 1 10s
secret/kubernetes-dashboard-key-holder Opaque 2 10s
NAME DATA AGE
configmap/kubernetes-dashboard-settings 0 10s
NAME CREATED AT
role.rbac.authorization.k8s.io/kubernetes-dashboard 2020-09-15T04:16:09Z
NAME CREATED AT
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard 2020-09-15T04:16:09Z
NAME ROLE AGE
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard Role/kubernetes-dashboard 10s
NAME ROLE AGE
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard ClusterRole/kubernetes-dashboard 10s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kubernetes-dashboard 1/1 1 1 10s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.99.80.170 <none> 8000/TCP 10s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 10s
# Access: HTTPS only, via any cluster node IP
# The certificate is self-signed; Firefox is recommended
https://172.16.0.21:30001
# Log in with a token
[root@kubernetes-master-01 kubernetes]# vim token.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
# Create the ServiceAccount and ClusterRoleBinding
[root@kubernetes-master-01 kubernetes]# kubectl apply -f token.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# Retrieve the token
[root@kubernetes-master-01 kubernetes]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-vm42c
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 950cd15b-f85a-444f-99b3-edb0de2f6d67
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1281 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlktLWNCa3ZBOXpZTzI5SVliN0Rwa3JGOWZLUjRRS0dCSzdHTTJ1aWMtdjAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXZtNDJjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5NTBjZDE1Yi1mODVhLTQ0NGYtOTliMy1lZGIwZGUyZjZkNjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.cW8UlAxdDI_z459FEt0H4dYDBUTjdFtaJgHcvDx6W-ORwwkyPFFHRKGl4jfRkq3TiaVwksokWcVav48fl2Pl2JXZ2IbQ-Z32WLZpDqx78_aBMbLT1JVcPhQY8xEIIsIOSf-n0ddulnACJflPHicWGTlIJcq5CGkOeFcUpx9mRhHDW70C-Ya67TsZxLxwbQISvoVomggB-92RDho6aZkbLEHZg57IuXovPjaoRPdn6txeVyQAAH3vyw5FvPpB70k-2hxoemI8T-pGKOCGumgRVLnymKKWnvQpTn8aKtxU7-Y_NcYI-dGFTWKGpJ0l7XxB52pl8Ct5aNXITUVN5U8jAA
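Besides pasting it into the Dashboard login page, the token can be used to sanity-check the cluster-admin binding against the API server directly; the address below assumes kubernetes-master-01's API server on the default secure port 6443:
TOKEN=$(kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d)
curl -sk -H "Authorization: Bearer $TOKEN" https://172.16.0.21:6443/api/v1/namespaces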