1. Introduction to Kubernetes and Its Features
1.1 What is Kubernetes?
Official website: http://www.kubernetes.io
• Kubernetes is a container cluster management system open-sourced by Google in 2014, commonly abbreviated as K8S.
• K8S handles the deployment, scaling, and management of containerized applications.
• K8S provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and more.
• The goal of Kubernetes is to make deploying containerized applications simple and efficient.
1.2 What can Kubernetes serve as?
A container platform
A microservice platform
A portable cloud platform
1.3 Kubernetes features
- Self-healing
Restarts failed containers when a node fails, and replaces and redeploys them to maintain the expected replica count; kills containers that fail health checks and withholds client requests from containers that are not yet ready, so online services are never interrupted.
- Elastic scaling
Scales application instances up and down quickly via command, UI, or automatically based on CPU usage, keeping the application highly available under peak load and reclaiming resources during off-peak hours so the service runs at minimal cost (see the kubectl sketch after this list).
- Automated rollouts and rollbacks
K8S updates applications with a rolling-update strategy, updating one Pod at a time instead of deleting all Pods at once. If a problem appears during the update, the change is rolled back so the upgrade does not impact the business.
- Service discovery and load balancing
K8S gives a group of containers a single access entry point (an internal IP address plus a DNS name) and load-balances across all associated containers, so users never have to worry about container IPs.
- Secret and configuration management
Manages secrets and application configuration without exposing sensitive data inside images, improving the security of sensitive data. Common configuration can also be stored in K8S for applications to consume.
- Storage orchestration
Mounts external storage systems, whether local storage, public cloud (e.g. AWS), or network storage (e.g. NFS, GlusterFS, Ceph), as part of the cluster's resources, greatly improving storage flexibility.
- Batch processing
Provides one-off and scheduled tasks, covering batch data processing and analysis scenarios.
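As a concrete illustration of the elastic-scaling feature, the two standard kubectl commands below scale an application manually and automatically; the Deployment name "web" is a hypothetical example, not something defined in this guide.
# scale a Deployment named web to 5 replicas by hand
kubectl scale deployment web --replicas=5
# or let K8S keep between 2 and 10 replicas, targeting 80% average CPU
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80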
2. Kubernetes Architecture Overview
2.1 Components in detail
1. As shown in the figure, there are three nodes: one Master node and two Node (worker) nodes.
2. The Master runs the following components:
- API server: the unified entry point to K8S, exposing its services as a RESTful API.
- Auth: authentication and authorization, deciding whether a request may access the cluster
- Etcd: the backing database; stores credentials, K8S state, node information, and so on
- scheduler: cluster scheduling; decides which node each workload is placed on
- controller manager: the controllers that carry out cluster tasks and manage Pods, Services, and the other controllers
- Kubectl: the management CLI; it talks directly to the API Server, passing through authentication and authorization on the way.
3. Each Node runs two components:
- kubelet: receives tasks issued by K8S, creates containers and manages their lifecycle, turning a Pod into a set of running containers.
- kube-proxy: the Pod network proxy; Layer 4 load balancing for external access
- Traffic flow: user -> firewall -> kube-proxy -> application
Pod: the smallest unit in K8S
- Container: the environment containers run in, i.e. the container engine
- Docker
2.2 Cluster management flow and core concepts
1. Cluster management flow (see figure)
2. Kubernetes core concepts
Pod
• The smallest deployment unit
• A collection of one or more containers
• Containers within a Pod share a network namespace
• Pods are ephemeral
Controllers
• ReplicaSet: ensures the expected number of Pod replicas
• Deployment: stateless application deployment
• StatefulSet: stateful application deployment
• DaemonSet: ensures every Node runs a copy of the same Pod
• Job: one-off tasks
• CronJob: scheduled tasks
Note: Controllers are higher-level objects that deploy and manage Pods (see the kubectl sketch after this list)
Service
• Keeps Pods reachable (prevents losing track of them)
• Defines an access policy for a set of Pods
Label: a tag attached to a resource, used to associate, query, and filter objects
Namespaces: namespaces, isolating objects logically
Annotations: annotations attached to objects
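To see these concepts work together, the short kubectl session below (a sketch; the name "web" and image "nginx" are arbitrary) creates a Deployment, which creates a ReplicaSet, which creates Pods; exposes them behind a Service; and then selects the Pods by Label:
# Deployment -> ReplicaSet -> Pods
kubectl create deployment web --image=nginx
# stable Service entry point in front of the Pods
kubectl expose deployment web --port=80
# select the Pods via the app=web Label
kubectl get pods -l app=web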
3. Deploying Kubernetes
- # K8S service packages
- Baidu Cloud download: https://pan.baidu.com/s/1d1zqoil3pfeThC-v45bWkg
- Password: 0ssx
3.1 Service versions and architecture
Service versions
- centos:7.4
- etcd-v3.3.10
- flannel-v0.10.0
- kubernetes-1.12.1
- nginx-1.16.1
- keepalived-1.3.5
- docker-19.03.1
Single-Master architecture
- k8s Master: 172.16.105.220
- k8s Node: 172.16.105.230, 172.16.105.213
- etcd: 172.16.105.220, 172.16.105.230, 172.16.105.213
Dual-Master + Nginx + Keepalived
- k8s Master1: 192.168.1.108
- k8s Master2: 192.168.1.109
- k8s Node3: 192.168.1.110
- k8s Node4: 192.168.1.111
- etcd: 192.168.1.108, 192.168.1.109, 192.168.1.110, 192.168.1.111
- Nginx+keepalived1: 192.168.1.112
- Nginx+keepalived2: 192.168.1.113
- vip: 192.168.1.100
3.2 Preparing to deploy Kubernetes
1. Stop the firewall
systemctl stop firewalld.service
2. Disable SELinux (setenforce 0 only lasts until reboot; set SELINUX=disabled in /etc/selinux/config to make it permanent)
setenforce 0
3. Set the hostname
vim /etc/hostname
hostname ****
4. Sync the time
ntpdate time.windows.com
5. Environment variables
Note: it is best to add the bin directories of everything k8s-related used below to your PATH.
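A minimal sketch, assuming the /opt/etcd/bin and /opt/kubernetes/bin install directories used throughout this guide:
# append the k8s binary directories to PATH for all future shells
cat >> /etc/profile <<'EOF'
export PATH=$PATH:/opt/etcd/bin:/opt/kubernetes/bin
EOF
source /etc/profile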
3.3 Deploying the Etcd Database Cluster
1. Generate self-signed certificates for Etcd
1. Create the k8s and certificate directories
mkdir ~/k8s && cd ~/k8s
mkdir k8s-cert
mkdir etcd-cert
cd etcd-cert
2. Install the cfssl certificate tooling
# cfssl: generate certificates via command-line options
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
# cfssljson: generate certificates from JSON
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
# cfssl-certinfo: view certificate information
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
# make them executable
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
3. Create the first JSON file used to generate certificates

vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
4. Create the second JSON file used to generate certificates

vim ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
5. Generate the CA root certificate from the JSON files; ca.pem and ca-key.pem are created in the current directory
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
6. Generate the Etcd server certificate; first create the JSON file, then issue the certificate

vim server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "172.16.105.220",
    "172.16.105.230",
    "172.16.105.213"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
Note: "hosts" lists the IPs of the servers etcd runs on.
7. Issue the Etcd server certificate; server.pem and server-key.pem are generated in the current directory
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
8. List the generated certificates
ls *pem
ca-key.pem ca.pem server-key.pem server.pem
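Optionally, the cfssl-certinfo tool installed above can confirm the SANs and validity period of the issued certificate:
# print the parsed certificate (hosts, not-before/not-after, issuer)
cfssl-certinfo -cert server.pem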
2. Deploy the Etcd database cluster
- etcd version used: etcd-v3.3.10-linux-amd64.tar.gz
- Binary download: https://github.com/coreos/etcd/releases/tag/v3.3.10
1. Download the tarball, extract it, and enter the extracted directory
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64
2. Create a few directories to keep etcd manageable, and move the binaries into place
mkdir /opt/etcd/{cfg,bin,ssl} -p
mv etcd etcdctl /opt/etcd/bin/
3. Create the etcd configuration file

vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.105.220:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.105.220:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.105.220:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.105.220:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.105.220:2380,etcd02=https://172.16.105.230:2380,etcd03=https://172.16.105.213:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

· ETCD_NAME - node name
· ETCD_DATA_DIR - data directory
· ETCD_LISTEN_PEER_URLS - cluster peer listen address
· ETCD_LISTEN_CLIENT_URLS - client listen address
· ETCD_INITIAL_ADVERTISE_PEER_URLS - peer address advertised to the cluster
· ETCD_ADVERTISE_CLIENT_URLS - client address advertised to the cluster
· ETCD_INITIAL_CLUSTER - cluster member addresses
· ETCD_INITIAL_CLUSTER_TOKEN - cluster token
· ETCD_INITIAL_CLUSTER_STATE - state when joining: new for a new cluster, existing to join an existing cluster
4. Create the systemd unit for etcd

vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
5. Copy the certificate files into the target directory
cp /root/k8s/etcd-cert/{ca,ca-key,server-key,server}.pem /opt/etcd/ssl/
6. Start etcd and enable it at boot
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
7. After starting, the first etcd node may hang waiting for the other two members; start etcd on the other two nodes as well
# 1. copy the etcd directory to the two other nodes
scp -r /opt/etcd/ root@172.16.105.230:/opt/
scp -r /opt/etcd/ root@172.16.105.213:/opt/
# 2. copy the systemd unit to the two other nodes
scp -r /usr/lib/systemd/system/etcd.service root@172.16.105.230:/usr/lib/systemd/system/
scp -r /usr/lib/systemd/system/etcd.service root@172.16.105.213:/usr/lib/systemd/system/
8. Edit /opt/etcd/cfg/etcd on the two other nodes

# node 2:
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.105.230:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.105.230:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.105.230:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.105.230:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.105.220:2380,etcd02=https://172.16.105.230:2380,etcd03=https://172.16.105.213:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# node 3:
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.105.213:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.105.213:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.105.213:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.105.213:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.105.220:2380,etcd02=https://172.16.105.230:2380,etcd03=https://172.16.105.213:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
9. Start the service on both nodes and enable it at boot
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
10. Check the first etcd node's log

Aug 6 11:13:54 izbp14x4an2p4z7awyek7mz etcd: updating the cluster version from 3.0 to 3.3
Aug 6 11:13:54 izbp14x4an2p4z7awyek7mz etcd: updated the cluster version from 3.0 to 3.3
Aug 6 11:13:54 izbp14x4an2p4z7awyek7mz etcd: enabled capabilities for version 3.3
11. Check the listening ports

# e.g. netstat -lntp | grep etcd
tcp 0 0 172.16.105.220:2379 0.0.0.0:* LISTEN 13021/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 13021/etcd
tcp 0 0 172.16.105.220:2380 0.0.0.0:* LISTEN 13021/etcd
12. Check the running process

root 13021 1.1 1.4 10541908 28052 ? Ssl 11:13 0:02 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://172.16.105.220:2380 --listen-client-urls=https://172.16.105.220:2379,http://127.0.0.1:2379 --advertise-client-urls=https://172.16.105.220:2379 --initial-advertise-peer-urls=https://172.16.105.220:2380 --initial-cluster=etcd01=https://172.16.105.220:2380,etcd02=https://172.16.105.230:2380,etcd03=https://172.16.105.213:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
13. Verify etcd with the etcdctl tool
# pass the absolute certificate paths and all etcd cluster endpoints
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" cluster-health

member 1d5fcc16a8c9361e is healthy: got healthy result from https://172.16.105.220:2379
member 7b28469233594fbd is healthy: got healthy result from https://172.16.105.230:2379
member b2e216e703023e21 is healthy: got healthy result from https://172.16.105.213:2379
cluster is healthy
Other:

# if the cluster fails to form, delete each node's data directory and restart
rm -rf /var/lib/etcd/default.etcd
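As an optional smoke test, write a key and read it back through the TLS endpoint (set and get are standard etcd v2 subcommands; the key /test is a throwaway example):
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379" set /test "hello"
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379" get /test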
3.4 Installing Docker on the Nodes
1. Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the official Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3. Install the latest docker-ce
yum -y install docker-ce
4. Configure a Docker registry mirror (accelerator)
Website: https://www.daocloud.io/mirror
Mirror command: curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
5. Restart docker
systemctl restart docker
6. Check the docker version: docker version
Version: 19.03.1
3.5 Deploying the Flannel Network on the Nodes
- Binary packages: https://github.com/coreos/flannel/releases
1. Write the allocated subnet into etcd for flanneld to use
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
2. Check the network configuration that was written
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" get /coreos.com/network/config
3. Extract the downloaded flannel package
tar -xvzf flannel-v0.10.0-linux-amd64.tar.gz
4. Create the directories and move the binaries into place
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
5. Create the flanneld configuration file

vim /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
6. Create the systemd unit for flannel

vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
7. Configure Docker to start on the flannel-assigned subnet

vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
8. Start flannel and docker, and enable them at boot
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl restart docker
9. Confirm docker and flannel are on the same subnet

docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 172.17.26.1 netmask 255.255.255.0 broadcast 172.17.26.255
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
        inet 172.17.26.0 netmask 255.255.255.255 broadcast 0.0.0.0
10. Check the route information
# 1. list the generated subnet entries
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" ls /coreos.com/network/subnets/

/coreos.com/network/subnets/172.17.59.0-24
/coreos.com/network/subnets/172.17.23.0-24
/coreos.com/network/subnets/172.17.26.0-24
# 2. inspect a specific subnet entry
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" get /coreos.com/network/subnets/172.17.59.0-24

# the node-to-subnet mapping
{"PublicIP":"172.16.105.220","BackendType":"vxlan","BackendData":{"VtepMAC":"ae:6b:20:4a:bd:ed"}}
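To confirm the overlay network actually carries traffic between Nodes, a quick test is to ping one container from another across hosts (busybox is an arbitrary test image; the target IP below is an example address from the 172.17.59.0/24 subnet shown above):
# on Node 1: start a test container and note its eth0 address
docker run -it --rm busybox sh
ip addr show eth0
# on Node 2: ping the Node 1 container across the vxlan overlay
docker run -it --rm busybox ping -c 3 172.17.59.2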
3.6 Deploying a Single-Master Kubernetes Cluster
- Binary download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
- Downloading kubernetes-server-linux-amd64.tar.gz alone is enough; it contains all required components.
1. Generate certificates
1.1 Create the first JSON file used to generate certificates

vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
1.2 Create the second JSON file used to generate certificates

vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
1.3 Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
1.4 Create the JSON file for the apiserver certificate. Note: the hosts list must include every IP used to reach the apiserver (Master and load-balancer/VIP addresses); Node IPs are not required.

vim server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "172.16.105.220",
    "172.16.105.210",
    "add further IPs as needed; Node IPs are not required",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
1.5 Generate the apiserver certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
1.6 Create the JSON file for the kube-proxy certificate

vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
1.7 Generate the kube-proxy certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
1.8 List all generated certificates

ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
2. Deploy the Master apiserver component
1. Download the tarball into the k8s directory, extract it, and enter the bin directory
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
2. Create the directories
mkdir /opt/kubernetes/{bin,cfg,ssl,logs} -p
3. Copy the binaries into the bin directory
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
4. Copy the generated certificates into the ssl directory
cp ca.pem ca-key.pem server.pem server-key.pem /opt/kubernetes/ssl/
5. Create the token file

vim /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Column 1: a random token string (you can generate your own; see the sketch after this list)
Column 2: user name
Column 3: UID
Column 4: user group
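If you generate your own token instead of reusing the one above, a common approach looks like this; whatever value you produce must also be used as BOOTSTRAP_TOKEN in the kubeconfig step later:
# generate a 16-byte random hex string and write token.csv
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF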
6. Create the apiserver configuration file; make sure the certificate paths and etcd endpoints are correct

vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs \
--v=4 \
--etcd-servers=https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379 \
--bind-address=172.16.105.220 \
--secure-port=6443 \
--advertise-address=172.16.105.220 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--service-node-port-range=30000-50000 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Parameter notes:
· --logtostderr - logging switch
· --v - log verbosity level
· --etcd-servers - etcd cluster endpoints
· --bind-address - listen address
· --secure-port - https secure port
· --advertise-address - address advertised to the cluster
· --allow-privileged - allow privileged containers
· --service-cluster-ip-range - Service virtual IP range
· --enable-admission-plugins - admission control plugins
· --authorization-mode - authorization mode; enables RBAC authorization and Node self-management
· --enable-bootstrap-token-auth - enables the TLS bootstrap feature, covered later
· --token-auth-file - token file
· --service-node-port-range - default port range for NodePort Services
Logging:
# true: logs go to /var/log/messages by default
--logtostderr=true
# false: logs can be sent to a directory of your choice
--logtostderr=false
--log-dir=/opt/kubernetes/logs
7. Create the systemd unit for apiserver

vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
8. Start it and enable it at boot
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
9. Check the listening ports (127.0.0.1:8080 is the local insecure port; 6443 is the secure port)

tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 5431/kube-apiserver

tcp 0 0 172.16.105.220:6443 0.0.0.0:* LISTEN 5431/kube-apiserver
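A quick liveness probe against the local insecure port (healthz is a standard apiserver endpoint and should print "ok"):
curl http://127.0.0.1:8080/healthz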
3. Deploy the Master scheduler component
1. Create the scheduler configuration file

vim /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

Parameter notes:
· --master - connect to the local apiserver
· --leader-elect - when multiple replicas of this component run, automatically elect a leader (HA)
2. Create the systemd unit for the scheduler

vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
3. Start it and enable it at boot
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
4. Check the process

root 8393 0.5 1.1 45360 21356 ? Ssl 11:23 0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
4. Deploy the Master controller-manager component
1. Create the controller-manager configuration file

vim /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
2. Create the systemd unit for controller-manager

vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
3. Start it and enable it at boot
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
4. Check the process

root 8966 0.4 1.1 45360 20900 ? Ssl 11:27 0:00 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=87600h0m0s
5. Check all component statuses with kubectl
/opt/kubernetes/bin/kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
5. Create the kubeconfig files
Master node configuration
1. Bind the kubelet-bootstrap user (the user defined in the token file) to the system cluster role.
# grants kubelet-bootstrap the minimal permissions needed to have certificates issued
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
2. Create the kubeconfig files. In the directory where the kubernetes certificates were generated, save the following as kubeconfig.sh:

# create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
KUBE_APISERVER="https://172.16.105.220:6443"

# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/root/k8s/k8s-cert/kube-proxy.pem \
  --client-key=/root/k8s/k8s-cert/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
3. Run the script
bash kubeconfig.sh
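Optionally, sanity-check the generated files before distributing them; kubectl config view prints the cluster/user/context structure with the embedded certificate data redacted:
kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig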
4. Copy the generated kube-proxy.kubeconfig and bootstrap.kubeconfig to the Node machines.
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@172.16.105.230:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@172.16.105.213:/opt/kubernetes/cfg/
6. Deploy the Node kubelet component
1. Create the directories on each Node
mkdir -p /opt/kubernetes/{cfg,bin,logs,ssl}
2. Copy the following files into place
- use: /kubernetes/server/bin/kubelet
- use: /kubernetes/server/bin/kube-proxy
- copy the two files above to /opt/kubernetes/bin/ on each Node
3. Create the kubelet configuration file

vim /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs/ \
--v=4 \
--hostname-override=172.16.105.213 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Parameter notes:
· --hostname-override - the hostname this node shows in the cluster
· --kubeconfig - location of the kubeconfig file; generated automatically
· --bootstrap-kubeconfig - the bootstrap.kubeconfig file generated earlier
· --cert-dir - where issued certificates are stored
· --pod-infra-container-image - the image used to manage the Pod network
4. Create the kubelet.config configuration file

vim /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.16.105.213
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
5. Create the systemd unit for kubelet

vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
6. Start it and enable it at boot
systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet.service
7. Check the process

root 24607 0.8 1.7 626848 69140 ? Ssl 16:03 0:05 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/logs/ --v=4 --hostname-override=172.16.105.213 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
8. Approve the Nodes' join requests on the Master:
- A newly started Node has not joined the cluster yet; it must be approved manually first.
- On the Master, list the Nodes requesting certificate signing:
9. List the Nodes requesting to join the cluster
kubectl get csr

NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-7ZHhg19mVh1w2gfJOh55eaBsRisA_wT8EHZQfqCLPLE   21s   kubelet-bootstrap   Pending
node-csr-weeFsR6VVUNIHyohOgaGvy2Hr6M9qSUIkoGjQ_mUyOo   28s   kubelet-bootstrap   Pending
10. Approve the requests so the Nodes join the cluster
kubectl certificate approve node-csr-7ZHhg19mVh1w2gfJOh55eaBsRisA_wT8EHZQfqCLPLE
kubectl certificate approve node-csr-weeFsR6VVUNIHyohOgaGvy2Hr6M9qSUIkoGjQ_mUyOo
11. List the joined Nodes
kubectl get node

NAME STATUS ROLES AGE VERSION
172.16.105.213 Ready <none> 42s v1.12.1
172.16.105.230 Ready <none> 57s v1.12.1
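The ROLES column shows <none> because binary deployments do not label Nodes. If you want kubeadm-style role names, the label can be added by hand (purely cosmetic):
kubectl label node 172.16.105.213 node-role.kubernetes.io/node=
kubectl label node 172.16.105.230 node-role.kubernetes.io/node=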
7. Deploy the Node kube-proxy component
1. Create the kube-proxy configuration file

vim /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.16.105.213 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
2. Create the systemd unit for kube-proxy

vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
3. Start it and enable it at boot
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
4. Check the process

root 27166 0.3 0.5 41588 21332 ? Ssl 16:16 0:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=172.16.105.213 --cluster-cidr=10.0.0.0/24 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
8. Other settings
1. Fix: bind the anonymous user to a cluster role (cluster-admin here grants anonymous requests full cluster access, which is acceptable for a lab but far too broad for production)
kubectl create clusterrolebinding system:anonymous --clusterrole=cluster-admin --user=system:anonymous
3.7 Deploying a Multi-Master Kubernetes Cluster
1. Configure and deploy Master2
- Note: Master2 is configured the same way as the single Master; identical steps are skipped below.
- Note: copying the configuration files over verbatim can cause etcd connection problems
- Note: it is best to use the Master as the etcd endpoint.
1. In Master02's configuration files, change the IPs to Master02's own IP

--bind-address=172.16.105.212
--advertise-address=172.16.105.212
2. Start the K8S services on Master02
systemctl start kube-apiserver
systemctl start kube-scheduler
systemctl start kube-controller-manager
3. Check the cluster status
kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
4. Confirm the Nodes are still connected
kubectl get node

NAME STATUS ROLES AGE VERSION
172.16.105.213 Ready <none> 41h v1.12.1
172.16.105.230 Ready <none> 41h v1.12.1
2. Deploy the Nginx load balancer
- Note: keep system time in sync across machines so the certificates validate correctly
- nginx website: http://www.nginx.org
- documentation --> Installing nginx --> packages
1. Copy the official nginx repo definition into /etc/yum.repos.d/nginx.repo and adjust it for your CentOS version

vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
2. Reload the yum cache
yum clean all
yum makecache
3. Install nginx
yum install nginx -y
4. Edit the configuration file, adding a stream block at the same level as events

vim /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}

stream {
    log_format main "$remote_addr $upstream_addr - $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 172.16.105.220:6443;
        server 172.16.105.210:6443;
    }
    server {
        listen 172.16.105.231:6443;
        proxy_pass k8s-apiserver;
    }
}

Notes:
# stream creates a Layer 4 load balancer
stream {
    # log format
    log_format main "$remote_addr $upstream_addr $time_local $status"
    # log path
    access_log /var/log/nginx/k8s-access.log main;
    # backend pool; k8s-apiserver is the pool name
    upstream k8s-apiserver {
        server 172.16.105.220:6443;
        server 172.16.105.210:6443;
    }
    # listening server
    server {
        # local IP and port to listen on
        listen 172.16.105.231:6443;
        # proxy to the pool by name; this is Layer 4, so no http here
        proxy_pass k8s-apiserver;
    }
}
5. Start nginx to apply the configuration
systemctl start nginx
6. Check the listening port

tcp 0 0 172.16.105.231:6443 0.0.0.0:* LISTEN 19067/nginx: master
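To confirm the balancer really fronts the apiservers, curl the listen address from any machine; any HTTP response at all (typically a JSON 401/403 body, since the request carries no credentials) proves traffic is being forwarded:
curl -k https://172.16.105.231:6443/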
7. On every Node, edit the configuration files so they point at the load balancer machine
vim bootstrap.kubeconfig
server: https://172.16.105.231:6443
vim kubelet.kubeconfig
server: https://172.16.105.231:6443
vim kube-proxy.kubeconfig
server: https://172.16.105.231:6443
8. Restart the kubelet and kube-proxy clients on the Nodes
systemctl restart kubelet
systemctl restart kube-proxy
9. Check the Node processes

root 23226 0.0 0.4 300552 16460 ? Ssl Aug08 0:25 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 26986 1.5 1.5 632676 60740 ? Ssl 11:30 0:01 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/logs/ --v=4 --hostname-override=172.16.105.213 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 27584 0.7 0.5 41588 19896 ? Ssl 11:32 0:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=172.16.105.213 --cluster-cidr=10.0.0.0/24 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
10. Restart kube-apiserver on the Master
systemctl restart kube-apiserver
11. Check the Nginx log

172.16.105.213 172.16.105.220:6443 09/Aug/2019:13:34:59 +0800 200
172.16.105.230 172.16.105.220:6443 09/Aug/2019:13:34:59 +0800 200
172.16.105.213 172.16.105.220:6443 09/Aug/2019:13:34:59 +0800 200
172.16.105.230 172.16.105.220:6443 09/Aug/2019:13:34:59 +0800 200
172.16.105.230 172.16.105.220:6443 09/Aug/2019:13:35:00 +0800 200
3. Deploy Nginx2 + keepalived for high availability
- Note: the VIP must be one of the IPs authorized in the certificate, otherwise external access will fail
- Note: installing Nginx2 follows the same steps as the single Nginx above, so they are not repeated; only the key points are covered.
1. Install keepalived on both Nginx1 and Nginx2
yum -y install keepalived
2. Edit the main keepalived configuration on Nginx1 (Master)

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    # addresses that receive notification mail
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    # sender address for notification mail
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

# check via VRRP whether the local nginx service is healthy
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51    # VRRP route ID; unique per instance
    priority 100            # priority; set 90 on the backup server
    advert_int 1            # VRRP heartbeat interval; default 1 second
    # password authentication
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # VIP
    virtual_ipaddress {
        192.168.1.100/24
    }
    # use the check script
    track_script {
        check_nginx
    }
}
3. Edit the main keepalived configuration on Nginx2 (Slave)

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    # addresses that receive notification mail
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    # sender address for notification mail
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

# check via VRRP whether the local nginx service is healthy
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51    # VRRP route ID; unique per instance
    priority 90             # priority; the backup server is set to 90
    advert_int 1            # VRRP heartbeat interval; default 1 second
    # password authentication
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # VIP
    virtual_ipaddress {
        192.168.1.100/24
    }
    # use the check script
    track_script {
        check_nginx
    }
}
4. Create the check script on both Nginx1 and Nginx2

vim /etc/keepalived/check_nginx.sh
#!/bin/bash
# count the running nginx processes
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
5. Make the script executable
chmod +x /etc/keepalived/check_nginx.sh
6. Start keepalived on Nginx1 and Nginx2
systemctl start keepalived
7. Check the processes

root 1969 0.0 0.1 118608 1396 ? Ss 09:41 0:00 /usr/sbin/keepalived -D
root 1970 0.0 0.2 120732 2832 ? S  09:41 0:00 /usr/sbin/keepalived -D
root 1971 0.0 0.2 120732 2380 ? S  09:41 0:00 /usr/sbin/keepalived -D
8. Check the VIP on the Master

ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3d:1c:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.115/24 brd 192.168.1.255 scope global dynamic ens32
       valid_lft 5015sec preferred_lft 5015sec
    inet 192.168.1.100/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::4db8:8591:9f94:8837/64 scope link
       valid_lft forever preferred_lft forever
9. Check the VIP on the Slave (it should be absent; that is normal)

ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:09:b3:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.112/24 brd 192.168.1.255 scope global dynamic ens32
       valid_lft 7200sec preferred_lft 7200sec
    inet6 fe80::1dbe:11ff:f093:ef49/64 scope link
       valid_lft forever preferred_lft forever
10. Test VIP failover

# 1. kill nginx on the Nginx1 Master
pkill nginx
# 2. on the Nginx2 Slave, confirm the VIP has failed over
ip addr
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:09:b3:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.112/24 brd 192.168.1.255 scope global dynamic ens32
       valid_lft 4387sec preferred_lft 4387sec
    inet 192.168.1.100/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::1dbe:11ff:f093:ef49/64 scope link
       valid_lft forever preferred_lft forever
# 3. restart nginx and keepalived on Nginx1 and confirm the VIP fails back
systemctl start nginx
systemctl start keepalived
# 4. check the VIP on Nginx1
ip addr
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3d:1c:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.115/24 brd 192.168.1.255 scope global dynamic ens32
       valid_lft 7010sec preferred_lft 7010sec
    inet 192.168.1.100/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::4db8:8591:9f94:8837/64 scope link
       valid_lft forever preferred_lft forever
11. Update the proxy configuration on Nginx1 and Nginx2 to point at the dual-Master backends and listen on all interfaces

stream {
    log_format main "$remote_addr $upstream_addr - $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.1.108:6443;
        server 192.168.1.109:6443;
    }
    server {
        listen 0.0.0.0:6443;
        proxy_pass k8s-apiserver;
    }
}
12. Restart nginx
systemctl restart nginx
13. Connect K8S: change the server IP in every Node's configuration files to the VIP
1. Edit the configuration files
vim bootstrap.kubeconfig
server: https://192.168.1.100:6443
vim kube-proxy.kubeconfig
server: https://192.168.1.100:6443
2. Restart the Node services
systemctl restart kubelet
systemctl restart kube-proxy
3. Check the Nginx1 Master log

192.168.1.111 192.168.1.108:6443 - 22/Aug/2019:11:02:36 +0800 200
192.168.1.111 192.168.1.109:6443 - 22/Aug/2019:11:02:36 +0800 200
192.168.1.110 192.168.1.108:6443 - 22/Aug/2019:11:02:36 +0800 200
192.168.1.110 192.168.1.109:6443 - 22/Aug/2019:11:02:36 +0800 200
192.168.1.111 192.168.1.108:6443 - 22/Aug/2019:11:02:37 +0800 200