Based primarily on https://github.com/opsnull/follow-me-install-kubernetes-cluster
01. System initialization and global variables
Add the k8s and docker accounts
On every machine, create a k8s account with passwordless sudo:
$ sudo useradd -m k8s
$ sudo visudo
$ sudo grep '%wheel.*NOPASSWD: ALL' /etc/sudoers
%wheel ALL=(ALL) NOPASSWD: ALL
$ sudo gpasswd -a k8s wheel
On every machine, create a docker account, add the k8s account to the docker group, and configure the dockerd parameters:
$ sudo useradd -m docker
$ sudo gpasswd -a k8s docker
$ sudo mkdir -p /etc/docker/
$ cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
"max-concurrent-downloads": 20
}
Passwordless SSH login to the other nodes
ssh-copy-id root@docker86-18
ssh-copy-id root@docker86-21
ssh-copy-id root@docker86-91
ssh-copy-id root@docker86-9
ssh-copy-id k8s@docker86-155
ssh-copy-id k8s@docker86-18
ssh-copy-id root@docker86-21
ssh-copy-id root@docker86-91
ssh-copy-id root@docker86-9
source ./environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /opt/k8s/bin && chown -R k8s /opt/k8s && mkdir -p /etc/kubernetes/cert &&chown -R k8s /etc/kubernetes && mkdir -p /etc/etcd/cert && chown -R k8s /etc/etcd/cert && mkdir -p /var/lib/etcd && chown -R k8s /etc/etcd/cert"
scp environment.sh k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
Define global variables
cat <<EOF >environment.sh
#!/usr/bin/bash
# Encryption key used when generating the EncryptionConfig
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# Preferably pick currently unused CIDRs for the service and Pod networks
# Service CIDR: not routable before deployment; routable inside the cluster afterwards (ensured by kube-proxy and ipvs)
SERVICE_CIDR="10.69.0.0/16"
# Pod CIDR: a /16 is recommended; not routable before deployment; routable inside the cluster afterwards (ensured by flanneld)
CLUSTER_CIDR="170.22.0.0/16"
# Service port range (NodePort range)
export NODE_PORT_RANGE="10000-40000"
# IPs of all cluster machines
export NODE_IPS=(192.168.86.154 192.168.86.155 192.168.86.156 192.168.86.18 192.168.86.21 192.168.86.91 192.168.86.9)
# etcd nodes
export ETCD_NODE_IPS=(192.168.86.154 192.168.86.155 192.168.86.156)
# Hostnames corresponding to the cluster IPs
export NODE_NAMES=(docker86-154 docker86-155 docker86-156 docker86-18 docker86-21 docker86-91 docker86-9)
# kube-apiserver VIP (published by the HA component keepalived)
export MASTER_VIP=192.168.86.214
# kube-apiserver VIP address (the HA component haproxy listens on port 8443)
export KUBE_APISERVER="https://\${MASTER_VIP}:8443"
# Network interface on the HA nodes that carries the VIP
export VIP_IF="em1"
# etcd cluster client endpoints
export ETCD_ENDPOINTS="https://192.168.86.154:2379,https://192.168.86.155:2379,https://192.168.86.156:2379"
# IPs and ports used for etcd peer communication
export ETCD_NODES="docker86-154=https://192.168.86.154:2380,docker86-155=https://192.168.86.155:2380,docker86-156=https://192.168.86.156:2380"
# etcd prefix for the flanneld network configuration
export FLANNEL_ETCD_PREFIX="/kubernetes/network"
# kubernetes service IP (usually the first IP of SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.69.0.1"
# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.69.0.2"
# Cluster DNS domain
export CLUSTER_DNS_DOMAIN="cluster.local."
# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:\$PATH
EOF
Then copy the global-variables script to /opt/k8s/bin on every node:
source ./environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp environment.sh k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
CA certificate
Configuration file (the default signing expiry 17520h is 2 years; the kubernetes profile below uses 87600h, i.e. 10 years):
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "17520h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
CA certificate signing request:
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
- CN: Common Name. kube-apiserver extracts this field from the certificate and uses it as the request's user name; browsers use it to verify whether a site is legitimate.
- O: Organization. kube-apiserver extracts this field and uses it as the group the requesting user belongs to.
- kube-apiserver uses the extracted User and Group as the identity for RBAC authorization.
Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
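A quick optional sanity check of the generated CA (assumes openssl is installed):
openssl x509 -in ca.pem -noout -subject -dates
The subject should show CN=kubernetes, O=k8s, OU=4Paradigm, matching ca-csr.json.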
Copy the generated CA certificate, key and config file to /etc/kubernetes/cert on all nodes:
source /opt/k8s/bin/environment.sh # load the NODE_IPS variable
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert && chown -R k8s /etc/kubernetes"
scp ca*.pem ca-config.json k8s@${node_ip}:/etc/kubernetes/cert
done
Install the kubectl client
wget https://dl.k8s.io/v1.12.1/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
Distribute it to every node that will run kubectl:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp kubernetes/client/bin/kubectl k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
Create the admin certificate and private key
kubectl talks to the apiserver over the https secure port; the apiserver authenticates and authorizes the presented certificate.
kubectl is the cluster administration tool and needs the highest privileges, so we create an admin certificate with full permissions here.
Create the certificate signing request:
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "4Paradigm"
}
]
}
EOF
O is system:masters; when kube-apiserver receives this certificate it sets the request's Group to system:masters;
the predefined ClusterRoleBinding cluster-admin binds Group system:masters to the ClusterRole cluster-admin, which grants access to all APIs;
this certificate is only used by kubectl as a client certificate, so the hosts field is empty;
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*
Create the kubeconfig file
A kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate;
source /opt/k8s/bin/environment.sh
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubectl.kubeconfig
# set client credentials
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=kubectl.kubeconfig
# set the context
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kubectl.kubeconfig
# set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
--certificate-authority: the root certificate used to verify the kube-apiserver certificate;
--client-certificate, --client-key: the admin certificate and key just generated, used when connecting to kube-apiserver;
--embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig (without it, only the file paths are written); a quick way to inspect the result is shown below.
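An optional sanity check of the generated kubeconfig (it just prints the merged configuration, with embedded certificates redacted):
kubectl config view --kubeconfig=kubectl.kubeconfig
The cluster server field should point at ${KUBE_APISERVER} and the current context should be kubernetes.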
Distribute the kubeconfig file
Distribute it to every node that will run kubectl:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "mkdir -p ~/.kube"
scp kubectl.kubeconfig k8s@${node_ip}:~/.kube/config
ssh root@${node_ip} "mkdir -p ~/.kube"
scp kubectl.kubeconfig root@${node_ip}:~/.kube/config
done
It is saved to the user's ~/.kube/config file;
Install etcd
Download the latest release from https://github.com/coreos/etcd/releases :
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
Distribute the binaries to all cluster nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp etcd-v3.3.10-linux-amd64/etcd* k8s@${node_ip}:/opt/k8s/bin
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
Create the etcd certificate and private key
Create the certificate signing request:
cat > etcd-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.86.156",
"192.168.86.155",
"192.168.86.154"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*
Distribute the generated certificate and key to the etcd nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /etc/etcd/cert && chown -R k8s /etc/etcd/cert"
scp etcd*.pem k8s@${node_ip}:/etc/etcd/cert/
done
Note: strictly speaking only the etcd nodes need these files, so the loop above can iterate over ETCD_NODE_IPS instead of NODE_IPS.
Create the etcd systemd unit template file
cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
User=k8s
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \
--data-dir=/var/lib/etcd \
--name=##NODE_NAME## \
--cert-file=/etc/etcd/cert/etcd.pem \
--key-file=/etc/etcd/cert/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/cert/ca.pem \
--peer-cert-file=/etc/etcd/cert/etcd.pem \
--peer-key-file=/etc/etcd/cert/etcd-key.pem \
--peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--listen-peer-urls=https://##NODE_IP##:2380 \
--initial-advertise-peer-urls=https://##NODE_IP##:2380 \
--listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://##NODE_IP##:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=${ETCD_NODES} \
--initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
User: run as the k8s account;
WorkingDirectory, --data-dir: the working and data directory is /var/lib/etcd, which must be created before starting the service;
--name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
--cert-file, --key-file: certificate and key used for etcd server/client communication;
--trusted-ca-file: the CA that signed the client certificates, used to verify them;
--peer-cert-file, --peer-key-file: certificate and key used for etcd peer communication;
--peer-trusted-ca-file: the CA that signed the peer certificates, used to verify them;
Create and distribute the etcd systemd unit files for each node
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service
done
ls *.service
Distribute the generated systemd unit files:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /var/lib/etcd && chown -R k8s /var/lib/etcd"
scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done
Start the etcd service
source ./environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd &"
done
When an etcd process starts for the first time it waits for the other nodes to join the cluster, so systemctl start etcd hanging for a while is normal.
Check the startup result
source ./environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status etcd|grep Active"
done
Make sure the state is active (running); otherwise check the logs to find out why:
$ journalctl -u etcd
Verify the service status
After the etcd cluster is deployed, run the following on any etcd node:
source ./environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints=https://${node_ip}:2379 \
--cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem endpoint health
done
Expected output:
>>> 192.168.86.154
https://192.168.86.154:2379 is healthy: successfully committed proposal: took = 2.197007ms
>>> 192.168.86.155
https://192.168.86.155:2379 is healthy: successfully committed proposal: took = 2.299328ms
>>> 192.168.86.156
https://192.168.86.156:2379 is healthy: successfully committed proposal: took = 2.014274ms
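Another useful check (a sketch, run from any etcd node with the same certificates as above) is to list the cluster members and confirm all three appear as started:
source /opt/k8s/bin/environment.sh
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem member list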
05. Deploy the flannel network
Kubernetes requires that all nodes in the cluster (including the masters) can reach each other over the Pod network. flannel uses vxlan to build an overlay Pod network across nodes; it uses UDP port 8472, which must be opened (e.g. in security groups on public clouds such as AWS).
On first start, flanneld fetches the Pod network configuration from etcd, allocates an unused /24 for the local node, and creates the flannel.1 interface (the name may differ, e.g. flannel1).
flanneld writes the allocated Pod subnet information into /run/flannel/docker; docker later uses the environment variables in this file to configure the docker0 bridge.
Download and distribute the flanneld binaries
Download the latest release from https://github.com/coreos/flannel/releases :
mkdir flannel
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C flannel
Create the certificate signing request:
cat > flanneld-csr.json <<EOF
{
"CN": "flanneld",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
This certificate is only used by flanneld as a client certificate, so the hosts field is empty;
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
ls flanneld*pem
Distribute the flanneld binaries and the flannel certificate and key to all cluster nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp flannel/{flanneld,mk-docker-opts.sh} k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
ssh root@${node_ip} "mkdir -p /etc/flanneld/cert && chown -R k8s /etc/flanneld"
scp flanneld*.pem k8s@${node_ip}:/etc/flanneld/cert
done
Why flanneld needs a certificate
flanneld reads and writes subnet allocation information in the etcd cluster, and etcd has mutual x509 certificate authentication enabled, so flanneld needs its own certificate and private key (generated above).
Write the cluster Pod network configuration into etcd
Note: this step only needs to be executed once.
source /opt/k8s/bin/environment.sh
etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'
The current flanneld version (v0.10.0) does not support etcd v3, so the configuration key and subnet data are written with the etcd v2 API;
the Pod network ${CLUSTER_CIDR} written here must be a /16 and must match the --cluster-cidr parameter of kube-controller-manager;
Create the flanneld systemd unit file
source /opt/k8s/bin/environment.sh
export IFACE=eno1 # on some machines this is em1 or eth0
cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \\
-etcd-cafile=/etc/kubernetes/cert/ca.pem \\
-etcd-certfile=/etc/flanneld/cert/flanneld.pem \\
-etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \\
-etcd-endpoints=${ETCD_ENDPOINTS} \\
-etcd-prefix=${FLANNEL_ETCD_PREFIX} \\
-iface=${IFACE}
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into /run/flannel/docker; when docker starts later it uses the environment variables in this file to configure the docker0 bridge (a sample of that file is sketched below);
flanneld communicates with other nodes via the interface of the system default route; on nodes with multiple interfaces (e.g. private and public), use the -iface parameter to pick the one to use, as with eno1 above;
flanneld must run as root;
the full unit is in flanneld.service
Note:
IFACE is eno1 on some machines, em1 or eth0 on others; check with ifconfig.
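After flanneld is running, /run/flannel/docker typically looks something like the following (illustrative only; the exact bip, mtu and flags depend on the flannel version and the subnet allocated to the node):
DOCKER_NETWORK_OPTIONS=" --bip=170.22.76.1/24 --ip-masq=false --mtu=1450"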
Distribute the flanneld systemd unit file to all nodes
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp flanneld.service root@${node_ip}:/etc/systemd/system/
done
Start the flanneld service
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
done
Check the startup result
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status flanneld|grep Active"
done
Make sure the state is active (running); otherwise check the logs to find out why:
$ journalctl -u flanneld
Check the Pod subnets allocated to each flanneld
View the cluster Pod network (/16):
source /opt/k8s/bin/environment.sh
etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
get ${FLANNEL_ETCD_PREFIX}/config
Output:
{"Network":"170.22.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}
List the allocated Pod subnets (/24):
source /opt/k8s/bin/environment.sh
etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
ls ${FLANNEL_ETCD_PREFIX}/subnets
Output:
/kubernetes/network/subnets/170.22.76.0-24
/kubernetes/network/subnets/170.22.84.0-24
/kubernetes/network/subnets/170.22.45.0-24
/kubernetes/network/subnets/170.22.7.0-24
/kubernetes/network/subnets/170.22.12.0-24
/kubernetes/network/subnets/170.22.78.0-24
/kubernetes/network/subnets/170.22.5.0-24
View the node IP and flannel interface address corresponding to one Pod subnet:
source /opt/k8s/bin/environment.sh
etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
get ${FLANNEL_ETCD_PREFIX}/subnets/170.22.76.0-24
Output:
{"PublicIP":"192.168.86.156","BackendType":"vxlan","BackendData":{"VtepMAC":"6a:aa:ca:8a:ac:ed"}}
Verify that the nodes can reach each other over the Pod network
After flannel is deployed on each node, check that the flannel interface was created (the name may be flannel0, flannel.0, flannel.1, etc.):
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh ${node_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
done
Output (the addresses fall inside the 170.22.0.0/16 Pod network, for example):
inet 170.22.76.0/32 scope global flannel.1
inet 170.22.84.0/32 scope global flannel.1
inet 170.22.45.0/32 scope global flannel.1
From each node, ping all of the flannel interface IPs and make sure they are reachable:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh ${node_ip} "ping -c 1 172.30.81.0"
ssh ${node_ip} "ping -c 1 172.30.29.0"
ssh ${node_ip} "ping -c 1 172.30.39.0"
done
06-0. Deploy the master nodes
The kubernetes master nodes run the following components:
kube-apiserver
kube-scheduler
kube-controller-manager
kube-scheduler and kube-controller-manager can run in cluster mode: a leader is elected to do the work while the other instances block.
kube-apiserver can run as multiple instances (3 in this document), but the other components need a single, highly available address for it. This document uses keepalived and haproxy to provide a VIP and load balancing for kube-apiserver.
Download the latest binaries
Download the server tarball from the CHANGELOG page (may require a proxy from within China):
wget https://dl.k8s.io/v1.12.1/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
Copy the binaries to all nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp kubernetes/server/bin/* k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
If an older version is running, stop it first:
systemctl stop kubelet.service
systemctl stop kube-controller-manager.service
systemctl stop kube-apiserver.service
systemctl stop kube-proxy.service
systemctl stop kube-scheduler.service
systemctl stop etcd
06-1. Deploy the high-availability components (keepalived + haproxy)
Steps to make kube-apiserver highly available with keepalived and haproxy:
- keepalived publishes the VIP through which kube-apiserver is reached;
- haproxy listens on the VIP and proxies to all kube-apiserver instances, providing health checks and load balancing;
- the nodes running keepalived and haproxy are the LB nodes. keepalived runs in a one-master/multi-backup mode, so at least two LB nodes are needed.
This document reuses the three master machines; haproxy listens on port 8443, which must differ from kube-apiserver's port 6443 to avoid a conflict.
keepalived periodically checks the local haproxy process; if haproxy is unhealthy, a new master election is triggered and the VIP floats to the newly elected node, keeping the VIP highly available.
All components (kubectl, apiserver, controller-manager, scheduler, etc.) access kube-apiserver through the VIP on haproxy's port 8443.
Install the packages
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "yum install -y keepalived haproxy"
done
On Ubuntu machines, install with apt-get instead.
Create and distribute the haproxy configuration file
haproxy configuration file:
cat > haproxy.cfg <<EOF
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /var/run/haproxy-admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
nbproc 1
defaults
log global
timeout connect 5000
timeout client 10m
timeout server 10m
listen admin_stats
bind 0.0.0.0:10080
mode http
log 127.0.0.1 local0 err
stats refresh 30s
stats uri /status
stats realm welcome login\ Haproxy
stats auth admin:123456
stats hide-version
stats admin if TRUE
listen kube-master
bind 0.0.0.0:8443
mode tcp
option tcplog
balance source
server 192.168.86.154 192.168.86.154:6443 check inter 2000 fall 2 rise 2 weight 1
server 192.168.86.155 192.168.86.155:6443 check inter 2000 fall 2 rise 2 weight 1
server 192.168.86.156 192.168.86.156:6443 check inter 2000 fall 2 rise 2 weight 1
EOF
- haproxy exposes status information on port 10080;
- haproxy listens on port 8443 on all interfaces; this port must match the one in the ${KUBE_APISERVER} environment variable (a quick config syntax check is sketched below);
- the server lines list the IPs and ports of all kube-apiserver instances;
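A minimal syntax check of the generated configuration before distributing it (a sketch; assumes haproxy is already installed on the machine where haproxy.cfg was written):
haproxy -c -f haproxy.cfg
It should report that the configuration file is valid.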
Distribute haproxy.cfg to all master nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp haproxy.cfg root@${node_ip}:/etc/haproxy
done
Start the haproxy service
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl restart haproxy"
done
Check the haproxy service status
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status haproxy|grep Active"
done
Make sure the state is active (running); example output:
>>> 192.168.86.154
Active: active (running) since Tue 2018-11-06 10:48:13 CST; 5s ago
>>> 192.168.86.155
Active: active (running) since Tue 2018-11-06 10:48:14 CST; 5s ago
>>> 192.168.86.156
Active: active (running) since Tue 2018-11-06 10:48:13 CST; 5s ago
Otherwise check the logs to find out why:
journalctl -u haproxy
Check that haproxy is listening on port 8443:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "netstat -lnpt|grep haproxy"
done
Make sure the output looks like:
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 45606/haproxy
Create and distribute the keepalived configuration files
keepalived runs with one master and multiple backups, so there are two kinds of configuration file. There is one master config; the number of backup configs depends on the number of nodes. For this document the plan is:
master: 192.168.86.156
backup: 192.168.86.155, 192.168.86.154
master configuration file:
source /opt/k8s/bin/environment.sh
cat > keepalived-master.conf <<EOF
global_defs {
router_id lb-master-105
}
vrrp_script check-haproxy {
script "killall -0 haproxy"
interval 5
weight -30
}
vrrp_instance VI-kube-master {
state MASTER
priority 120
dont_track_primary
interface ${VIP_IF}
virtual_router_id 68
advert_int 3
track_script {
check-haproxy
}
virtual_ipaddress {
${MASTER_VIP}
}
}
EOF
The interface carrying the VIP (interface ${VIP_IF}) is em1;
killall -0 haproxy checks whether the local haproxy process is alive; if not, the node's priority is reduced by 30, which triggers a new master election;
router_id and virtual_router_id identify the keepalived instances belonging to this HA group; if there are several keepalived HA groups, each must use different values;
backup configuration file:
source /opt/k8s/bin/environment.sh
cat > keepalived-backup.conf <<EOF
global_defs {
router_id lb-backup-105
}
vrrp_script check-haproxy {
script "killall -0 haproxy"
interval 5
weight -30
}
vrrp_instance VI-kube-master {
state BACKUP
priority 110
dont_track_primary
interface ${VIP_IF}
virtual_router_id 68
advert_int 3
track_script {
check-haproxy
}
virtual_ipaddress {
${MASTER_VIP}
}
}
EOF
The interface carrying the VIP (interface ${VIP_IF}) is em1;
killall -0 haproxy checks whether the local haproxy process is alive; if not, the node's priority is reduced by 30, which triggers a new master election;
router_id and virtual_router_id identify the keepalived instances belonging to this HA group; if there are several keepalived HA groups, each must use different values;
the priority must be lower than the master's;
Distribute the keepalived configuration files
Distribute the master configuration file:
scp keepalived-master.conf root@192.168.86.156:/etc/keepalived/keepalived.conf
Distribute the backup configuration files:
scp keepalived-backup.conf root@192.168.86.155:/etc/keepalived/keepalived.conf
scp keepalived-backup.conf root@192.168.86.154:/etc/keepalived/keepalived.conf
Start the keepalived service
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl restart keepalived"
done
Check the keepalived service
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status keepalived|grep Active"
done
Make sure the state is active (running); otherwise check the logs (journalctl -u keepalived) to find out why. Example output:
>>> 192.168.86.154
Active: active (running) since Tue 2018-11-06 10:54:01 CST; 17s ago
>>> 192.168.86.155
Active: active (running) since Tue 2018-11-06 10:54:03 CST; 18s ago
>>> 192.168.86.156
Active: active (running) since Tue 2018-11-06 10:54:03 CST; 17s ago
Check which node holds the VIP and make sure the VIP answers ping:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh ${node_ip} "/usr/sbin/ip addr show ${VIP_IF}"
ssh ${node_ip} "ping -c 1 ${MASTER_VIP}"
done
View the haproxy status page
Open ${MASTER_VIP}:10080/status in a browser to view the haproxy status page.
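A simple failover test (a sketch, not part of the original procedure; it assumes the VIP currently sits on 192.168.86.156 and that briefly stopping one haproxy instance is acceptable):
source /opt/k8s/bin/environment.sh
ssh root@192.168.86.156 "systemctl stop haproxy"
sleep 10
for node_ip in ${ETCD_NODE_IPS[@]}; do ssh root@${node_ip} "/usr/sbin/ip addr show ${VIP_IF} | grep ${MASTER_VIP} || true"; done
ping -c 1 ${MASTER_VIP}
ssh root@192.168.86.156 "systemctl start haproxy"
While haproxy is stopped on the master, the check script fails, its keepalived priority drops by 30 and the VIP should move to one of the backup nodes; the ping should keep succeeding.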
06-2. Deploy the kube-apiserver component
This section deploys a 3-node highly available master cluster behind keepalived and haproxy; the LB VIP is the environment variable ${MASTER_VIP}.
Create the kubernetes certificate and private key
source /opt/k8s/bin/environment.sh
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.86.156",
"192.168.86.155",
"192.168.86.154",
"192.168.86.9",
"${MASTER_VIP}",
"${CLUSTER_KUBERNETES_SVC_IP}",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.local",
"kubernetes.default.svc.local.com"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
- The hosts field lists the IPs and domain names authorized to use this certificate; it includes the VIP, the apiserver node IPs, the kubernetes service IP and the service domain names;
- the last character of a domain name must not be . (e.g. kubernetes.default.svc.cluster.local. is invalid), otherwise parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
- if a domain other than cluster.local is used, e.g. opsnull.com, change the last two names in the list to kubernetes.default.svc.opsnull and kubernetes.default.svc.opsnull.com;
- the kubernetes service IP is created automatically by the apiserver; it is normally the first IP of the range given by --service-cluster-ip-range and can later be checked with: kubectl get svc kubernetes
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
ls kubernetes*pem
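To confirm the SANs made it into the certificate (optional; assumes openssl is available):
openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'
The output should list the node IPs, the VIP, the kubernetes service IP and the kubernetes.default.* DNS names.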
Copy the generated certificate and key files to the master nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert/ && sudo chown -R k8s /etc/kubernetes/cert/"
scp kubernetes*.pem k8s@${node_ip}:/etc/kubernetes/cert/
done
The k8s account must be able to read and write /etc/kubernetes/cert/;
Create the encryption configuration file
source /opt/k8s/bin/environment.sh
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
Copy the encryption configuration file to /etc/kubernetes on the master nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp encryption-config.yaml root@${node_ip}:/etc/kubernetes/
done
Create the kube-apiserver systemd unit template file
source /opt/k8s/bin/environment.sh
cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \\
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--anonymous-auth=false \\
--experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
--advertise-address=##NODE_IP## \\
--bind-address=##NODE_IP## \\
--insecure-port=0 \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all \\
--enable-bootstrap-token-auth \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--service-node-port-range=${NODE_PORT_RANGE} \\
--tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
--tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
--client-ca-file=/etc/kubernetes/cert/ca.pem \\
--kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
--kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
--service-account-key-file=/etc/kubernetes/cert/ca-key.pem \\
--etcd-cafile=/etc/kubernetes/cert/ca.pem \\
--etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
--etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
--etcd-servers=${ETCD_ENDPOINTS} \\
--enable-swagger-ui=true \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kube-apiserver-audit.log \\
--event-ttl=1h \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5
Type=notify
User=k8s
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
- --experimental-encryption-provider-config: enables encryption at rest;
- --authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes and rejects unauthorized requests;
- --enable-admission-plugins: enables ServiceAccount, NodeRestriction and the other listed admission plugins;
- --service-account-key-file: the public key used to verify ServiceAccount tokens; it must pair with the private key given to kube-controller-manager via --service-account-private-key-file;
- --tls-*-file: the certificate, key and CA used by the apiserver. --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
- --kubelet-client-certificate, --kubelet-client-key: if set, the apiserver uses https to access the kubelet APIs; the user in that certificate (here kubernetes, from the kubernetes*.pem certificates) must be granted RBAC permissions, otherwise kubelet API calls fail as unauthorized;
- --bind-address: must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from outside;
- --insecure-port=0: disables the insecure port (8080);
- --service-cluster-ip-range: the Service Cluster IP range;
- --service-node-port-range: the NodePort range;
- --runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
- --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
- --apiserver-count=3: tells the apiserver how many instances run in the cluster (all apiserver instances serve requests; there is no leader election for the apiserver);
- User=k8s: run as the k8s account;
Create and distribute the kube-apiserver systemd unit files for each node
Substitute the template variables to create a unit file per node:
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${NODE_IPS[i]}.service
done
ls kube-apiserver*.service
Distribute the generated systemd unit files:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
scp kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
done
Start the kube-apiserver service
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
done
Check the kube-apiserver status
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
done
Make sure the state is active (running); otherwise check the logs on the master node to find out why:
journalctl -u kube-apiserver
Print the data kube-apiserver wrote into etcd
source /opt/k8s/bin/environment.sh
ETCDCTL_API=3 etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem \
get /registry/ --prefix --keys-only
Check the cluster information
kubectl cluster-info
kubectl get all --all-namespaces
kubectl get componentstatuses
Check the ports kube-apiserver listens on
sudo netstat -lnpt|grep kube
tcp 0 0 172.27.129.105:6443 0.0.0.0:* LISTEN 13075/kube-apiserve
6443: the secure https port; all requests on it are authenticated and authorized;
since the insecure port is disabled, nothing listens on 8080; a health check through the HA endpoint is sketched below.
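To confirm that the HA path works end to end, a hedged check through the VIP (assumes environment.sh is sourced and the admin.pem / admin-key.pem generated earlier are in the current directory):
source /opt/k8s/bin/environment.sh
curl -s --cacert /etc/kubernetes/cert/ca.pem --cert admin.pem --key admin-key.pem ${KUBE_APISERVER}/healthz
It should return ok, meaning haproxy on port 8443 is forwarding to a healthy kube-apiserver.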
Grant the kubernetes certificate access to the kubelet API
When commands such as kubectl exec, run or logs are executed, the apiserver forwards the request to the kubelet. The RBAC rule below authorizes the apiserver to call the kubelet API.
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
06-3. Deploy a highly available kube-controller-manager cluster
The cluster has 3 nodes; after startup a leader is elected and the other instances block. When the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available.
To secure communication, this document first generates an x509 certificate and private key, which kube-controller-manager uses in two cases:
when talking to kube-apiserver on the secure port;
when serving Prometheus-format metrics on the secure port (https, 10252);
Create the kube-controller-manager certificate and private key
Create the certificate signing request:
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"192.168.86.156",
"192.168.86.155",
"192.168.86.154"
],
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kube-controller-manager",
"OU": "4Paradigm"
}
]
}
EOF
The hosts list contains the IPs of all kube-controller-manager nodes;
CN is system:kube-controller-manager and O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Distribute the generated certificate and key to all master nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp kube-controller-manager*.pem k8s@${node_ip}:/etc/kubernetes/cert/
done
Create and distribute the kubeconfig file
The kubeconfig contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate;
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Distribute the kubeconfig to all master nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp kube-controller-manager.kubeconfig k8s@${node_ip}:/etc/kubernetes/
done
Create and distribute the kube-controller-manager systemd unit file
source /opt/k8s/bin/environment.sh
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \\
--port=0 \\
--secure-port=10252 \\
--bind-address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
--experimental-cluster-signing-duration=17520h \\
--root-ca-file=/etc/kubernetes/cert/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
--leader-elect=true \\
--feature-gates=RotateKubeletServerCertificate=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--horizontal-pod-autoscaler-use-rest-clients=true \\
--horizontal-pod-autoscaler-sync-period=10s \\
--tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
--use-service-account-credentials=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5
User=k8s
[Install]
WantedBy=multi-user.target
EOF
- --port=0: disables the http /metrics listener; --address is then ignored and --bind-address takes effect;
- --secure-port=10252, --bind-address=127.0.0.1: serve https /metrics on 127.0.0.1:10252 (only reachable from the node itself);
- --kubeconfig: path to the kubeconfig that kube-controller-manager uses to connect to and authenticate against kube-apiserver;
- --cluster-signing-*-file: sign the certificates created by TLS Bootstrap;
- --experimental-cluster-signing-duration: validity of the TLS Bootstrap certificates;
- --root-ca-file: the CA certificate placed into ServiceAccounts, used to verify the kube-apiserver certificate;
- --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key given to kube-apiserver via --service-account-key-file;
- --service-cluster-ip-range: the Service Cluster IP range; must match the same parameter of kube-apiserver;
- --leader-elect=true: cluster mode with leader election; the elected leader does the work while the others block;
- --feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
- --controllers=*,bootstrapsigner,tokencleaner: the list of enabled controllers; tokencleaner automatically cleans up expired bootstrap tokens;
- --horizontal-pod-autoscaler-*: custom-metrics-related parameters, supporting autoscaling/v2alpha1;
- --tls-cert-file, --tls-private-key-file: server certificate and key used when serving metrics over https;
- --use-service-account-credentials=true: run each controller with its own ServiceAccount credentials (see the permissions note below);
- User=k8s: run as the k8s account;
- kube-controller-manager does not verify client certificates on the https metrics endpoint, so --tls-ca-file is not needed (and is deprecated).
Distribute the systemd unit file to all master nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp kube-controller-manager.service root@${node_ip}:/etc/systemd/system/
done
Permissions of kube-controller-manager
The ClusterRole system:kube-controller-manager has very limited permissions: it can only create secrets, serviceaccounts and a few other objects; the permissions of each controller live in the per-controller ClusterRoles system:controller:XXX.
kube-controller-manager must therefore be started with --use-service-account-credentials=true, so that the main controller creates a ServiceAccount XXX-controller for each controller.
The built-in ClusterRoleBindings system:controller:XXX then grant each XXX-controller ServiceAccount the corresponding ClusterRole system:controller:XXX.
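These built-in roles can be inspected once the cluster is up (optional):
kubectl get clusterroles | grep 'system:controller:'
kubectl describe clusterrole system:kube-controller-manager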
Start the kube-controller-manager service
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done
The log directory must be created first;
Check the service status
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status kube-controller-manager|grep Active"
done
Make sure the state is active (running); otherwise check the logs to find out why:
$ journalctl -u kube-controller-manager
View the exported metrics
Note: run the following commands on a kube-controller-manager node.
kube-controller-manager listens on port 10252 and accepts https requests:
$ sudo netstat -lnpt|grep kube-controll
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 18377/kube-controll
$ curl -s --cacert /etc/kubernetes/cert/ca.pem https://127.0.0.1:10252/metrics |head
# HELP ClusterRoleAggregator_adds Total number of adds handled by workqueue: ClusterRoleAggregator
# TYPE ClusterRoleAggregator_adds counter
ClusterRoleAggregator_adds 3
# HELP ClusterRoleAggregator_depth Current depth of workqueue: ClusterRoleAggregator
# TYPE ClusterRoleAggregator_depth gauge
ClusterRoleAggregator_depth 0
# HELP ClusterRoleAggregator_queue_latency How long an item stays in workqueueClusterRoleAggregator before being requested.
# TYPE ClusterRoleAggregator_queue_latency summary
ClusterRoleAggregator_queue_latency{quantile="0.5"} 57018
ClusterRoleAggregator_queue_latency{quantile="0.9"} 57268
curl --cacert passes the CA certificate used to verify the kube-controller-manager https server certificate;
Test kube-controller-manager high availability
Stop the kube-controller-manager service on one or two nodes and watch the logs on the other nodes to see whether one of them acquires leadership.
View the current leader
$ kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"docker86-155_32dbaca9-e15f-11e8-87e7-e0db5521eb14","leaseDurationSeconds":15,"acquireTime":"2018-11-06T00:59:52Z","renewTime":"2018-11-06T01:34:01Z","leaderTransitions":39}'
creationTimestamp: 2018-10-10T15:18:11Z
name: kube-controller-manager
namespace: kube-system
resourceVersion: "6281708"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
uid: b38d3ea9-cc9f-11e8-9cde-d4ae52a3b675
As shown above, the current leader is the docker86-155 node.
References
On controller permissions and the use-service-account-credentials parameter: https://github.com/kubernetes/kubernetes/issues/48208
kubelet authentication and authorization: https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization
06-4. Deploy a highly available kube-scheduler cluster
The cluster has 3 nodes; after startup a leader is elected and the other instances block. When the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available.
To secure communication with the apiserver, this document first generates an x509 certificate and private key for kube-scheduler, used:
when talking to kube-apiserver on the secure port;
kube-scheduler also serves Prometheus-format metrics on port 10251 (plain http; see below);
Create the kube-scheduler certificate and private key
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"192.168.86.156",
"192.168.86.155",
"192.168.86.154"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kube-scheduler",
"OU": "4Paradigm"
}
]
}
EOF
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Create and distribute the kubeconfig file
The kubeconfig contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate;
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Distribute the kubeconfig to all master nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp kube-scheduler.kubeconfig k8s@${node_ip}:/etc/kubernetes/
done
Create and distribute the kube-scheduler systemd unit file
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \\
--address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5
User=k8s
[Install]
WantedBy=multi-user.target
EOF
--address: accept http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support https;
--kubeconfig: path to the kubeconfig that kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: cluster mode with leader election; the elected leader does the work while the others block;
User=k8s: run as the k8s account;
Distribute the systemd unit file to all master nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp kube-scheduler.service root@${node_ip}:/etc/systemd/system/
done
Start the kube-scheduler service
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
done
The log directory must be created first;
Check the service status
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status kube-scheduler|grep Active"
done
Make sure the state is active (running); otherwise check the logs to find out why:
journalctl -u kube-scheduler
View the exported metrics
Note: run the following commands on a kube-scheduler node.
kube-scheduler listens on port 10251 and accepts http requests:
$ sudo netstat -lnpt|grep kube-sche
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 23783/kube-schedule
$ curl -s http://127.0.0.1:10251/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 9.7715e-05
go_gc_duration_seconds{quantile="0.25"} 0.000107676
go_gc_duration_seconds{quantile="0.5"} 0.00017868
go_gc_duration_seconds{quantile="0.75"} 0.000262444
go_gc_duration_seconds{quantile="1"} 0.001205223
Test kube-scheduler high availability
Stop the kube-scheduler service on one or two master nodes and check (via the systemd logs) whether another node acquires leadership.
View the current leader
$ kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node3_61f34593-6cc8-11e8-8af7-5254002f288e","leaseDurationSeconds":15,"acquireTime":"2018-06-10T16:09:56Z","renewTime":"2018-06-10T16:20:54Z","leaderTransitions":1}'
creationTimestamp: 2018-06-10T16:07:33Z
name: kube-scheduler
namespace: kube-system
resourceVersion: "4645"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
uid: 62382d98-6cc8-11e8-96fa-525400ba84c6
07-1. Deploy the docker component
docker is the container runtime and manages the container lifecycle. kubelet talks to docker through the Container Runtime Interface (CRI).
Install dependencies
Download and distribute the docker binaries
Download the latest release from http://mirrors.ustc.edu.cn/docker-ce/linux/static/stable/x86_64/ :
wget http://mirrors.ustc.edu.cn/docker-ce/linux/static/stable/x86_64/docker-18.06.1-ce.tgz
tar -xvf docker-18.06.1-ce.tgz
Distribute the binaries to all worker nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp docker/docker* k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
Create and distribute the systemd unit file
cat > docker.service <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
- The EOF marker is quoted, so bash does not expand variables in the document, e.g. $DOCKER_NETWORK_OPTIONS;
- dockerd invokes other docker commands at runtime, e.g. docker-proxy, so the directory containing the docker binaries must be on PATH;
- flanneld writes its network configuration into /run/flannel/docker at startup; dockerd reads the DOCKER_NETWORK_OPTIONS environment variable from that file before starting and uses it to configure the docker0 bridge;
- if several EnvironmentFile options are specified, /run/flannel/docker must come last, so that docker0 uses the bip generated by flanneld;
- docker must run as root;
- starting with docker 1.13, the default policy of the iptables FORWARD chain may be set to DROP, which breaks pinging Pod IPs on other nodes; if this happens, set the policy back to ACCEPT with sudo iptables -P FORWARD ACCEPT, and also add /sbin/iptables -P FORWARD ACCEPT to /etc/rc.local so the policy survives a reboot (a sketch follows);
The full unit is in docker.service
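A sketch of making the FORWARD policy persistent via rc.local (assumes /etc/rc.local is used on these CentOS 7 hosts; adapt for systems that manage this differently):
cat >> /etc/rc.local <<'RCEOF'
/sbin/iptables -P FORWARD ACCEPT
RCEOF
chmod +x /etc/rc.local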
Distribute the systemd unit file to all worker machines:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp docker.service root@${node_ip}:/etc/systemd/system/
done
Create and distribute the docker configuration file
Use registry mirrors inside China to speed up image pulls and raise the download concurrency (dockerd must be restarted for this to take effect):
cat > docker-daemon.json <<EOF
{
"insecure-registries":["192.168.86.8:5000","registry.xxx.com"],
"registry-mirrors": ["https://jk4bb75a.mirror.aliyuncs.com", "https://docker.mirrors.ustc.edu.cn"],
"max-concurrent-downloads": 20
}
EOF
Distribute the docker configuration file to all worker nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /etc/docker/"
scp docker-daemon.json root@${node_ip}:/etc/docker/daemon.json
done
Start the docker service
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl stop firewalld && systemctl disable firewalld"
ssh root@${node_ip} "/usr/sbin/iptables -F && /usr/sbin/iptables -X && /usr/sbin/iptables -F -t nat && /usr/sbin/iptables -X -t nat"
ssh root@${node_ip} "/usr/sbin/iptables -P FORWARD ACCEPT"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
ssh root@${node_ip} 'for intf in /sys/devices/virtual/net/docker0/brif/*; do echo 1 > $intf/hairpin_mode; done'
ssh root@${node_ip} "sudo sysctl -p /etc/sysctl.d/kubernetes.conf"
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl restart docker"
done
- Disable firewalld (CentOS 7) / ufw (Ubuntu 16.04), otherwise duplicate iptables rules may be created;
- clean up old iptables rules and chains;
- enable hairpin mode on the virtual interfaces under the docker0 bridge;
Check the service status
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status docker|grep Active"
done
Make sure the state is active (running); otherwise check the logs to find out why:
$ journalctl -u docker
Check the docker0 bridge
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
done
Confirm that on each worker node the docker0 bridge and the flannel.1 interface are in the same subnet (e.g. 172.30.39.0 and 172.30.39.1 in the sample output below):
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether ce:2f:d6:53:e5:f3 brd ff:ff:ff:ff:ff:ff
inet 172.30.39.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::cc2f:d6ff:fe53:e5f3/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:bf:65:16:5c brd ff:ff:ff:ff:ff:ff
inet 172.30.39.1/24 brd 172.30.39.255 scope global docker0
valid_lft forever preferred_lft forever
07-2. Deploy the kubelet component
kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs.
On startup, kubelet automatically registers the node with kube-apiserver; its built-in cadvisor collects and reports the node's resource usage.
For security, this document only opens the secure https port, which authenticates and authorizes requests and rejects unauthorized access (e.g. from apiserver or heapster without credentials).
Create the kubelet bootstrap kubeconfig files
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
# create a token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:${node_name} \
--kubeconfig ~/.kube/config)
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# set client credentials
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# set the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
done
- The kubeconfig contains a token rather than a certificate; the certificates are created later by controller-manager.
View the tokens kubeadm created for each node:
$ kubeadm token list --kubeconfig ~/.kube/config
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
k0s2bj.7nvw1zi1nalyz4gz 23h 2018-06-14T15:14:31+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node1
mkus5s.vilnjk3kutei600l 23h 2018-06-14T15:14:32+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node3
zkiem5.0m4xhw0jc8r466nk 23h 2018-06-14T15:14:32+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node2
- The created tokens are valid for 1 day; once expired they can no longer be used and are cleaned up by kube-controller-manager's tokencleaner (if that controller is enabled);
- when kube-apiserver receives a kubelet bootstrap token, it sets the request's user to system:bootstrap:<token id> and the group to system:bootstrappers;
The Secrets associated with the tokens:
$ kubectl get secrets -n kube-system
NAME TYPE DATA AGE
bootstrap-token-k0s2bj bootstrap.kubernetes.io/token 7 1m
bootstrap-token-mkus5s bootstrap.kubernetes.io/token 7 1m
bootstrap-token-zkiem5 bootstrap.kubernetes.io/token 7 1m
default-token-99st7 kubernetes.io/service-account-token 3 2d
Distribute the bootstrap kubeconfig files to all worker nodes
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
scp kubelet-bootstrap-${node_name}.kubeconfig k8s@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done
Create and distribute the kubelet configuration file
Starting with v1.10, some kubelet parameters must be set in a configuration file; kubelet --help warns:
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag
Create the kubelet configuration template file:
source /opt/k8s/bin/environment.sh
cat > kubelet.config.json.template <<EOF
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/cert/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "##NODE_IP##",
"port": 10250,
"readOnlyPort": 0,
"cgroupDriver": "cgroupfs",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"clusterDomain": "${CLUSTER_DNS_DOMAIN}",
"clusterDNS": ["${CLUSTER_DNS_SVC_IP}"]
}
EOF
- address: the API listen address; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call the kubelet API;
- readOnlyPort=0: disables the read-only port (default 10255);
- authentication.anonymous.enabled: false, so anonymous access to port 10250 is not allowed;
- authentication.x509.clientCAFile: the CA that signed client certificates, enabling x509 client certificate authentication;
- authentication.webhook.enabled=true: enables https bearer token authentication;
- requests that pass neither x509 nor webhook authentication (from kube-apiserver or any other client) are rejected with Unauthorized;
- authorization.mode=Webhook: the kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group may act on a resource (RBAC);
- featureGates.RotateKubeletClientCertificate, featureGates.RotateKubeletServerCertificate: rotate certificates automatically; their validity is governed by kube-controller-manager's --experimental-cluster-signing-duration parameter;
- the kubelet must run as root;
Create and distribute a kubelet configuration file per node:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
sed -e "s/##NODE_IP##/${node_ip}/" kubelet.config.json.template > kubelet.config-${node_ip}.json
scp kubelet.config-${node_ip}.json root@${node_ip}:/etc/kubernetes/kubelet.config.json
done
The substituted configuration file: kubelet.config.json
Create and distribute the kubelet systemd unit file
Create the kubelet systemd unit template:
cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
--cert-dir=/etc/kubernetes/cert \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet.config.json \\
--hostname-override=##NODE_NAME## \\
--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \\
--allow-privileged=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
- If --hostname-override is set, kube-proxy must set it as well, otherwise the Node may not be found;
- --bootstrap-kubeconfig: points at the bootstrap kubeconfig; kubelet uses its user name and token to send a TLS Bootstrapping request to kube-apiserver;
- after Kubernetes approves the kubelet's CSR, the certificate and key are written into the --cert-dir directory and referenced from the --kubeconfig file;
The substituted unit file: kubelet.service
Create and distribute a kubelet systemd unit file per node:
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done
Bootstrap Token Auth and granting permissions
On startup, kubelet checks whether the file given by --kubeconfig exists; if not, it uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
kube-apiserver authenticates the token in the CSR (the token created earlier with kubeadm) and, on success, sets the request's user to system:bootstrap:<token id> and the group to system:bootstrappers.
By default this user and group are not allowed to create CSRs, so kubelet fails to start with errors like:
$ sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 06 06:42:36 kube-node1 kubelet[26986]: F0506 06:42:36.314378 26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:
$ kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
Start the kubelet service
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /var/lib/kubelet"
ssh root@${node_ip} "/usr/sbin/swapoff -a"
ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
- Disable the swap partition, otherwise kubelet fails to start;
- the working and log directories must be created first;
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl restart kubelet && systemctl status kubelet|grep Active:"
done
$ journalctl -u kubelet |tail
Jun 13 16:05:40 kube-node2 kubelet[22343]: I0613 16:05:40.388242 22343 feature_gate.go:226] feature gates: &{{} map[RotateKubeletServerCertificate:true RotateKubeletClientCertificate:true]}
Jun 13 16:05:40 kube-node2 kubelet[22343]: I0613 16:05:40.394342 22343 mount_linux.go:211] Detected OS with systemd
Jun 13 16:05:40 kube-node2 kubelet[22343]: W0613 16:05:40.394494 22343 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 13 16:05:40 kube-node2 kubelet[22343]: I0613 16:05:40.399508 22343 server.go:376] Version: v1.10.4
Jun 13 16:05:40 kube-node2 kubelet[22343]: I0613 16:05:40.399583 22343 feature_gate.go:226] feature gates: &{{} map[RotateKubeletServerCertificate:true RotateKubeletClientCertificate:true]}
Jun 13 16:05:40 kube-node2 kubelet[22343]: I0613 16:05:40.399736 22343 plugins.go:89] No cloud provider specified.
Jun 13 16:05:40 kube-node2 kubelet[22343]: I0613 16:05:40.399752 22343 server.go:492] No cloud provider specified: "" from the config file: ""
Jun 13 16:05:40 kube-node2 kubelet[22343]: I0613 16:05:40.399777 22343 bootstrap.go:58] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
Jun 13 16:05:40 kube-node2 kubelet[22343]: I0613 16:05:40.446068 22343 csr.go:105] csr for this node already exists, reusing
Jun 13 16:05:40 kube-node2 kubelet[22343]: I0613 16:05:40.453761 22343 csr.go:113] csr for this node is still valid
After startup, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates a TLS client certificate and key for the kubelet and writes the file referenced by --kubeconfig.
Note: kube-controller-manager must be configured with --cluster-signing-cert-file and --cluster-signing-key-file, otherwise no certificates are issued for TLS Bootstrap.
$ kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk 43s system:bootstrap:zkiem5 Pending
node-csr-oVbPmU-ikVknpynwu0Ckz_MvkAO_F1j0hmbcDa__sGA 27s system:bootstrap:mkus5s Pending
node-csr-u0E1-ugxgotO_9FiGXo8DkD6a7-ew8sX2qPE6KPS2IY 13m system:bootstrap:k0s2bj Pending
$ kubectl get nodes
No resources found.
- The CSRs of the three worker nodes are all Pending;
Approve the kubelet CSR requests
CSRs can be approved manually or automatically. The automatic approach is recommended, because from v1.8 onward the certificates generated after approval can be rotated automatically.
Approve CSR requests manually
List the CSRs:
$ kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk 43s system:bootstrap:zkiem5 Pending
node-csr-oVbPmU-ikVknpynwu0Ckz_MvkAO_F1j0hmbcDa__sGA 27s system:bootstrap:mkus5s Pending
node-csr-u0E1-ugxgotO_9FiGXo8DkD6a7-ew8sX2qPE6KPS2IY 13m system:bootstrap:k0s2bj Pending
approve CSR:
$ kubectl certificate approve node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk
certificatesigningrequest.certificates.k8s.io "node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk" approved
Check the approval result:
$ kubectl describe csr node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk
Name: node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 13 Jun 2018 16:05:04 +0800
Requesting User: system:bootstrap:zkiem5
Status: Approved
Subject:
Common Name: system:node:kube-node2
Serial Number:
Organization: system:nodes
Events: <none>
- Requesting User: the user that submitted the CSR; kube-apiserver authenticates and authorizes it;
- Subject: information about the certificate to be signed;
- The certificate's CN is system:node:kube-node2 and its Organization is system:nodes; kube-apiserver's Node authorization mode grants the corresponding permissions to this certificate;
Automatically approve CSR requests
Create three ClusterRoleBindings, used respectively to auto-approve client certificates and to renew client and server certificates:
cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: auto-approve-csrs-for-group
subjects:
- kind: Group
name: system:bootstrappers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-client-cert-renewal
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
resources: ["certificatesigningrequests/selfnodeserver"]
verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-server-cert-renewal
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: approve-node-server-renewal-csr
apiGroup: rbac.authorization.k8s.io
EOF
- auto-approve-csrs-for-group: automatically approves a node's first CSR; note that for the first CSR the requesting Group is system:bootstrappers;
- node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the Group of the automatically issued certificates is system:nodes;
- node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the Group of the automatically issued certificates is system:nodes;
Apply the configuration:
$ kubectl apply -f csr-crb.yaml
Check kubelet status
After a while (1-10 minutes), the CSRs of the three nodes are approved automatically:
$ kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-98h25 6m system:node:kube-node2 Approved,Issued
csr-lb5c9 7m system:node:kube-node3 Approved,Issued
csr-m2hn4 14m system:node:kube-node1 Approved,Issued
node-csr-7q7i0q4MF_K2TSEJj16At4CJFLlJkHIqei6nMIAaJCU 28m system:bootstrap:k0s2bj Approved,Issued
node-csr-ND77wk2P8k2lHBtgBaObiyYw0uz1Um7g2pRvveMF-c4 35m system:bootstrap:mkus5s Approved,Issued
node-csr-Nysmrw55nnM48NKwEJuiuCGmZoxouK4N8jiEHBtLQso 6m system:bootstrap:zkiem5 Approved,Issued
node-csr-QzuuQiuUfcSdp3j5W4B2UOuvQ_n9aTNHAlrLzVFiqrk 1h system:bootstrap:zkiem5 Approved,Issued
node-csr-oVbPmU-ikVknpynwu0Ckz_MvkAO_F1j0hmbcDa__sGA 1h system:bootstrap:mkus5s Approved,Issued
node-csr-u0E1-ugxgotO_9FiGXo8DkD6a7-ew8sX2qPE6KPS2IY 1h system:bootstrap:k0s2bj Approved,Issued
All nodes are Ready:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-node1 Ready <none> 18m v1.10.4
kube-node2 Ready <none> 10m v1.10.4
kube-node3 Ready <none> 11m v1.10.4
kube-controller-manager has generated a kubeconfig file and a public/private key pair for each node:
$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2293 Jun 13 17:07 /etc/kubernetes/kubelet.kubeconfig
$ ls -l /etc/kubernetes/cert/|grep kubelet
-rw-r--r-- 1 root root 1046 Jun 13 17:07 kubelet-client.crt
-rw------- 1 root root 227 Jun 13 17:07 kubelet-client.key
-rw------- 1 root root 1334 Jun 13 17:07 kubelet-server-2018-06-13-17-07-45.pem
lrwxrwxrwx 1 root root 58 Jun 13 17:07 kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2018-06-13-17-07-45.pem
- The kubelet-server certificate is rotated periodically (see the check below);
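A simple way to observe the rotation is to check the validity window of the current serving certificate on a node (an illustrative check using openssl):
$ # print the NotBefore/NotAfter dates of the kubelet serving certificate
$ sudo openssl x509 -noout -dates -in /etc/kubernetes/cert/kubelet-server-current.pem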
APIs exposed by kubelet
After startup, kubelet listens on several ports to receive requests from kube-apiserver and other components:
$ sudo netstat -lnpt|grep kubelet
tcp 0 0 172.27.129.111:4194 0.0.0.0:* LISTEN 2490/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 2490/kubelet
tcp 0 0 172.27.129.111:10250 0.0.0.0:* LISTEN 2490/kubelet
- 4194: cAdvisor HTTP service;
- 10248: healthz HTTP service (a quick local check follows);
- 10250: HTTPS API service; note that the read-only port 10255 is not enabled;
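The healthz endpoint is plain HTTP and bound to localhost, so it can only be checked directly on the node:
$ # 10248 is bound to 127.0.0.1, so this only works locally; it should return "ok"
$ curl http://127.0.0.1:10248/healthz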
For example, when kubectl exec -it nginx-ds-5rmws -- sh is executed, kube-apiserver sends the following request to kubelet:
POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1
kubelet serves the following HTTPS endpoints on port 10250:
- /pods、/runningpods
- /metrics、/metrics/cadvisor、/metrics/probes
- /spec
- /stats、/stats/container
- /logs
- management endpoints such as /run/, /exec/, /attach/, /portForward/, /containerLogs/;
See https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L434:3 for details.
Since anonymous authentication is disabled and webhook authorization is enabled, every request to the HTTPS API on port 10250 must be authenticated and authorized.
The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs:
$ kubectl describe clusterrole system:kubelet-api-admin
Name: system:kubelet-api-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
nodes [] [] [get list watch proxy]
nodes/log [] [] [*]
nodes/metrics [] [] [*]
nodes/proxy [] [] [*]
nodes/spec [] [] [*]
nodes/stats [] [] [*]
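If another component needs to call the kubelet API with a client certificate, its User (the certificate CN) can be bound to this ClusterRole; a sketch with a hypothetical user name kubelet-api-user (the bearer-token section below does the same for a ServiceAccount):
$ # bind a hypothetical user to the predefined ClusterRole system:kubelet-api-admin
$ kubectl create clusterrolebinding kubelet-api-user --clusterrole=system:kubelet-api-admin --user=kubelet-api-user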
kubelet API authentication and authorization
kubelet is configured with the following authentication parameters:
- authentication.anonymous.enabled: set to false; anonymous access to port 10250 is not allowed;
- authentication.x509.clientCAFile: specifies the CA certificate that signed the client certificates, enabling HTTPS client-certificate authentication;
- authentication.webhook.enabled=true: enables HTTPS bearer-token authentication;
and with the following authorization parameter:
- authorization.mode=Webhook: enables RBAC authorization;
When kubelet receives a request, it verifies the client certificate against clientCAFile or checks whether the bearer token is valid. If neither succeeds, the request is rejected as Unauthorized:
$ curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.86.156:10250/metrics
Unauthorized
$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.27.129.111:10250/metrics
Unauthorized
After authentication succeeds, kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group associated with the certificate or token has permission (RBAC) to operate on the requested resource;
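The same decision can be reproduced manually by submitting a SubjectAccessReview to kube-apiserver; a sketch (the user and resource attributes are illustrative, chosen to match the Forbidden message shown below):
cat <<EOF | kubectl create -o yaml -f -
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:kube-controller-manager
  resourceAttributes:
    group: ""
    resource: nodes
    subresource: metrics
    verb: get
EOF
# the status.allowed field in the returned object shows whether the request would be authorized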
Certificate authentication and authorization:
$ # a certificate with insufficient permissions;
$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.27.129.111:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
$ # the admin certificate with the highest privileges, created when deploying the kubectl CLI;
$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem https://192.168.86.156:10250/metrics|head
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0
The values of --cacert, --cert and --key must be file paths; the ./ in ./admin.pem above cannot be omitted, otherwise a 401 Unauthorized is returned;
Bearer token authentication and authorization:
Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it has permission to call the kubelet API:
kubectl create sa kubelet-api-test
kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
echo ${TOKEN}
$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://172.27.129.111:10250/metrics|head
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0
cAdvisor and metrics
cAdvisor collects the resource usage (CPU, memory, disk, network) of the containers on its node and exposes it on its own HTTP web page (port 4194) and on port 10250 in Prometheus metrics format.
Open http://172.27.129.105:4194/containers/ in a browser to view the cAdvisor monitoring page:

Open https://172.27.129.80:10250/metrics and https://172.27.129.80:10250/metrics/cadvisor in a browser to get the kubelet and cAdvisor metrics respectively.

Note:
- kubelet.config.json sets authentication.anonymous.enabled to false, so anonymous access to the HTTPS service on port 10250 is not allowed;
- Refer to A.瀏覽器訪問kube-apiserver安全端口.md to create and import the relevant certificates, then access port 10250 as above;
Get the kubelet configuration
Fetch each node's configuration from kube-apiserver:
curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem https://192.168.86.214:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ source /opt/k8s/bin/environment.sh
$ # the admin certificate with the highest privileges, created when deploying the kubectl CLI;
$ curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem ${KUBE_APISERVER}/api/v1/nodes/docker86-155/proxy/configz | jq \
'.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
{
"syncFrequency": "1m0s",
"fileCheckFrequency": "20s",
"httpCheckFrequency": "20s",
"address": "172.27.129.80",
"port": 10250,
"readOnlyPort": 10255,
"authentication": {
"x509": {},
"webhook": {
"enabled": false,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": true
}
},
"authorization": {
"mode": "AlwaysAllow",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"registryPullQPS": 5,
"registryBurst": 10,
"eventRecordQPS": 5,
"eventBurst": 10,
"enableDebuggingHandlers": true,
"healthzPort": 10248,
"healthzBindAddress": "127.0.0.1",
"oomScoreAdj": -999,
"clusterDomain": "cluster.local.",
"clusterDNS": [
"10.254.0.2"
],
"streamingConnectionIdleTimeout": "4h0m0s",
"nodeStatusUpdateFrequency": "10s",
"imageMinimumGCAge": "2m0s",
"imageGCHighThresholdPercent": 85,
"imageGCLowThresholdPercent": 80,
"volumeStatsAggPeriod": "1m0s",
"cgroupsPerQOS": true,
"cgroupDriver": "cgroupfs",
"cpuManagerPolicy": "none",
"cpuManagerReconcilePeriod": "10s",
"runtimeRequestTimeout": "2m0s",
"hairpinMode": "promiscuous-bridge",
"maxPods": 110,
"podPidsLimit": -1,
"resolvConf": "/etc/resolv.conf",
"cpuCFSQuota": true,
"maxOpenFiles": 1000000,
"contentType": "application/vnd.kubernetes.protobuf",
"kubeAPIQPS": 5,
"kubeAPIBurst": 10,
"serializeImagePulls": false,
"evictionHard": {
"imagefs.available": "15%",
"memory.available": "100Mi",
"nodefs.available": "10%",
"nodefs.inodesFree": "5%"
},
"evictionPressureTransitionPeriod": "5m0s",
"enableControllerAttachDetach": true,
"makeIPTablesUtilChains": true,
"iptablesMasqueradeBit": 14,
"iptablesDropBit": 15,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"failSwapOn": true,
"containerLogMaxSize": "10Mi",
"containerLogMaxFiles": 5,
"enforceNodeAllocatable": [
"pods"
],
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1"
}
Or refer to the comments in the code: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go
References
- kubelet authentication and authorization: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
source /opt/k8s/bin/environment.sh
for node_ip in ${ETCD_NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /var/lib/kube-proxy"
ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp /usr/local/bin/pull-google-container root@${node_ip}:/usr/local/bin/
ssh root@${node_ip} "/usr/local/bin/pull-google-container k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0"
done
cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: heapster-kubelet-api
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
name: heapster
namespace: kube-system
EOF
07-3. Deploying the kube-proxy component
kube-proxy runs on all worker nodes. It watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance service traffic.
This document describes deploying kube-proxy in ipvs mode.
Download and distribute the kube-proxy binary
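kube-proxy ships in the kubernetes-server tarball downloaded in the kube-apiserver chapter; a sketch of distributing it to all nodes (the local path kubernetes/server/bin/ is assumed from the standard tarball layout):
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
# copy the kube-proxy binary into the cluster bin directory and make it executable
scp kubernetes/server/bin/kube-proxy k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/kube-proxy"
done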
Install dependencies
Each node needs the ipvsadm and ipset commands installed and the ip_vs kernel module loaded (a sketch follows).
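A sketch of installing the packages and loading the modules on every node (assumes a yum-based distribution, as used elsewhere in this guide):
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
# install the ipvs userspace tools and load the ip_vs kernel modules
ssh root@${node_ip} "yum install -y ipvsadm ipset"
ssh root@${node_ip} "modprobe ip_vs && modprobe ip_vs_rr && lsmod | grep ip_vs"
done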
Create the kube-proxy certificate
Create a certificate signing request:
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
- CN: sets the certificate's User to system:kube-proxy;
- The predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the Proxy-related APIs of kube-apiserver;
- The certificate is used by kube-proxy only as a client certificate, so the hosts field is empty;
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Create and distribute the kubeconfig file
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
- --embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without this option, only the certificate file paths are written);
Distribute the kubeconfig file:
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
scp kube-proxy.kubeconfig k8s@${node_name}:/etc/kubernetes/
done
Create the kube-proxy configuration file
Starting with v1.10, some kube-proxy parameters can be set in a configuration file. You can generate this file with the --write-config-to option, or refer to the kubeproxyconfig type definition source: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/kubeproxyconfig/types.go
Create the kube-proxy config file template:
cat >kube-proxy.config.yaml.template <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: ${CLUSTER_CIDR}
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
EOF
- bindAddress: the listen address;
- clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
- clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster; only when --cluster-cidr or --masquerade-all is specified will kube-proxy SNAT requests to Service IPs;
- hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find the Node after starting and will not create any ipvs rules;
- mode: use ipvs mode;
Create and distribute a kube-proxy configuration file for each node:
source /opt/k8s/bin/environment.sh
for (( i=0; i < 7; i++ ))
do
echo ">>> ${NODE_NAMES[i]}"
sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy.config.yaml.template > kube-proxy-${NODE_NAMES[i]}.config.yaml
scp kube-proxy-${NODE_NAMES[i]}.config.yaml root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy.config.yaml
done
The substituted kube-proxy.config.yaml file: kube-proxy.config.yaml
Create and distribute the kube-proxy systemd unit file
source /opt/k8s/bin/environment.sh
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.config.yaml \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
The substituted unit file: kube-proxy.service
Distribute the kube-proxy systemd unit file:
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
scp kube-proxy.service root@${node_name}:/etc/systemd/system/
done
Start the kube-proxy service
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /var/lib/kube-proxy"
ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done
- The working and log directories must be created first;
Check the startup result
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status kube-proxy|grep Active"
done
Make sure the status is active (running); otherwise check the logs to find out why:
journalctl -u kube-proxy
Check listening ports and metrics
[k8s@kube-node1 ~]$ sudo netstat -lnpt|grep kube-prox
tcp 0 0 172.27.129.105:10249 0.0.0.0:* LISTEN 16847/kube-proxy
tcp 0 0 172.27.129.105:10256 0.0.0.0:* LISTEN 16847/kube-proxy
- 10249: HTTP Prometheus metrics port (see the example below);
- 10256: HTTP healthz port;
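Both ports serve plain HTTP, so the metrics can be fetched directly (illustrative, using the node IP from the listing above):
$ # kube-proxy exposes Prometheus metrics on the metricsBindAddress port
$ curl -s http://172.27.129.105:10249/metrics | head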
Check the ipvs routing rules
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
done
Expected output:
>>> 172.27.129.105
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr persistent 10800
-> 172.27.129.105:6443 Masq 1 0 0
>>> 172.27.129.111
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr persistent 10800
-> 172.27.129.105:6443 Masq 1 0 0
>>> 172.27.129.112
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr persistent 10800
-> 172.27.129.105:6443 Masq 1 0 0
As shown, all requests to port 443 of the kubernetes cluster IP are forwarded to port 6443 of kube-apiserver;
