Part 12: Installing kubelet from Binaries


Reposted from: https://mp.weixin.qq.com/s?__biz=MzI1MDgwNzQ1MQ==&mid=2247483842&idx=1&sn=1ef1cb06ab98e86f9de595e117924db9&chksm=e9fdd436de8a5d20bf0625b61c3f4369d826691340d97caa3258d828cfc4e21302f9b80250c0&cur_album_id=1341273083637989377&scene=189#wechat_redirect

This is part 12 of the series on installing Kubernetes v1.17.0 from binaries. The kubelet component must run on every node (all of our nodes run Pods, including the masters), because kubelet is required for running Pods: it receives requests from kube-apiserver and manages the Pods, and the containers inside them, on its node. Recent kubelet versions also embed cAdvisor, which monitors container and node resources and periodically reports usage to the master.

Download https://dl.k8s.io/v1.17.0/kubernetes-node-linux-amd64.tar.gz. The archive contains kubernetes/node/bin/kubeadm, kubernetes/node/bin/kubelet, and other files; in this part we use kubeadm and kubelet. Copy kubeadm to /data/k8s/bin/ on the control machine, and distribute kubelet to /data/k8s/bin/ on every node.

Creating the kubelet bootstrap kubeconfig files

Here kubelet authenticates at startup with a bootstrap token, i.e. Bootstrap Token Auth.

On startup, kubelet checks whether the file named by its --kubeconfig flag exists. If it does not, kubelet uses the kubeconfig named by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver. kube-apiserver authenticates the token in the request; once the token is accepted, the request's user is set to system:bootstrap:<token-id> and its group to system:bootstrappers. This process is Bootstrap Token Auth. After the CSR is approved, kube-controller-manager issues a TLS client certificate and private key for the kubelet, and the file named by --kubeconfig is created; that file is what kubelet then uses to authenticate to kube-apiserver.
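The token itself has a fixed format: <token-id>.<token-secret>, six then sixteen lowercase alphanumeric characters, and the bootstrap user name is derived from the token id. A minimal sketch (the sample token below is made up for illustration):

```shell
# Bootstrap tokens have the form <token-id>.<token-secret>: 6, then 16,
# lowercase alphanumeric characters. This sample token is hypothetical.
TOKEN="abcdef.0123456789abcdef"
if [[ "${TOKEN}" =~ ^[a-z0-9]{6}\.[a-z0-9]{16}$ ]]; then
  echo "valid format; bootstrap user will be system:bootstrap:${TOKEN%%.*}"
fi
# prints: valid format; bootstrap user will be system:bootstrap:abcdef
```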

Notes:

  1. kube-controller-manager must be started with the --cluster-signing-cert-file and --cluster-signing-key-file flags, or it will not issue certificates and private keys for TLS bootstrap;
  2. the issued certificate and private key are stored in the directory named by kubelet's --cert-dir flag;
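For reference, the two signing flags would look roughly like this on the kube-controller-manager command line; the CA certificate path matches the one used below, but the key path is an assumption carried over from the earlier kube-controller-manager part of this series:

```
--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem
--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem
```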

Every node must talk to kube-apiserver: the kubelet on each node acts as the client, kube-apiserver as the server, and the client must be authenticated and authorized. Because each node is distinct, we create one kubelet bootstrap kubeconfig per node; on startup, kube-controller-manager then issues the certificate and private key, and the kubeconfig named by --kubeconfig is created for kubelet to use when talking to kube-apiserver. The script below creates a bootstrap kubeconfig for each node and distributes them:

#!/bin/bash

cd /data/k8s/work
source /data/k8s/bin/env.sh

for node_name in ${NODE_NAMES[@]}
do
    echo ">>> ${node_name}"
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
done

# Distribute
for node_name in ${NODE_NAMES[@]}
do
    echo ">>> ${node_name}"
    scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done

As shown above, what we write into the kubelet bootstrap kubeconfig is the token. A token created by kubeadm token create is valid for 24 hours by default; if no kubelet uses it to bootstrap within that window, it is cleaned up by kube-controller-manager. Once a kubelet has finished bootstrapping with it, kube-controller-manager issues that kubelet its own client and server certificates.
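As a quick sanity check, the expiry of a token created right now with the default 24-hour TTL can be computed locally (GNU date syntax assumed):

```shell
# A bootstrap token created now with kubeadm's default 24h TTL would expire
# at roughly this UTC timestamp.
EXPIRY=$(date -u -d "+24 hours" +"%Y-%m-%dT%H:%M:%SZ")
echo "token expires at ${EXPIRY}"
```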

View the tokens created by kubeadm:

kubeadm token list --kubeconfig ~/.kube/config

Granting CSR permissions to the bootstrap user and group

As described under Bootstrap Token Auth above, when kubelet starts, kube-apiserver sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers. That group must first be authorized to create CSRs, otherwise kubelet will fail to start:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
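The imperative command above is equivalent to applying a manifest like the following (a sketch of what kubectl create clusterrolebinding generates):

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-bootstrap
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
```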

Creating and distributing the kubelet configuration template

#!/bin/bash

cd /data/k8s/work
source /data/k8s/bin/env.sh

cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: systemd
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

# Distribute
for node_ip in ${NODE_IPS[@]}
do 
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
    scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
done
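The substitution the loop performs can be seen in isolation with a hypothetical two-line template (written to /tmp so it runs anywhere):

```shell
# Demonstrate the ##NODE_IP## placeholder substitution used above.
cat > /tmp/kubelet-demo.template <<'EOF'
address: "##NODE_IP##"
healthzBindAddress: "##NODE_IP##"
EOF
sed -e "s/##NODE_IP##/192.168.16.104/" /tmp/kubelet-demo.template
# prints both lines with the placeholder replaced by 192.168.16.104
```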

Configuration notes

A few fields in the template deserve attention: readOnlyPort: 0 disables the unauthenticated read-only port (10255); rotateCertificates and serverTLSBootstrap enable automatic rotation of the client certificate and TLS bootstrap of the serving certificate; anonymous access is disabled, authentication and authorization both go through webhooks to kube-apiserver, and x509 client certificates are verified against the cluster CA; registryPullQPS: 0 and eventRecordQPS: 0 remove the rate limits on image pulls and event recording; cgroupDriver: systemd must match the container runtime's cgroup driver.

Creating and distributing the systemd unit

#!/bin/bash

cd /data/k8s/work
source /data/k8s/bin/env.sh

cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/data/k8s/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=gcr.azk8s.cn/google_containers/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF

# Distribute
for node_name in ${NODE_NAMES[@]}
do 
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
    scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done

Flag notes

--bootstrap-kubeconfig points at the per-node bootstrap kubeconfig distributed earlier; after the CSR is approved, the issued certificate and key land in --cert-dir, and the file named by --kubeconfig is created for normal operation. --hostname-override should match the node name used when the bootstrap token was created (the token's group was system:bootstrappers:<node name>), and --pod-infra-container-image points the pause image at a mirror reachable from within China.

Starting the service

#!/bin/bash

cd /data/k8s/work
source /data/k8s/bin/env.sh

for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done

After the service starts, check the CSRs: they are all in Pending state and must be approved before certificates are issued.

kubectl get csr

Next, set up automatic approval.

Creating ClusterRoleBindings for automatic approval

#!/bin/bash

cd /data/k8s/work
cat > auto-csr-for-kubelet.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

kubectl apply -f auto-csr-for-kubelet.yaml

auto-approve-csrs-for-group automatically approves a node's first CSR (for the first CSR, the requesting group is system:bootstrappers);
node-client-cert-renewal automatically approves renewal of a node's expiring client certificate (by then the certificate's group is system:nodes);
node-server-cert-renewal automatically approves renewal of a node's expiring server certificate (group system:nodes);
For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests, so the initial ones must be approved manually:

kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
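The pipeline above works in three steps: list the CSRs, keep the Pending rows, and feed the names to kubectl certificate approve. The filtering stage can be exercised on canned `kubectl get csr`-style output (the sample rows are made up):

```shell
# Keep only Pending rows and print the NAME column, as the pipeline above does.
cat <<'EOF' | grep Pending | awk '{print $1}'
NAME        AGE   REQUESTOR              CONDITION
csr-2vsz9   10s   system:node:node01     Pending
csr-8xk2m   2m    system:node:master01   Approved,Issued
EOF
# prints: csr-2vsz9
```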

Verification

Process check

#!/bin/bash
source /data/k8s/bin/env.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kubelet.service|grep -i Active"
done

Result:
>>> 192.168.16.104
   Active: active (running) since Fri 2020-01-31 21:59:56 CST; 18h ago
>>> 192.168.16.105
   Active: active (running) since Sun 2019-12-29 19:03:26 CST; 1 months 3 days ago
>>> 192.168.16.106
   Active: active (running) since Sun 2019-12-29 19:03:02 CST; 1 months 3 days ago
>>> 192.168.16.107
   Active: active (running) since Sun 2019-12-29 19:03:04 CST; 1 months 3 days ago

CSR check

kubectl get csr

The CSRs that were just Pending have all been issued successfully.

Node status

[root@master01 ~]# kubectl get node
NAME               STATUS   ROLES    AGE   VERSION
master01.k8s.vip   Ready    <none>   33d   v1.17.0
master02           Ready    <none>   33d   v1.17.0
master03           Ready    <none>   33d   v1.17.0
node01             Ready    <none>   33d   v1.17.0
[root@master01 ~]#

Listening ports

#!/bin/bash
source /data/k8s/bin/env.sh
for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "netstat -antp |grep kubelet"
done

Result:
>>> 192.168.16.104
tcp 0      0 192.168.16.104:10248     0.0.0.0:* LISTEN 11217/kubelet
tcp 0      0 192.168.16.104:10250     0.0.0.0:* LISTEN 11217/kubelet
tcp 0      0 127.0.0.1:40471         0.0.0.0:* LISTEN 11217/kubelet
tcp 0      0 192.168.16.104:53896     192.168.16.253:8443      ESTABLISHED 11217/kubelet
>>> 192.168.16.105
tcp 0      0 127.0.0.1:43713         0.0.0.0:* LISTEN 28452/kubelet
tcp 0      0 192.168.16.105:10248     0.0.0.0:* LISTEN 28452/kubelet
tcp 0      0 192.168.16.105:10250     0.0.0.0:* LISTEN 28452/kubelet
tcp 0      0 192.168.16.105:59210     192.168.16.253:8443      ESTABLISHED 28452/kubelet
>>> 192.168.16.106
tcp 0      0 127.0.0.1:36795         0.0.0.0:* LISTEN 31918/kubelet
tcp 0      0 192.168.16.106:10248     0.0.0.0:* LISTEN 31918/kubelet
tcp 0      0 192.168.16.106:10250     0.0.0.0:* LISTEN 31918/kubelet
tcp 0      0 192.168.16.106:59192     192.168.16.253:8443      ESTABLISHED 31918/kubelet
>>> 192.168.16.107
tcp 0      0 192.168.16.107:10248     0.0.0.0:* LISTEN 24467/kubelet
tcp 0      0 192.168.16.107:10250     0.0.0.0:* LISTEN 24467/kubelet
tcp 0      0 127.0.0.1:38895         0.0.0.0:* LISTEN 24467/kubelet
tcp 0      0 192.168.16.107:51182     192.168.16.253:8443      ESTABLISHED 24467/kubelet

10248: the healthz HTTP endpoint;
10250: the HTTPS API; requests to this port require authentication and authorization;
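The listening ports can be pulled out of netstat output mechanically; here is the extraction on a canned sample (rows abridged from the output above):

```shell
# Print the local port of every LISTEN row (field 4 is local addr:port).
cat <<'EOF' | awk '/LISTEN/ {split($4, a, ":"); print a[2]}' | sort -n
tcp 0 0 192.168.16.104:10248 0.0.0.0:* LISTEN 11217/kubelet
tcp 0 0 192.168.16.104:10250 0.0.0.0:* LISTEN 11217/kubelet
tcp 0 0 127.0.0.1:40471 0.0.0.0:* LISTEN 11217/kubelet
EOF
# prints 10248, 10250, 40471 (one per line)
```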

Summary

Because the cluster has TLS authentication enabled, the kubelet on every node needs a valid certificate signed by the CA that kube-apiserver uses in order to communicate with kube-apiserver. As nodes multiply, signing a certificate for each one by hand becomes very tedious. TLS bootstrapping solves this: kubelet first connects to kube-apiserver as a predefined low-privilege user and submits a CSR, which kube-controller-manager then signs dynamically.

kubelet works by active polling: it periodically queries kube-apiserver for the tasks currently assigned to its node (such as creating a Pod), and then carries them out.

kubelet exposes two ports: 10248 serves healthz over HTTP, and 10250 serves the HTTPS API. There is also a read-only port, 10255, which is disabled here (readOnlyPort: 0).

