High-Availability Kubernetes Cluster - 7. Deploying kube-controller-manager


 9. Deploying kube-controller-manager

kube-controller-manager is one of the three kube-master services. It is stateful: it modifies the cluster's state information.

If this service were active on multiple master nodes at the same time, there would be synchronization and consistency problems, so across multiple master nodes the kube-controller-manager instances must run in an active/standby relationship. Kubernetes elects the leader with a lease lock (lease-lock); for kube-controller-manager specifically, this is enabled with the startup flag "--leader-elect=true".
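The active/standby behavior can be sketched with an ordinary file lock: just as only one controller-manager instance holds the apiserver-backed lease at a time, only one of the two competing processes below acquires the lock, and the loser stays on standby. This is only an illustration with `flock(1)` standing in for the lease; the real election goes through the Kubernetes API.

```shell
# Two "instances" race for one lock; whoever gets it first is the leader.
lock=/tmp/cm-leader-demo.lock

# Instance A acquires the lock and holds it while "working".
( flock -n 9 && echo "instance A: became leader"; sleep 1 ) 9>"$lock" &

sleep 0.2   # give A time to grab the lock

# Instance B tries non-blockingly, fails, and reports standby --
# analogous to a second kube-controller-manager waiting for the lease.
( flock -n 9 && echo "instance B: became leader" \
            || echo "instance B: standby" ) 9>"$lock"

wait        # let A finish and release the lock
```

Run as written, A reports "became leader" and B reports "standby"; once A exits and the lock is released, a restarted B would win it, which is exactly how a standby controller-manager takes over when the leader dies.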

1. Create the kube-controller-manager certificate

1) Create the kube-controller-manager certificate signing request

# kube-controller-manager communicates with kube-apiserver over mutual TLS authentication;
# kube-apiserver extracts the CN as the client username, i.e. system:kube-controller-manager. The ClusterRoleBinding system:kube-controller-manager, predefined in kube-apiserver's RBAC rules, binds the user system:kube-controller-manager to the ClusterRole system:kube-controller-manager
[root@kubenode1 ~]# mkdir -p /etc/kubernetes/controller-manager
[root@kubenode1 ~]# cd /etc/kubernetes/controller-manager/
[root@kubenode1 controller-manager]# touch controller-manager-csr.json
[root@kubenode1 controller-manager]# vim controller-manager-csr.json
{
    "CN": "system:kube-controller-manager",
    "hosts": [
      "172.30.200.21",
      "172.30.200.22",
      "172.30.200.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "ChengDu",
            "L": "ChengDu",
            "O": "system:kube-controller-manager",
            "OU": "cloudteam"
        }
    ]
}

2) Generate the kube-controller-manager certificate and private key

[root@kubenode1 controller-manager]# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes controller-manager-csr.json | cfssljson -bare controller-manager

# Distribute controller-manager.pem and controller-manager-key.pem
[root@kubenode1 controller-manager]# scp controller-manager*.pem root@172.30.200.22:/etc/kubernetes/controller-manager/
[root@kubenode1 controller-manager]# scp controller-manager*.pem root@172.30.200.23:/etc/kubernetes/controller-manager/
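Before distributing the certificate it is worth confirming that its subject really carries CN=system:kube-controller-manager, since that is the username kube-apiserver extracts. The check below runs on a throwaway self-signed certificate so it works anywhere; on the master node, point openssl at the real controller-manager.pem instead (the /tmp path and file names here are demo stand-ins, not part of the guide's layout).

```shell
# Generate a throwaway cert with the same subject fields as the CSR above
# (a stand-in for the cfssl-issued controller-manager.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/cm-check-key.pem -out /tmp/cm-check.pem \
  -subj "/C=CN/ST=ChengDu/L=ChengDu/O=system:kube-controller-manager/OU=cloudteam/CN=system:kube-controller-manager"

# Print the subject; the CN component is what kube-apiserver uses as the username.
openssl x509 -in /tmp/cm-check.pem -noout -subject
```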

2. Create the kube-controller-manager kubeconfig file

The kube-controller-manager kubeconfig file contains the master address and the required credentials.

# Configure the cluster parameters;
# --server: the kube-apiserver address; with HA in place this is the VIP;
# the cluster name is arbitrary, but must be used consistently once chosen;
# --kubeconfig: path and file name of the kubeconfig file; if unset, it defaults to ~/.kube/config
[root@kubenode1 controller-manager]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.30.200.10:6443 \
--kubeconfig=controller-manager.conf

# Configure the client authentication parameters;
# the authenticated user is "system:kube-controller-manager", as signed in the certificate above;
# specify the corresponding certificate and private key
[root@kubenode1 controller-manager]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/controller-manager/controller-manager.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/controller-manager/controller-manager-key.pem \
--kubeconfig=controller-manager.conf

# Configure the context parameters
[root@kubenode1 controller-manager]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=controller-manager.conf

# Set the default context
[root@kubenode1 controller-manager]# kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=controller-manager.conf

# Distribute the controller-manager.conf file to all master nodes;
[root@kubenode1 controller-manager]# scp controller-manager.conf root@172.30.200.22:/etc/kubernetes/controller-manager/
[root@kubenode1 controller-manager]# scp controller-manager.conf root@172.30.200.23:/etc/kubernetes/controller-manager/
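When all four steps succeed, controller-manager.conf is an ordinary kubeconfig YAML file. A minimal stand-in with the same shape is written below so the one property that matters before distribution can be checked anywhere: current-context must point at the context just created. On the master node itself, `kubectl config view --kubeconfig=controller-manager.conf` shows the real file.

```shell
# Minimal stand-in for the generated controller-manager.conf
# (certificate data omitted; the real file embeds it base64-encoded
# because of --embed-certs=true).
cat > /tmp/cm-demo-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://172.30.200.10:6443
users:
- name: system:kube-controller-manager
contexts:
- name: system:kube-controller-manager@kubernetes
  context:
    cluster: kubernetes
    user: system:kube-controller-manager
current-context: system:kube-controller-manager@kubernetes
EOF

# The service authenticates as whatever the default context selects:
grep '^current-context:' /tmp/cm-demo-kubeconfig
```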

3. Configure the kube-controller-manager systemd unit file

The required executable was already installed when kubectl was deployed.

# kube-controller-manager starts after kube-apiserver
[root@kubenode1 ~]# touch /usr/lib/systemd/system/kube-controller-manager.service
[root@kubenode1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=kube-apiserver.service

[Service]
EnvironmentFile=/usr/local/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

# Startup parameter file
# --kubeconfig: path to the kubeconfig file, which contains the master address and the required credentials;
# --allocate-node-cidrs: when true, CIDRs allocated by the cloud provider are used for Pods; generally only used on public clouds;
# --cluster-name: cluster name, defaults to kubernetes;
# --cluster-signing-cert-file / --cluster-signing-key-file: used for cluster-wide certificate signing;
# --service-account-private-key-file: path to the private key used to sign service account tokens;
# --root-ca-file: path to the root CA certificate, included in the service account token secrets
# --insecure-experimental-approve-all-kubelet-csrs-for-group: the group whose kubelet client certificate CSRs the controller-manager approves automatically
# --use-service-account-credentials: when true, a separate service account is used for each controller;
# --controllers: list of controllers to start; defaults to "*", which enables all controllers except "bootstrapsigner" and "tokencleaner";
# --leader-elect: when true, leader election is performed; in an HA deployment the controller-managers must elect a leader; defaults to true
[root@kubenode1 ~]# touch /usr/local/kubernetes/kube-controller-manager.conf
[root@kubenode1 ~]# vim /usr/local/kubernetes/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_ARGS="--master=https://172.30.200.10:6443 \
  --kubeconfig=/etc/kubernetes/controller-manager/controller-manager.conf \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=169.169.0.0/16 \
  --cluster-cidr=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --insecure-experimental-approve-all-kubelet-csrs-for-group=system:bootstrappers \
  --use-service-account-credentials=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --leader-elect=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes/controller-manager \
  --v=2"

# Create the log directory; with --logtostderr=false, glog writes its log files under --log-dir
[root@kubenode1 ~]# mkdir -p /var/log/kubernetes/controller-manager 
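Every path referenced by the flags above must exist before the service is enabled, otherwise kube-controller-manager exits immediately on startup. A small pre-flight loop like the one below can catch a forgotten scp; the /tmp paths are demo stand-ins so the sketch runs anywhere, while on a real master node the loop would list the /etc/kubernetes paths from the args file instead.

```shell
# Demo setup: two of the three expected files exist, one is "forgotten".
demo=/tmp/cm-preflight
mkdir -p "$demo"
touch "$demo/ca.pem" "$demo/ca-key.pem"

# Pre-flight check: report every missing file referenced by the flags.
missing=0
for f in "$demo/ca.pem" "$demo/ca-key.pem" "$demo/controller-manager.conf"; do
  if [ ! -f "$f" ]; then
    echo "missing: $f"
    missing=$((missing+1))
  fi
done
echo "$missing file(s) missing"
```

As set up here the loop reports exactly one missing file, the kubeconfig that was never copied over.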

4. Start and verify

1) Verify kube-controller-manager status

[root@kubenode1 ~]# systemctl daemon-reload
[root@kubenode1 ~]# systemctl enable kube-controller-manager
[root@kubenode1 ~]# systemctl start kube-controller-manager
[root@kubenode1 ~]# systemctl status kube-controller-manager

2) Check the kube-controller-manager leader election

# Since kubenode1 is the first node to start the kube-controller-manager service, it attempts to acquire leadership, and succeeds
[root@kubenode1 ~]# cat /var/log/kubernetes/controller-manager/kube-controller-manager.INFO | grep "leaderelection"

# Observed on kubenode2: kubenode2 attempts to acquire leadership but fails, and remains on standby
[root@kubenode2 ~]# tailf /var/log/kubernetes/controller-manager/kube-controller-manager.INFO
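Besides the logs, the current leader can be read from the API. On the Kubernetes versions this guide targets, the election record is kept as an annotation on the kube-controller-manager Endpoints object in kube-system (newer releases keep it in a Lease object instead), and its holderIdentity field names the winning node. This needs a running cluster, so it is shown only as a sketch:

```shell
# Query the leader-election record; holderIdentity should name kubenode1.
kubectl get endpoints kube-controller-manager -n kube-system \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
```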

