The previous two posts configured etcd and the flannel network; this post configures the k8s master cluster.
For the etcd cluster setup, see: Building a Multi-Master Kubernetes Cluster from Binaries [Part 1: Setting Up the etcd Cluster with TLS Certificates]
For the flannel network setup, see: Building a Multi-Master Kubernetes Cluster from Binaries [Part 2: Configuring the flannel Network]
This post deploys the k8s master cluster on the following hosts:
k8s-master1:192.168.80.7
k8s-master2:192.168.80.8
k8s-master3:192.168.80.9
Configuring the Kubernetes master cluster
A Kubernetes master node runs the following components:
- kube-apiserver
- kube-scheduler
- kube-controller-manager
Currently these three components need to be deployed on the same machines:
- kube-scheduler, kube-controller-manager, and kube-apiserver are closely related in function;
- only one kube-scheduler and one kube-controller-manager process may be active at a time; if several instances run, a leader is chosen by election and the others stand by.
I. Deploying the kubectl command-line tool
kubectl is the command-line management tool for a Kubernetes cluster; this section covers installing and configuring it.
By default kubectl reads the kube-apiserver address, certificates, user name, and other settings from the ~/.kube/config file; without this file, kubectl commands may fail.
~/.kube/config only needs to be generated once and can then be copied to the other masters.
1. Download kubectl
wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler /usr/local/bin
2. Create the certificate signing request
[root@k8s-master1 ssl]# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF
- O is system:masters: when kube-apiserver receives this certificate, it sets the request's Group to system:masters;
- the predefined ClusterRoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants access to all APIs;
- this certificate is only used by kubectl as a client certificate, so the hosts field is empty.
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
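To confirm the certificate was issued as intended, you can inspect it with openssl (an optional sanity check; it assumes cfssljson wrote admin.pem to the current directory):

# Print the subject and validity window of the new client certificate
openssl x509 -in admin.pem -noout -subject -dates

The subject should contain O=system:masters and CN=admin.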
3. Create the ~/.kube/config file
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kubectl.kubeconfig

# Set the client credentials
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# Set the context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig

# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
4. Distribute the ~/.kube/config file
[root@k8s-master1 temp]# cp kubectl.kubeconfig ~/.kube/config
[root@k8s-master1 temp]# scp kubectl.kubeconfig k8s-master2:~/.kube/config
kubectl.kubeconfig                         100% 6285     2.2MB/s   00:00
[root@k8s-master1 temp]# scp kubectl.kubeconfig k8s-master3:~/.kube/config
kubectl.kubeconfig
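Optionally, verify on each master that the file is intact. kubectl only reads the local file here, so this works even before kube-apiserver is running:

# Show the cluster, user, and context entries; embedded certificate data is redacted
kubectl config view --kubeconfig ~/.kube/config

The output should show the kubernetes cluster pointing at https://114.67.81.105:8443 and the admin user.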
II. Deploying kube-apiserver
1. Create the certificate signing request for kube-apiserver:
[root@k8s-master1 ssl]# cat > kubernetes-csr.json <<EOF
{ "CN": "kubernetes", "hosts": [ "127.0.0.1", "192.168.80.7", "192.168.80.8", "192.168.80.9", "192.168.80.13", "114.67.81.105", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "4Paradigm" } ] }
EOF
- the hosts field lists the IPs and domain names authorized to use this certificate; here it includes the VIP, the apiserver node IPs, the kubernetes service IP, and the service domain names;
- a domain name must not end in a "." (e.g. kubernetes.default.svc.cluster.local. is invalid), otherwise certificate parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
- if you use a domain other than cluster.local, e.g. bqding.com, change the last two domain names in the list to kubernetes.default.svc.bqding and kubernetes.default.svc.bqding.com;
- the entries 192.168.80.7, 192.168.80.8, and 192.168.80.9 are the master node IPs; 192.168.80.13 and 114.67.81.105 are the load balancer's internal and public IPs.
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
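Before distributing the certificate, it is worth confirming that every IP and domain name made it into the SAN list (an optional openssl check):

# List the Subject Alternative Names embedded in the apiserver certificate
openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'

All entries from the hosts field, including the VIP 114.67.81.105, should appear; a missing entry will cause TLS verification errors later.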
2. Copy the generated certificate and private key to all master nodes:
[root@k8s-master1 ssl]# cp kubernetes*.pem /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kubernetes*.pem k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kubernetes*.pem k8s-master3:/etc/kubernetes/cert/
3. Create the encryption config file
[root@k8s-master1 ssl]# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF
4. Distribute the encryption config file to the master nodes
[root@k8s-master1 ssl]# cp encryption-config.yaml /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp encryption-config.yaml k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp encryption-config.yaml k8s-master3:/etc/kubernetes/cert/
5. Create the kube-apiserver systemd unit file
[root@k8s-master1 ssl]# cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/etc/kubernetes/cert/encryption-config.yaml \
  --advertise-address=192.168.80.7 \
  --bind-address=192.168.80.7 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32700 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://192.168.80.4:2379,https://192.168.80.5:2379,https://192.168.80.6:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
- --experimental-encryption-provider-config: enables encryption of secrets at rest;
- --authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes and rejects unauthorized requests;
- --enable-admission-plugins: enables the ServiceAccount and NodeRestriction admission plugins (among others);
- --service-account-key-file: public key file for verifying ServiceAccount tokens; it must pair with the private key that kube-controller-manager's --service-account-private-key-file points to;
- --tls-*-file: the certificate, private key, and CA files used by the apiserver; --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
- --kubelet-client-certificate, --kubelet-client-key: if specified, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the certificate's user (the kubernetes*.pem certificate above uses the user kubernetes), otherwise calls to the kubelet API fail as unauthorized (see step 10 below);
- --bind-address: must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from outside;
- --insecure-port=0: disables the insecure port (8080);
- --service-cluster-ip-range: the Service cluster IP range;
- --service-node-port-range: the port range for NodePort services;
- --runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
- --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
- --apiserver-count=3: declares the number of kube-apiserver instances running in the cluster;
- --advertise-address and --bind-address are host-specific: change them to each master's own IP.
6. Distribute kube-apiserver.service to the other masters
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-apiserver.service k8s-master2:/etc/systemd/system/kube-apiserver.service
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-apiserver.service k8s-master3:/etc/systemd/system/kube-apiserver.service
7. Create the log directory
mkdir -p /var/log/kubernetes
8. Start the kube-apiserver service
[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl enable kube-apiserver
[root@k8s-master1 ssl]# systemctl start kube-apiserver
9. Check kube-apiserver and the cluster status
[root@k8s-master1 ssl]# netstat -ptln | grep kube-apiserve
tcp        0      0 192.168.80.9:6443       0.0.0.0:*          LISTEN      22348/kube-apiserve
[root@k8s-master1 ssl]# kubectl cluster-info
Kubernetes master is running at https://114.67.81.105:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
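You can also probe the secure port directly with curl. A quick sketch, reusing the admin client certificate from section I (it assumes admin.pem and admin-key.pem are still in the working directory):

# Query the apiserver health endpoint over TLS via the load balancer
curl --cacert /etc/kubernetes/cert/ca.pem \
  --cert admin.pem --key admin-key.pem \
  https://114.67.81.105:8443/healthz

A healthy apiserver answers with ok.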
10. Grant the kubernetes certificate access to the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
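With --experimental-encryption-provider-config enabled, you can also verify that secrets really are encrypted at rest. A hedged sketch: the test-secret name is made up for illustration, and the client certificate paths are the ones the apiserver itself uses against etcd (see the unit file above):

# Create a throwaway secret, then read its raw value straight out of etcd
kubectl create secret generic test-secret --from-literal=foo=bar
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.80.4:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/kubernetes/cert/kubernetes.pem \
  --key=/etc/kubernetes/cert/kubernetes-key.pem \
  get /registry/secrets/default/test-secret | hexdump -C | head

The stored value should carry the k8s:enc:aescbc:v1:key1 prefix instead of readable plaintext; remove the secret afterwards with kubectl delete secret test-secret.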
III. Deploying kube-controller-manager
To secure communication, this document first generates an x509 certificate and private key. kube-controller-manager uses this certificate in two situations:
- when communicating with kube-apiserver's secure port;
- when serving prometheus-format metrics on its secure port (https, 10257);
1. Create the certificate signing request for kube-controller-manager:
[root@k8s-master1 ssl]# cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.80.7",
    "192.168.80.8",
    "192.168.80.9"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "4Paradigm"
    }
  ]
}
EOF
- the hosts list contains all kube-controller-manager node IPs;
- CN is system:kube-controller-manager and O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2. Distribute the generated certificate and private key to all master nodes
[root@k8s-master1 ssl]# cp kube-controller-manager*.pem /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager*.pem k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager*.pem k8s-master3:/etc/kubernetes/cert/
3. Create and distribute the kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Distribute kube-controller-manager.kubeconfig to all master nodes:
[root@k8s-master1 ssl]# cp kube-controller-manager.kubeconfig /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager.kubeconfig k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager.kubeconfig k8s-master3:/etc/kubernetes/cert/
4. Create and distribute the kube-controller-manager systemd unit file
[root@k8s-master1 ssl]# cat > /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
- --port=0: would disable the insecure http port used for /metrics; when set, --address has no effect and --bind-address takes effect (not set in the unit above);
- --secure-port=10252, --bind-address=0.0.0.0: would serve https /metrics on port 10252 on all interfaces (not set above; the default secure port 10257 applies, as the netstat check below shows);
- --address: the insecure listen address, set here to 127.0.0.1;
- --kubeconfig: path to the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
- --cluster-signing-*-file: the CA certificate and key used to sign certificates created via TLS Bootstrap;
- --experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates;
- --root-ca-file: the CA certificate placed into containers' ServiceAccounts, used to verify kube-apiserver's serving certificate;
- --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key given to kube-apiserver via --service-account-key-file;
- --service-cluster-ip-range: the Service cluster IP range; must match the same flag on kube-apiserver;
- --leader-elect=true: enables leader election; the elected leader does the work while the other instances stand by;
- --feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
- --controllers=*,bootstrapsigner,tokencleaner: the list of controllers to enable; tokencleaner automatically removes expired bootstrap tokens;
- --horizontal-pod-autoscaler-*: custom-metrics-related flags; enable support for autoscaling/v2alpha1;
- --tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
- --use-service-account-credentials=true: each controller uses a separate ServiceAccount credential when talking to kube-apiserver.
Distribute the kube-controller-manager systemd unit file:
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-controller-manager.service k8s-master2:/etc/systemd/system/kube-controller-manager.service
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-controller-manager.service k8s-master3:/etc/systemd/system/kube-controller-manager.service
5. Start the kube-controller-manager service
[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl enable kube-controller-manager
[root@k8s-master1 ssl]# systemctl start kube-controller-manager
6. Check the kube-controller-manager service
[root@k8s-master1 ssl]# netstat -lnpt|grep kube-controll
tcp        0      0 127.0.0.1:10252         0.0.0.0:*          LISTEN      17906/kube-controll
tcp6       0      0 :::10257                :::*               LISTEN      17906/kube-controll
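With the ports confirmed, a quick local spot-check of the metrics endpoint (the http port 10252 is bound to 127.0.0.1 per --address, so run this on the master itself):

# Fetch the first few prometheus-format metrics over plain http
curl -s http://127.0.0.1:10252/metrics | head -n 5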
7. Check the current kube-controller-manager leader
[root@k8s-master1 ssl]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master3_d19698f1-0379-11e9-9c06-fa163e0a2feb","leaseDurationSeconds":15,"acquireTime":"2018-12-19T10:40:15Z","renewTime":"2018-12-19T11:12:43Z","leaderTransitions":5}'
  creationTimestamp: 2018-12-19T08:53:45Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "9860"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 97ef4bad-036b-11e9-90aa-fa163e5caede
As shown, the current leader is the k8s-master3 node.
IV. Deploying kube-scheduler
The cluster contains three scheduler nodes. After startup, a leader is chosen by election and the other nodes stand by. When the leader becomes unavailable, the remaining nodes elect a new one, keeping the service available.
To secure communication, this document first generates an x509 certificate and private key. kube-scheduler uses this certificate in two situations:
- when communicating with kube-apiserver's secure port;
- when serving prometheus-format metrics on port 10251 (plain http; this kube-scheduler version does not yet serve https, see the flag notes below);
1. Create the certificate signing request for kube-scheduler
[root@k8s-master1 ssl]# cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.80.7",
    "192.168.80.8",
    "192.168.80.9"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "4Paradigm"
    }
  ]
}
EOF
- the hosts list contains all kube-scheduler node IPs;
- CN is system:kube-scheduler and O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2. Create and distribute the kube-scheduler.kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
- the certificate and private key created in the previous step, along with the kube-apiserver address, are embedded into the kubeconfig file;
Distribute the kubeconfig to all master nodes:
[root@k8s-master1 ssl]# cp kube-scheduler.kubeconfig /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-scheduler.kubeconfig k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-scheduler.kubeconfig k8s-master3:/etc/kubernetes/cert/
3. Create and distribute the kube-scheduler systemd unit file
[root@k8s-master1 ssl]# cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
- --address: serve http /metrics on 127.0.0.1:10251; this version of kube-scheduler does not yet accept https requests;
- --kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
- --leader-elect=true: enables leader election; the elected leader does the work while the other instances stand by.
Distribute the systemd unit file to all master nodes:
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-scheduler.service k8s-master2:/etc/systemd/system/kube-scheduler.service
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-scheduler.service k8s-master3:/etc/systemd/system/kube-scheduler.service
4. Start the kube-scheduler service
[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl enable kube-scheduler
[root@k8s-master1 ssl]# systemctl start kube-scheduler
5. Check the kube-scheduler listening port
[root@k8s-master1 ssl]# netstat -lnpt|grep kube-sche
tcp        0      0 127.0.0.1:10251         0.0.0.0:*          LISTEN      17921/kube-schedule
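As with kube-controller-manager, the port can be spot-checked locally (run on the master, since --address binds it to 127.0.0.1):

# The scheduler answers ok on its health endpoint
curl -s http://127.0.0.1:10251/healthz

# and serves prometheus-format metrics on the same port
curl -s http://127.0.0.1:10251/metrics | head -n 5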
6. Check the current kube-scheduler leader
[root@k8s-master1 ssl]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master1_d41f4473-0379-11e9-a19b-fa163e0a2feb","leaseDurationSeconds":15,"acquireTime":"2018-12-19T10:38:27Z","renewTime":"2018-12-19T11:14:06Z","leaderTransitions":2}'
  creationTimestamp: 2018-12-19T09:10:56Z
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "9961"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: fe267870-036d-11e9-90aa-fa163e5caede
As shown, the current leader is the k8s-master1 node.
V. Verify that everything works on all master nodes
[root@k8s-master1 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
VI. HAProxy + keepalived for a highly available k8s master (perform on every master, substituting each host's own values)
- keepalived provides the VIP through which kube-apiserver is exposed;
- haproxy listens on the VIP with all kube-apiserver instances as backends, providing health checks and load balancing;
The nodes running keepalived and haproxy are called LB nodes. Because keepalived runs in a one-master, multi-backup mode, at least two LB nodes are required.
This document reuses the three master machines. The port haproxy listens on (8443) must differ from kube-apiserver's port 6443 to avoid a conflict.
keepalived periodically checks the local haproxy process; if it detects that haproxy has failed, it triggers a re-election and the VIP floats to the newly elected master, keeping the VIP highly available.
All components (kubectl, apiserver, controller-manager, scheduler, etc.) access the kube-apiserver service through the VIP on haproxy's port 8443.
1. Install haproxy and keepalived
yum install -y keepalived haproxy
2. Configure haproxy on all three masters to proxy the kube-apiserver service
[root@k8s-master1 ~]# cat /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m

listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance roundrobin
    server 192.168.80.7 192.168.80.7:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.80.8 192.168.80.8:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.80.9 192.168.80.9:6443 check inter 2000 fall 2 rise 2 weight 1
- haproxy exposes its status page on port 10080;
- haproxy listens on port 8443 on all interfaces; this port must match the one given in the ${KUBE_APISERVER} environment variable (the https://VIP:8443 address used in the kubeconfig files above);
- the server lines list the IPs and ports of all kube-apiserver instances.
3. Configure keepalived on all three masters
[root@k8s-master1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id lb-master-105
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 3
}

vrrp_instance VI-kube-master {
    state BACKUP
    nopreempt                # disable preemption; set only on the BACKUP node with the highest priority
    priority 120
    dont_track_primary
    interface ens192
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        114.67.81.105        # the VIP; kube-apiserver is reached through this address
    }
}
- the killall -0 haproxy command checks whether the haproxy process on the node is alive (see the sketch after this list);
- router_id and virtual_router_id identify the keepalived instances belonging to this HA group; if multiple keepalived HA groups exist, each group's values must be distinct;
- on the other two BACKUP nodes, remove nopreempt and set priority to 110 and 100 respectively.
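The check script works because killall -0 sends signal 0, which tests for the process's existence without actually signalling it. A minimal illustration of what keepalived evaluates every 3 seconds:

# Exit status 0 while haproxy is running...
killall -0 haproxy; echo $?
# ...and non-zero once it is gone, which marks the vrrp_script as failed
systemctl stop haproxy
killall -0 haproxy; echo $?
systemctl start haproxy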
4. Start the haproxy and keepalived services
#haproxy
systemctl enable haproxy
systemctl start haproxy
#keepalived
systemctl enable keepalived
systemctl start keepalived
5. Check the haproxy and keepalived service status and the VIP
systemctl status haproxy|grep Active
systemctl status keepalived|grep Active
If the status shows Active: active (running), the service is healthy.
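Beyond the Active check, haproxy's admin socket (enabled by the stats socket line in the config above) can report per-backend health. A sketch, assuming socat is installed:

# Dump backend status; each kube-master server line should report UP
echo "show stat" | socat unix-connect:/var/run/haproxy-admin.sock stdio | cut -d, -f1,2,18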
6. Check which node holds the VIP
ip addr show | grep 114.67.81.105
Here the VIP is on 192.168.80.7.
To verify that high availability works, stop the haproxy service on 192.168.80.7; the VIP then floats to 192.168.80.8. When 192.168.80.7 is fixed and restarted, it does not take the VIP back, because it is configured with nopreempt. A test sketch follows below.
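A minimal failover drill along those lines (run each command on the host named in the comment; the interface name ens192 matches the keepalived config above):

# On 192.168.80.7 (current VIP holder): simulate a haproxy failure
systemctl stop haproxy

# On 192.168.80.8: within a few advert intervals the VIP should appear
ip addr show ens192 | grep 114.67.81.105

# Back on 192.168.80.7: recover haproxy; with nopreempt set, the VIP stays on .8
systemctl start haproxy
ip addr show ens192 | grep 114.67.81.105    # expect no output now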
Note: if the cluster is built on a public cloud, you can use the cloud provider's SLB service for this HA layer instead; haproxy+keepalived may not work there, because the cloud's underlying network typically blocks the VRRP traffic keepalived relies on.
The next post covers deploying the node components; see: Building a Multi-Master Kubernetes Cluster from Binaries [Part 4: Configuring the k8s Nodes].