Summary:
1. Kube-scheduler runs as a component on the master node. Its main task is to take the unscheduled pods it obtains from kube-apiserver, run them through a series of scheduling algorithms to find the most suitable node, and finally complete the scheduling by writing a Binding object (which records the pod name and the chosen node name) back to kube-apiserver.
2. Like kube-controller-manager, kube-scheduler uses leader election when deployed for high availability. After startup, the instances compete in an election to produce one leader; the remaining instances block. When the leader becomes unavailable, the remaining instances hold a new election and produce a new leader, which keeps the service available.
In short: kube-scheduler is responsible for scheduling pods onto the cluster's nodes. It watches kube-apiserver, finds pods that have not yet been assigned a node, and assigns nodes to them according to the scheduling policy.
1) Create the kube-scheduler certificate signing request
This certificate is used by kube-scheduler to connect to the apiserver, and it also serves the scheduler's own secure port 10259.
[root@k8s-master01 ~]# vim /opt/k8s/certs/kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "10.10.0.18",
    "10.10.0.19",
    "10.10.0.20"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
2) Generate the kube-scheduler certificate and private key
[root@k8s-master01 ~]# cd /opt/k8s/certs/
[root@k8s-master01 certs]# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/opt/k8s/certs/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2019/04/24 16:08:38 [INFO] generate received request
2019/04/24 16:08:38 [INFO] received CSR
2019/04/24 16:08:38 [INFO] generating key: rsa-2048
2019/04/24 16:08:38 [INFO] encoded CSR
2019/04/24 16:08:38 [INFO] signed certificate with serial number 288219277582790216633679349308422764913188390208
2019/04/24 16:08:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
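The CN and O fields in the subject are what the apiserver's RBAC authorizer sees as the user and group, so it is worth confirming they came out as `system:kube-scheduler`. The real check is simply `openssl x8509`-style inspection of `kube-scheduler.pem`; since that file may not exist where you read this, the sketch below generates a throwaway self-signed certificate with the same subject (an assumption, purely a stand-in) and inspects it the same way:

```shell
# Sketch: verify the subject fields cfssl wrote into the certificate.
# Against the real file you would run:
#   openssl x509 -in kube-scheduler.pem -noout -subject
# The throwaway cert below only stands in for kube-scheduler.pem.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem -days 1 \
  -subj "/C=CN/ST=ShangHai/L=ShangHai/O=system:kube-scheduler/OU=System/CN=system:kube-scheduler" \
  2>/dev/null
# The subject line should show O and CN as system:kube-scheduler
openssl x509 -in /tmp/demo.pem -noout -subject
```

The WARNING printed by cfssl is cosmetic for a client certificate; what matters for RBAC is the subject shown here.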
3) Inspect the certificate files
[root@k8s-master01 certs]# ll kube-scheduler*
-rw-r--r-- 1 root root 1131 Apr 24 16:11 kube-scheduler.csr
-rw-r--r-- 1 root root  345 Apr 24 16:03 kube-scheduler-csr.json
-rw------- 1 root root 1679 Apr 24 16:11 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1505 Apr 24 16:11 kube-scheduler.pem
4) Distribute the certificates
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/certs/kube-scheduler-key.pem dest=/etc/kubernetes/ssl/'
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/certs/kube-scheduler.pem dest=/etc/kubernetes/ssl/'
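A quick way to confirm the copies landed intact is to compare checksums across the masters, e.g. `ansible k8s-master -m shell -a 'sha256sum /etc/kubernetes/ssl/kube-scheduler.pem'` and check that every host reports the same hash. The sketch below shows the comparison itself, with local files standing in for the three masters (the file names are assumptions for the demo):

```shell
# Sketch: distributed copies are intact when all checksums are identical.
# /tmp/master0*.pem stand in for the copy on each master node.
echo "demo certificate data" > /tmp/master01.pem
cp /tmp/master01.pem /tmp/master02.pem
cp /tmp/master01.pem /tmp/master03.pem
# Count unique hashes: 1 means all three copies match
sha256sum /tmp/master01.pem /tmp/master02.pem /tmp/master03.pem \
  | awk '{print $1}' | sort -u | wc -l
```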
5) Generate the kube-scheduler.kubeconfig configuration file
1. This configuration is required for kube-scheduler to open its secure port with RBAC authentication.
2. The kube-scheduler kubeconfig contains the master address together with the certificate and private key created in the previous step.
## Set cluster parameters
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.

## Set client credentials
[root@k8s-master01 ~]# kubectl config set-credentials "system:kube-scheduler" \
  --client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
  --client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.

## Set context parameters
[root@k8s-master01 ~]# kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler@kubernetes" created.

## Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler@kubernetes".

## Distribute the configuration file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/root/kube-scheduler.kubeconfig dest=/etc/kubernetes/config/'
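The four `kubectl config` commands above assemble a single file. For orientation, the resulting kube-scheduler.kubeconfig has roughly the shape sketched below (certificate data elided; because of `--embed-certs=true` the real file embeds base64 blobs rather than file paths). The sample is written to /tmp only so the structure can be checked:

```shell
# Sketch of the kubeconfig structure produced by the commands above.
# <base64 ...> placeholders stand in for the embedded certificate data.
cat > /tmp/kube-scheduler.kubeconfig.sample <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 ca.pem>
    server: https://127.0.0.1:6443
  name: kubernetes
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: <base64 kube-scheduler.pem>
    client-key-data: <base64 kube-scheduler-key.pem>
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
EOF
# use-context is what sets this line; verify it points at the right context
grep '^current-context' /tmp/kube-scheduler.kubeconfig.sample
```

On the real file, `kubectl config view --kubeconfig=kube-scheduler.kubeconfig` shows the same structure with the blobs redacted.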
6) Edit the kube-scheduler core configuration file
Like kube-controller-manager, kube-scheduler binds its insecure port to localhost and exposes only the secure port externally.
[root@k8s-master01 ~]# vim /opt/k8s/cfg/kube-scheduler.conf
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--address=127.0.0.1 \
  --authentication-kubeconfig=/etc/kubernetes/config/kube-scheduler.kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/config/kube-scheduler.kubeconfig \
  --bind-address=0.0.0.0 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubeconfig=/etc/kubernetes/config/kube-scheduler.kubeconfig \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --secure-port=10259 \
  --leader-elect=true \
  --port=10251 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-scheduler.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-scheduler-key.pem \
  --v=2"

## Distribute the configuration file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/cfg/kube-scheduler.conf dest=/etc/kubernetes/config'
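Since the whole flag list lives in one quoted KUBE_SCHEDULER_ARGS value, a missing line-continuation backslash or an unbalanced quote silently truncates the arguments the scheduler receives. A quick local sanity check is to source the file and count the words that survive; the sketch below uses an abbreviated stand-in flag list (an assumption, not the full file above):

```shell
# Sketch: catch quoting mistakes by sourcing the conf file and counting
# the surviving flags. The short flag list is a stand-in for the real one.
cat > /tmp/kube-scheduler.conf <<'EOF'
KUBE_SCHEDULER_ARGS="--address=127.0.0.1 \
  --secure-port=10259 \
  --port=10251 \
  --leader-elect=true \
  --v=2"
EOF
. /tmp/kube-scheduler.conf
# 5 flags in, so 5 words should come out; fewer means a line was lost
echo "$KUBE_SCHEDULER_ARGS" | wc -w
```

This is only an approximation of how systemd reads the file (EnvironmentFile has its own parser), but it reliably flags broken quoting before the service is restarted.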
7) Startup script
The unit must specify the path of the configuration file to load.
[root@k8s-master01 ~]# vim /opt/k8s/unit/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config/kube-scheduler.conf
User=kube
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

## Distribute the unit file
[root@k8s-master01 ~]# ansible k8s-master -m copy -a 'src=/opt/k8s/unit/kube-scheduler.service dest=/usr/lib/systemd/system/'
8) Start the service
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl daemon-reload'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl enable kube-scheduler.service'
[root@k8s-master01 ~]# ansible k8s-master -m shell -a 'systemctl start kube-scheduler.service'
9) Verify the leader host
[root@k8s-master01 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master03_e0e29681-666b-11e9-b086-000c2920229d","leaseDurationSeconds":15,"acquireTime":"2019-04-24T08:35:14Z","renewTime":"2019-04-24T08:36:08Z","leaderTransitions":0}'
  creationTimestamp: "2019-04-24T08:35:14Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "11238"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: e17d5eee-666b-11e9-bdea-000c2920229d

## master03 is the leader host
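Reading the leader out of the full YAML dump works, but the annotation is plain JSON, so the holderIdentity field can be pulled out directly. A minimal sketch, using the sample annotation copied from the output above (on a live cluster the annotation would come from the same `kubectl get endpoints kube-scheduler -n kube-system` object):

```shell
# Sketch: extract the leader's node name from the leader-election annotation.
# The annotation value is copied verbatim from the output above.
annotation='{"holderIdentity":"k8s-master03_e0e29681-666b-11e9-b086-000c2920229d","leaseDurationSeconds":15,"acquireTime":"2019-04-24T08:35:14Z","renewTime":"2019-04-24T08:36:08Z","leaderTransitions":0}'
# Pull out the holderIdentity value with a basic sed capture
leader=$(printf '%s\n' "$annotation" | sed 's/.*"holderIdentity":"\([^"]*\)".*/\1/')
# holderIdentity is "<node>_<uid>"; keep only the node name
echo "${leader%%_*}"    # → k8s-master03
```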
10) Verify the status of the master cluster
Running the following command on any of the three nodes should return the cluster status.
[root@k8s-master02 config]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}