5. Deploying a Kubernetes (k8s) Cluster from Binaries


1 Kubernetes Components

1.1 Kubernetes Cluster Diagram

Official cluster architecture diagram (image omitted).

1.2 Components and Their Functions

1.2.1 Control Plane Components

The control plane components make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events.
For example, when it detects that a Deployment's replicas field is not satisfied, it starts a new Pod.

kube-apiserver

The k8s API Server exposes HTTP REST interfaces for creating, deleting, updating, querying and watching every kind of k8s resource object (Pod, RC, Service, etc.). It is the data bus and data hub of the whole system.
Functions of the Kubernetes API Server:

  • Provides the REST API for cluster management (including authentication/authorization, data validation, and cluster state changes);
  • Acts as the hub for data exchange and communication between the other components (other components query or modify data through the API Server; only the API Server talks to etcd directly);
  • Serves as the entry point for resource quota control;
  • Provides a complete cluster security mechanism.
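Since every interaction with the cluster goes through these REST interfaces, they can be exercised directly. A minimal sketch (standard kubectl/apiserver usage, shown only as an illustration and not part of the kubeasz deployment steps below):

kubectl proxy --port=8001 &                                # start an authenticated local proxy to the apiserver
curl http://127.0.0.1:8001/version                         # query the cluster version over plain REST
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods  # the same Pod list that kubectl get pods reads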

etcd

etcd is a consistent and highly available key-value store used as the backing database for all Kubernetes cluster data.
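All Kubernetes objects end up as keys under the /registry prefix in etcd, so the stored data can be inspected with etcdctl. A hedged illustration (the endpoint and certificate paths match the layout used by this deployment later on):

ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.14:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem \
  get /registry/pods --prefix --keys-only | head    # list the keys of the stored Pod objects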

kube-scheduler

From all the nodes in the cluster, the scheduler uses its scheduling algorithm to select the nodes that are able to run the Pod, and then picks the best node among them as the final result.
The scheduler runs on the master nodes. Its core job is to watch the apiserver for Pods whose PodSpec.NodeName is empty, create a binding for each such Pod that states which node it should run on, and write the scheduling result back to the apiserver.
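The scheduling result is recorded in the Pod's spec.nodeName field, so placement can be checked directly (a small example; net-test1 is the test Pod created in section 3 below):

kubectl get pod net-test1 -o jsonpath='{.spec.nodeName}'   # the node the scheduler bound the Pod to
kubectl get pod net-test1 -o wide                          # same information in table form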

kube-controller-manager

As the management and control center inside the cluster, it is responsible for managing Nodes, Pod replicas, service Endpoints, Namespaces, ServiceAccounts and ResourceQuotas. When a Node unexpectedly goes down, the Controller Manager detects it promptly and runs the automated repair flow, keeping the cluster in its desired state.

cloud-controller-manager

This is the key component that connects Kubernetes to the capabilities offered by cloud providers, also known as the Kubernetes cloud provider. With it, creating a Service of type LoadBalancer can, for example, automatically create an Alibaba Cloud SLB for the user, dynamically attach and detach the SLB backends, and offer rich configuration options for customizing the generated LoadBalancer.
If you run Kubernetes in your own environment, or as a learning environment on a local machine, this component is not needed.

1.2.2 Node Components

Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.

kubelet

An agent that runs on every node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.

kube-proxy

kube-proxy is a network proxy that runs on every node in the cluster and implements part of the Kubernetes Service concept.
kube-proxy maintains network rules on the node. These rules allow network sessions from inside or outside the cluster to communicate with Pods.
If the operating system provides a packet filtering layer and it is available, kube-proxy uses it to implement the rules; otherwise, kube-proxy forwards the traffic itself.
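On a running node, the rules that kube-proxy has programmed can be inspected directly (a hedged illustration; which command is relevant depends on the proxy mode, and 10249 is kube-proxy's default metrics port):

curl -s 127.0.0.1:10249/proxyMode        # kube-proxy reports the active mode (iptables or ipvs)
iptables-save | grep KUBE-SVC | head     # Service chains when running in iptables mode
ipvsadm -Ln | head                       # virtual servers when running in ipvs mode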

Container runtime

The container runtime is the software responsible for running containers.
Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).

1.2.3 Addons

Addons use Kubernetes resources (DaemonSet, Deployment, etc.) to implement cluster features. Because they provide cluster-level functionality, namespaced addon resources belong to the kube-system namespace.

DNS

Although the other addons are not strictly required, almost every Kubernetes cluster should have cluster DNS, because many examples depend on it.
Cluster DNS is a DNS server that works alongside the other DNS servers in your environment and serves DNS records for Kubernetes Services.
Containers started by Kubernetes automatically include this DNS server in their DNS search list.
Example: CoreDNS.

Web UI (Dashboard)

Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It provides simple cluster management and configuration as well as a view of the cluster's running state.

Container Resource Monitoring

Container resource monitoring records common time-series metrics about containers in a central database and provides a UI for browsing that data.

Cluster-level Logging

A cluster-level logging mechanism saves container log data to a central log store that offers search and browsing interfaces.

2 The Kubernetes Pod Creation Workflow

  1. kubectl sends a create-pod request to the k8s api server (i.e. we run a kubectl create pod command).

  2. After receiving the pod creation request, the k8s api server does not create the pod directly;
    instead it generates a manifest (yaml) containing the creation information.

  3. The apiserver writes that yaml into the etcd database.
    At this point there is only a new record in etcd; nothing substantial has happened yet.

  4. The scheduler watches the k8s api, which works like a notification mechanism.
    It first checks: pod.spec.Node == null? If it is null, this Pod request is new and needs to be scheduled;
    so it runs the scheduling calculation and finds the least loaded node.
    It then updates the assignment through the apiserver: pod.spec.Node = nodeA (a concrete node).
    All of the information from these steps is likewise written to the etcd database.

  5. The kubelet watches the apiserver (whose records are stored in etcd) and notices that a Pod has been bound to a node;
    if that node matches its own (i.e. the scheduler assigned this Pod to it),
    it calls the container runtime API on the node (e.g. the Docker API) to create the containers.

    Source: https://www.cnblogs.com/chaojiyingxiong/p/14146431.html

    The article "kube-scheduler原理介紹及分析" on this topic is also excellent and is recorded here for later reference: https://blog.csdn.net/li_101357/article/details/89980217
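The same flow can be observed from the client side by raising kubectl's verbosity, which prints each REST call made to the apiserver (the Pod name and image below are only illustrative):

kubectl run net-test0 --image=alpine --v=8 -- sleep 3600   # --v=8 logs the HTTP requests sent to the apiserver
kubectl get pod net-test0 -o jsonpath='{.spec.nodeName}'   # once scheduled, the binding appears here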

3 Deploying a Kubernetes Cluster from Binaries

The deployment below uses the kubeasz tool and follows its documentation. The project is hosted on GitHub: https://github.com/easzlab/kubeasz

3.1 Server Planning and Initial Configuration

3.1.1 Cluster Plan

Role           Count   Description
deploy node    1       Runs the ansible/ezctl commands
master node    3       An HA cluster needs at least 2 master nodes
node node      3       Runs the application workloads; machine specs and node count can be increased as needed
etcd node      3       Note that the etcd cluster needs an odd number of members (1, 3, 5, ...); usually co-located with the master nodes

3.1.2 Server Plan

IP             hostname
192.168.2.10 k8s-deploy
192.168.2.11 k8s-master1
192.168.2.12 k8s-master2
192.168.2.13 k8s-master3
192.168.2.14 k8s-etcd1
192.168.2.15 k8s-etcd2
192.168.2.16 k8s-etcd3
192.168.2.17 k8s-node1
192.168.2.18 k8s-node2
192.168.2.19 k8s-node3

3.1.3 Hardware Requirements

master nodes: 4 CPUs / 8 GB RAM / 100 GB disk
worker nodes: 8 CPUs / 16 GB RAM / 100 GB disk recommended
Note: with the default configuration, containers/kubelet use disk space under /var. If the disk is partitioned differently, set the container/kubelet data directories in config.yml: CONTAINERD_STORAGE_DIR, DOCKER_STORAGE_DIR, KUBELET_ROOT_DIR.
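For example, to move all of them onto a dedicated data disk, the variables in clusters/<cluster-name>/config.yml could be set as follows (the /data paths are only an assumption for illustration):

CONTAINERD_STORAGE_DIR: "/data/containerd"
DOCKER_STORAGE_DIR: "/data/docker"
KUBELET_ROOT_DIR: "/data/kubelet"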

3.1.4 Generate an SSH key on the deploy node and distribute it to the other nodes so that the deploy node can log in to them via SSH without a password.

root@k8-deploy:~# apt install sshpass -y
root@k8s-deploy:~# for i in `seq 11 19`;do sshpass -p yan123.. ssh-copy-id 192.168.2.$i  -o StrictHostKeyChecking=no; done

3.1.5 Install pip3 and use it to install ansible

root@k8s-deploy:~# apt install python3-pip -y
root@k8s-deploy:~# pip3 install ansible -i https://mirrors.aliyun.com/pypi/simple/

3.2 kubeasz Configuration

3.2.1 Download the kubeasz ezdown script

root@k8s-deploy:~# export release=3.1.0
root@k8s-deploy:~# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
root@k8s-deploy:~# chmod +x ./ezdown

3.2.2 Set the Docker and k8s versions in the ezdown script

root@k8s-deploy:~# vim ezdown
DOCKER_VER=19.03.15
K8S_BIN_VER=v1.21.0

3.2.3 Download the project source, binaries, and offline images

root@k8-deploy:~# ./ezdown -D

3.3 Cluster Installation

3.3.1 Create a cluster configuration instance

root@k8-deploy:~# cd /etc/kubeasz/
root@k8-deploy:/etc/kubeasz# ./ezctl new k8s-fx01
2021-09-18 15:37:02 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-fx01
2021-09-18 15:37:02 DEBUG set version of common plugins
2021-09-18 15:37:03 DEBUG cluster k8s-fx01: files successfully created.
2021-09-18 15:37:03 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-fx01/hosts'
2021-09-18 15:37:03 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-fx01/config.yml'

3.3.2 Edit the cluster configuration files

After the cluster instance is created, a directory named after the cluster is created under /etc/kubeasz/clusters/ containing two configuration files.
The changes to the hosts file are shown below. One master and one node are deliberately left unconfigured for now, so that adding nodes can be tested separately later.

[etcd]
192.168.2.14
192.168.2.15
192.168.2.16

# master node(s)
[kube_master]
192.168.2.11
192.168.2.12

# work node(s)
[kube_node]
192.168.2.17
192.168.2.18

# 192.168.1.8 and 192.168.1.170 are the IPs of the two harbor servers; 192.168.1.110 is the harbor proxy VIP
[ex_lb]
192.168.1.8 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.110 EX_APISERVER_PORT=8443
192.168.1.170 LB_ROLE=master EX_APISERVER_VIP=192.168.1.110 EX_APISERVER_PORT=8443

CLUSTER_NETWORK="calico"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.0.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.100.0.0/16"

NODE_PORT_RANGE="30000-65000"

CLUSTER_DNS_DOMAIN="fx.local"

bin_dir="/usr/local/bin"

The changes to config.yml are shown below:

# [containerd] base ("pause") container image
SANDBOX_IMAGE: "192.168.1.110/k8s/easzlab-pause-amd64:3.4.1"

# [docker] trusted HTTP registries
INSECURE_REG: '["127.0.0.1/8","192.168.1.110"]'


# maximum number of pods per node
MAX_PODS: 300

# automatic coredns installation
dns_install: "no"
ENABLE_LOCAL_DNS_CACHE: false

# automatic metrics-server installation
metricsserver_install: "no"

# automatic dashboard installation
dashboard_install: "no"

The easzlab-pause-amd64 image referenced in the config file is normally pulled from the official registry; it can be downloaded in advance, pushed to the local intranet harbor, and the config file pointed at the local harbor address:

docker pull easzlab/pause-amd64:3.4.1
docker tag easzlab/pause-amd64:3.4.1  192.168.1.110/k8s/easzlab-pause-amd64:3.4.1
docker push 192.168.1.110/k8s/easzlab-pause-amd64:3.4.1

3.3.3 Step-by-step installation

View the ezctl step-by-step setup help

root@k8-deploy:/etc/kubeasz# ./ezctl help setup
Usage: ezctl setup <cluster> <step>
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings 
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01  (or ./ezctl setup test-k8s prepare)
          ./ezctl setup test-k8s 02  (or ./ezctl setup test-k8s etcd)
          ./ezctl setup test-k8s all
          ./ezctl setup test-k8s 04 -t restart_master

Edit the 01.prepare.yml playbook

vim playbooks/01.prepare.yml
Remove the following entries:
  - ex_lb
  - chrony

01 - Create certificates and prepare the environment

This step mainly does the following:

  • (optional) role:os-harden, optional OS hardening that follows the Linux security baseline; see upstream for details
  • (optional) role:chrony, optional time synchronization between cluster nodes
  • role:deploy, creates the CA certificate and the various kubeconfig files that cluster components need to access the apiserver
  • role:prepare, basic system environment configuration, CA certificate distribution, kubectl client installation

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 01

02 - Install the etcd cluster

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 02

After etcd is installed, check whether the cluster is healthy:

root@k8-etcd1:~# for i in `seq 14 16`;do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://192.168.2.${i}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health;done
https://192.168.2.14:2379 is healthy: successfully committed proposal: took = 23.942932ms
https://192.168.2.15:2379 is healthy: successfully committed proposal: took = 38.030463ms
https://192.168.2.16:2379 is healthy: successfully committed proposal: took = 25.813005ms

03 - Install the container runtime (docker)

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 03

04 - Install the kube_master nodes

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 04

Run kubectl get componentstatus to verify the main components of the master nodes:

root@k8-deploy:/etc/kubeasz# kubectl get componentstatus 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   

05 - Install the kube_node nodes

kube_node hosts are the cluster nodes that run the workloads. The kube_master nodes must be deployed first. The following components are deployed on each kube_node:

  • kubelet: the most important component on a kube_node
  • kube-proxy: publishes application Services and provides load balancing
  • haproxy: forwards requests to the multiple apiservers; see the HA-2x architecture
  • calico: configures the container network (or another network plugin)

Edit the kube-proxy template to add the ipvs proxy mode settings:

vim /etc/kubeasz/roles/kube-node/templates/kube-proxy-config.yaml.j2
...
mode: "{{ PROXY_MODE }}"
ipvs:
  scheduler: rr

Install the kube_node nodes

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 05

Verify the node status:

root@k8-deploy:/etc/kubeasz# kubectl get node
NAME           STATUS                     ROLES    AGE   VERSION
192.168.2.11   Ready,SchedulingDisabled   master   58m   v1.21.0
192.168.2.12   Ready,SchedulingDisabled   master   58m   v1.21.0
192.168.2.17   Ready                      node     43s   v1.21.0
192.168.2.18   Ready                      node     43s   v1.21.0
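Since the kube-proxy template was changed to ipvs mode above, it is worth confirming on a node that the setting took effect (a hedged check; ipvsadm may need to be installed separately, and 10249 is kube-proxy's default metrics port):

root@k8-node1:~# curl -s 127.0.0.1:10249/proxyMode     # should print "ipvs"
root@k8-node1:~# ipvsadm -Ln | head                    # virtual servers created for cluster Services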

06 - Install the network plugin

First manually download the images needed by the network plugin, re-tag them, and push them to the local intranet harbor:

root@k8s-deploy:/etc/kubeasz# docker pull calico/cni:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker tag calico/cni:v3.15.3 192.168.1.110/k8s/calico-cni:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker push 192.168.1.110/k8s/calico-cni:v3.15.3

root@k8s-deploy:/etc/kubeasz# docker pull calico/pod2daemon-flexvol:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker tag calico/pod2daemon-flexvol:v3.15.3 192.168.1.110/k8s/calico-pod2daemon-flexvol:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker push 192.168.1.110/k8s/calico-pod2daemon-flexvol:v3.15.3

root@k8s-deploy:/etc/kubeasz# docker pull calico/node:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker tag calico/node:v3.15.3  192.168.1.110/k8s/calico-node:v3.15.3   
root@k8s-deploy:/etc/kubeasz# docker push 192.168.1.110/k8s/calico-node:v3.15.3 

root@k8s-deploy:/etc/kubeasz# docker pull calico/kube-controllers:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker tag calico/kube-controllers:v3.15.3 192.168.1.110/k8s/calico-kube-controllers:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker push 192.168.1.110/k8s/calico-kube-controllers:v3.15.3

Edit the template and replace the image addresses with the local intranet harbor addresses. After the change:

root@k8-deploy:/etc/kubeasz# grep image roles/calico/templates/calico-v3.15.yaml.j2 -n
212:          image: 192.168.1.110/k8s/calico-cni:v3.15.3
251:          image: 192.168.1.110/k8s/calico-pod2daemon-flexvol:v3.15.3
262:          image: 192.168.1.110/k8s/calico-node:v3.15.3
488:          image: 192.168.1.110/k8s/calico-kube-controllers:v3.15.3

Run the installation

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 06

Verify after the installation completes:

root@k8-node1:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 192.168.2.11 | node-to-node mesh | up    | 10:39:20 | Established |
| 192.168.2.18 | node-to-node mesh | up    | 10:39:23 | Established |
| 192.168.2.12 | node-to-node mesh | up    | 10:39:25 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

Manually create three Pods for a network test

root@k8-deploy:~# kubectl run net-test1 --image 192.168.1.110/test/alpine:v1 sleep 30000 
pod/net-test1 created
root@k8-deploy:~# kubectl run net-test2 --image 192.168.1.110/test/alpine:v1 sleep 30000  
pod/net-test2 created
root@k8-deploy:~# kubectl run net-test3 --image 192.168.1.110/test/alpine:v1 sleep 30000  
pod/net-test3 created

root@k8-deploy:~# kubectl get pod -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
default       net-test1                                  1/1     Running   0          25s     10.100.224.65    192.168.2.18   <none>           <none>
default       net-test2                                  1/1     Running   0          18s     10.100.172.193   192.168.2.17   <none>           <none>
default       net-test3                                  1/1     Running   0          14s     10.100.224.66    192.168.2.18   <none>           <none>
kube-system   calico-kube-controllers-85f8dc6778-4cdk4   1/1     Running   0          3d19h   192.168.2.17     192.168.2.17   <none>           <none>
kube-system   calico-node-6zb7v                          1/1     Running   0          3d19h   192.168.2.18     192.168.2.18   <none>           <none>
kube-system   calico-node-ffmv2                          1/1     Running   0          3d19h   192.168.2.11     192.168.2.11   <none>           <none>
kube-system   calico-node-m4npt                          1/1     Running   0          3d19h   192.168.2.12     192.168.2.12   <none>           <none>
kube-system   calico-node-qx9lf                          1/1     Running   0          3d19h   192.168.2.17     192.168.2.17   <none>           <none>

root@k8-deploy:~# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 10.100.172.193
PING 10.100.172.193 (10.100.172.193): 56 data bytes
64 bytes from 10.100.172.193: seq=0 ttl=62 time=1.447 ms
64 bytes from 10.100.172.193: seq=1 ttl=62 time=1.234 ms
^C
--- 10.100.172.193 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.234/1.340/1.447 ms
/ # ping 10.100.224.66
PING 10.100.224.66 (10.100.224.66): 56 data bytes
64 bytes from 10.100.224.66: seq=0 ttl=63 time=0.310 ms
64 bytes from 10.100.224.66: seq=1 ttl=63 time=0.258 ms
^C
--- 10.100.224.66 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.258/0.284/0.310 ms

Install coredns

Manually download the coredns image

root@k8-deploy:~# docker pull coredns/coredns:1.8.3

If the pull fails, save the image to a file by other means first and then import it onto the cluster servers.
Download address for version 1.8.3: https://hub.docker.com/layers/coredns/coredns/1.8.3/images/sha256-95552cb6e83c78034bf6112a8e014932fb58e617aacf602997b10e80228fd697?context=explore
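For example, on a machine that can reach the registry, the image can be exported with docker save and copied to the deploy node (a sketch; the filename matches the archive loaded below, and 192.168.2.10 is the deploy node from the server plan):

docker pull k8s.gcr.io/coredns/coredns:v1.8.3
docker save k8s.gcr.io/coredns/coredns:v1.8.3 | gzip > coredns-image-v1.8.3.tar.gz
scp coredns-image-v1.8.3.tar.gz root@192.168.2.10:~/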

root@k8-deploy:~# docker load -i coredns-image-v1.8.3.tar.gz 
85c53e1bd74e: Loading layer [==================================================>]  43.29MB/43.29MB
Loaded image: k8s.gcr.io/coredns/coredns:v1.8.3

Then re-tag the official image and push it to the local harbor

root@k8s-deploy:~/k8s# docker tag k8s.gcr.io/coredns/coredns:v1.8.3 192.168.1.110/k8s/coredns:v1.8.3
root@k8s-deploy:~/k8s# docker push 192.168.1.110/k8s/coredns:v1.8.3

Prepare the coredns.yaml file

root@k8-deploy:~# wget https://dl.k8s.io/v1.21.4/kubernetes.tar.gz

root@k8-deploy:~# tar xf kubernetes.tar.gz

root@k8-deploy:~# cd /kubernetes/cluster/addons/dns/coredns

root@k8-deploy:~/kubernetes/cluster/addons/dns/coredns# cp coredns.yaml.base coredns.yaml

Modify the following lines in coredns.yaml:

63         kubernetes fx.local in-addr.arpa ip6.arpa {
67         forward . 223.5.5.5 {
120         image: 192.168.1.110/k8s/coredns:v1.8.3
124             memory: 256Mi
187   type: NodePort
201     targetPort: 9153
202     nodePort: 30009

Install coredns

root@k8-deploy:~# kubectl apply -f coredns.yaml       
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

root@k8-deploy:~# kubectl get pod -A -o wide
kube-system   coredns-778bbd987f-g42q8                   1/1     Running   0          2m26s   10.100.172.194   192.168.2.17   <none>           <none>

Enter a Pod and test name resolution:

root@k8-deploy:~# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping www.baidu.com
PING www.baidu.com (110.242.68.3): 56 data bytes
64 bytes from 110.242.68.3: seq=0 ttl=52 time=17.069 ms
64 bytes from 110.242.68.3: seq=1 ttl=52 time=17.331 ms
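Resolution of in-cluster Service names through CoreDNS can be checked the same way (a hedged example; fx.local is the CLUSTER_DNS_DOMAIN configured in the hosts file, and the result should be the ClusterIP of the kubernetes Service, normally the first address of SERVICE_CIDR):

/ # nslookup kubernetes.default.svc.fx.local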

Install dashboard

First manually download the required images and push them to the local harbor

root@k8s-deploy:~/k8s# docker pull kubernetesui/dashboard:v2.3.1
root@k8s-deploy:~/k8s# docker tag kubernetesui/dashboard:v2.3.1 192.168.1.110/k8s/kubernetesui-dashboard:v2.3.1
root@k8s-deploy:~/k8s# docker push 192.168.1.110/k8s/kubernetesui-dashboard:v2.3.1


root@k8s-master1:~# docker pull kubernetesui/metrics-scraper:v1.0.6
root@k8s-master1:~# docker tag kubernetesui/metrics-scraper:v1.0.6 192.168.1.110/k8s/kubernetesui-metrics-scraper:v1.0.6
root@k8s-master1:~# docker push 192.168.1.110/k8s/kubernetesui-metrics-scraper:v1.0.6

Edit the configuration file

# If the download fails, open the URL in a browser and copy the content into a file on the server.
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
mv recommended.yaml dashboard-v2.3.1.yaml


vim dashboard-v2.3.1.yaml

 40   type: NodePort
 41   ports:
 42     - port: 443
 43       targetPort: 8443
 44       nodePort: 30002
192           image: 192.168.1.110/k8s/kubernetesui-dashboard:v2.3.1
277           image: 192.168.1.110/k8s/kubernetesui-metrics-scraper:v1.0.6

Install dashboard

root@k8-deploy:~# kubectl apply -f dashboard-v2.3.1.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

root@k8-deploy:~# kubectl get pod -A -o wide |grep dash
kubernetes-dashboard   dashboard-metrics-scraper-7459c89f54-g27ls   1/1     Running   0          33s     10.100.224.69    192.168.2.18   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-dfcb6dcdb-2dzxd         1/1     Running   0          33s     10.100.224.68    192.168.2.18   <none>           <none>

Prepare a configuration file with the ServiceAccount and permissions needed to obtain a token for logging in to the Dashboard web UI

root@k8-deploy:~# cat admin-user.yml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Create the account

root@k8-deploy:~# kubectl apply -f admin-user.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Get the account's token

root@k8-deploy:~# kubectl get secrets -A |grep admin 
kubernetes-dashboard   admin-user-token-7zrzk                           kubernetes.io/service-account-token   3      2m11s
root@k8-deploy:~#  kubectl describe secrets admin-user-token-7zrzk -n kubernetes-dashboard     
Name:         admin-user-token-7zrzk
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: ebcd7707-19bf-45e5-96d4-d49c5fa4ac93

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjhtdjZFNVdBVlZZWEJyODE0bDdZYy1hb1BJNUxldVNzWG9haVZIQXZraDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTd6cnprIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlYmNkNzcwNy0xOWJmLTQ1ZTUtOTZkNC1kNDljNWZhNGFjOTMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.qjhBK2i-d-IRktF-w4q5xLgmyVPe2qCiMHJ09nBPJLnUZyYUSQdhYogSi7Gdc3M5NoRspoLRX-fxwYEsYxK3bZYoePk8zEbx8_WS87H9KncjUCRLrxGXjwiVkbVg4DJc1ewziRaEFUKIPCneuVksHDAEu3CBkqYMCYROIj7MLHIJKT1EzrzVG5IWoov0t6exNJKFkpxRovF1WvpDU2qXbFgkCjf_alm7PdoxeU-ACwqjVc_-5eXqOwKPh1MKHQT2Z7ZzvrKZhSlyDWXLAryPw2klpjZezxo5-Q0JFBtCqCRSl2pLvnLPBN6NfdT32Ej139_cXrqgFG5h4k8FvpGgUg
root@k8-deploy:~# 
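The token can also be extracted in a single line (a hedged alternative; it relies on the ServiceAccount secret that is auto-created in k8s v1.21):

root@k8-deploy:~# kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d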

Use the token to log in to the Dashboard web UI (screenshot omitted).

3.3.4 Add a master node

Check the current cluster node status

root@k8-deploy:/etc/kubeasz# kubectl get node
NAME           STATUS                     ROLES    AGE   VERSION
192.168.2.11   Ready,SchedulingDisabled   master   8d    v1.21.0
192.168.2.12   Ready,SchedulingDisabled   master   8d    v1.21.0
192.168.2.17   Ready                      node     8d    v1.21.0
192.168.2.18   Ready                      node     8d    v1.21.0

Add the master node

root@k8-deploy:/etc/kubeasz# ./ezctl add-master k8s-fx01 192.168.2.13

Check the cluster node status again to verify the node was added successfully

root@k8-deploy:/etc/kubeasz# kubectl get node
NAME           STATUS                     ROLES    AGE    VERSION
192.168.2.11   Ready,SchedulingDisabled   master   8d     v1.21.0
192.168.2.12   Ready,SchedulingDisabled   master   8d     v1.21.0
192.168.2.13   Ready,SchedulingDisabled   master   6m4s   v1.21.0
192.168.2.17   Ready                      node     8d     v1.21.0
192.168.2.18   Ready                      node     8d     v1.21.0

3.3.5 Add a node

Command to add a node

root@k8-deploy:/etc/kubeasz# ./ezctl add-node k8s-fx01 192.168.2.19

Check the cluster node status to verify the node was added successfully

root@k8-deploy:/etc/kubeasz# kubectl get node                      
NAME           STATUS                     ROLES    AGE     VERSION
192.168.2.11   Ready,SchedulingDisabled   master   8d      v1.21.0
192.168.2.12   Ready,SchedulingDisabled   master   8d      v1.21.0
192.168.2.13   Ready,SchedulingDisabled   master   20m     v1.21.0
192.168.2.17   Ready                      node     8d      v1.21.0
192.168.2.18   Ready                      node     8d      v1.21.0
192.168.2.19   Ready                      node     4m21s   v1.21.0

