1. The five Kubernetes components
The three master-node components
kube-apiserver
The single entry point to the whole cluster; it provides authentication, authorization, admission control, and API registration and discovery.
kube-controller-manager (controller manager)
Maintains cluster state (failure detection, automatic scaling, rolling updates, and so on), driving resources toward their desired values.
kube-scheduler (scheduler)
Schedules Pods onto suitable nodes according to scheduling policy, in two phases: predicate (filtering) policies and priority (scoring) policies.
The two node components
kubelet
The agent that runs on each cluster node. The kubelet uses various mechanisms to make sure containers are running and healthy; it does not manage containers that were not created by Kubernetes. It receives the desired state of a Pod (replica count, image, network, and so on) and calls the container runtime to realize that state.
The kubelet periodically reports node status to the apiserver, which the scheduler uses as a basis for scheduling. It also cleans up unused images and containers to avoid wasting disk space.
kube-proxy
kube-proxy is a network proxy that runs on every cluster node and is one of the components implementing the Service resource. It connects the Pod network to the cluster (Service) network. The Service forwarding rules on each node are maintained by kube-proxy, which talks to the apiserver (backed by etcd) to keep the rules up to date.
Service traffic can be forwarded in three modes: userspace (deprecated; very poor performance), iptables (poor performance, complex rule sets, being phased out), and ipvs (good performance, clear forwarding rules). An example of inspecting ipvs rules follows.
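In ipvs mode the forwarding rules can be inspected directly on a node. A minimal sketch, assuming ipvsadm is installed (it is installed in section 3.1.7) and that the kubernetes Service received the first cluster IP (10.255.0.1):
# List every IPVS virtual server and its real-server backends
ipvsadm -Ln
# Inspect a single virtual server, e.g. the kubernetes apiserver Service
ipvsadm -Ln -t 10.255.0.1:443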
2. Cluster architecture
Role | IP (VIP 10.252.4.10) | Components
km1 | 10.252.4.11 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, nginx, keepalived
km2 | 10.252.4.12 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, nginx, keepalived
kn1 | 10.252.4.13 | kubelet, kube-proxy, docker, etcd
kn2 | 10.252.4.14 | kubelet, kube-proxy, docker
dev | 10.252.4.2 | nfs, dns, docker
3. Building the cluster
3.1 Basic machine configuration
Perform the following configuration on all five machines.
3.1.1 Set hostnames
Set the hostnames: km1, km2, kn1, kn2.
3.1.2 Configure the hosts file
Add the cluster entries to /etc/hosts on every machine:
cat >> /etc/hosts << EOF
10.252.4.11 km1
10.252.4.12 km2
10.252.4.13 kn1
10.252.4.14 kn2
EOF
3.1.3 Disable the firewall and SELinux
systemctl stop firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
3.1.4 Disable swap
swapoff -a
To disable swap permanently, edit /etc/fstab and comment out the swap line, as sketched below.
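A one-liner sketch of the permanent change, assuming the swap entry in /etc/fstab contains the word "swap":
# Comment out any uncommented fstab line that references swap
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab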
3.1.5 Time synchronization
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources
3.1.6 Kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter    # the bridge-nf-call settings require the br_netfilter module to be loaded
sysctl --system
3.1.7 Load the ipvs modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
lsmod | grep ip_vs
lsmod | grep nf_conntrack_ipv4
yum install -y ipvsadm
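These modprobe calls do not survive a reboot. One way to persist them is a sketch like the following, assuming systemd-modules-load is in use (note that on kernels 4.19 and later the conntrack module is named nf_conntrack rather than nf_conntrack_ipv4):
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load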
3.2 Set up the working directory
Every machine needs certificate files, component configuration files, and component systemd unit files. We generate all of them centrally on km1 and then distribute them to the other machines. Run the following on km1.
[root@km1 ~]# mkdir -p /data/work
Note: all configuration and certificate files are generated in this directory, and all file-generation steps below run in it.
[root@km1 ~]# ssh-keygen -t rsa -b 2048
[root@km1 ~]# ssh-copy-id -i .ssh/id_rsa.pub km2
Distribute the key to the other four machines so that km1 can log in to all of them without a password, as sketched below.
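A loop such as the following would distribute the key; it is a sketch that assumes the hostnames from /etc/hosts resolve and prompts once for each root password:
for host in km2 kn1 kn2 dev; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
done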
3.3 Build the etcd cluster
3.3.1 Create the etcd working directories
[root@km1 ~]# mkdir -p /etc/etcd        # configuration directory
[root@km1 ~]# mkdir -p /etc/etcd/ssl    # certificate directory
3.3.2 Create the etcd certificates
Download the tools:
[root@km1 ~]# cd /data/work/
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
Create the CA CSR file
[root@km1 work]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "beijing",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "175200h"
  }
}
Notes:
CN: Common Name. kube-apiserver takes this field from the certificate as the requesting user name (User Name); browsers use it to check whether a site is legitimate.
O: Organization. kube-apiserver takes this field from the certificate as the group (Group) the requesting user belongs to.
Create the CA certificate
[root@km1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Create the CA signing policy
[root@km1 work]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "175200h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "175200h"
      }
    }
  }
}
Create the etcd CSR file
[root@km1 work]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.252.4.11",
    "10.252.4.12",
    "10.252.4.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@km1 work]# ls etcd*.pem
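To confirm the hosts list made it into the certificate as SANs, it can be inspected with the cfssl-certinfo tool downloaded above, or with openssl; a quick sketch:
# Both commands should show the three etcd node IPs
cfssl-certinfo -cert etcd.pem
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'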
3.3.3 Deploy the etcd cluster
[root@km1 work]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@km1 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@km1 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
[root@km1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* km2:/usr/local/bin/
[root@km1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* kn1:/usr/local/bin/
Create the configuration file
[root@km1 work]# vim etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.252.4.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.252.4.11:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.252.4.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.252.4.11:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.252.4.11:2380,etcd2=https://10.252.4.12:2380,etcd3=https://10.252.4.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
Create the systemd unit file
[root@km1 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute the files to the other etcd nodes
[root@km1 work]# cp ca*.pem etcd*.pem /etc/etcd/ssl/
[root@km1 work]# cp etcd.conf /etc/etcd/
[root@km1 work]# cp etcd.service /usr/lib/systemd/system/
[root@km1 work]# scp ca*.pem etcd*.pem km2:/etc/etcd/ssl/
[root@km1 work]# scp etcd.conf km2:/etc/etcd/
[root@km1 work]# scp etcd.service km2:/usr/lib/systemd/system/
[root@km1 work]# scp ca*.pem etcd*.pem kn1:/etc/etcd/ssl/
[root@km1 work]# scp etcd.conf kn1:/etc/etcd/
[root@km1 work]# scp etcd.service kn1:/usr/lib/systemd/system/
Note: on km2 and kn1, change ETCD_NAME in the configuration file (to etcd2 and etcd3 respectively) and the IP addresses, and create the directory /var/lib/etcd/default.etcd.
Start the etcd cluster (run the following on km1, km2, and kn1):
[root@km1 work]# mkdir -p /var/lib/etcd/default.etcd
[root@km1 work]# systemctl daemon-reload
[root@km1 work]# systemctl start etcd.service
Note: start all three nodes at the same time
[root@km1 work]# systemctl status etcd.service
[root@km1 work]# systemctl enable etcd.service
Check the cluster status
[root@km1 work]# etcdctl member list
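The command above uses the plaintext listener on 127.0.0.1. To check health over the TLS endpoints from any node, something like this sketch (etcdctl v3 flags, with the certificates generated earlier) should work:
etcdctl --endpoints=https://10.252.4.11:2379,https://10.252.4.12:2379,https://10.252.4.13:2379 \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health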
3.4 Deploying the Kubernetes components
3.4.1 Download the packages
[root@km1 work]# wget https://dl.k8s.io/v1.20.1/kubernetes-server-linux-amd64.tar.gz
[root@km1 work]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@km1 work]# cd kubernetes/server/bin/
[root@km1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@km1 bin]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl km2:/usr/local/bin/
[root@km1 bin]# scp kubelet kube-proxy kn1:/usr/local/bin/
[root@km1 bin]# scp kubelet kube-proxy kn2:/usr/local/bin/
[root@km1 bin]# cd /data/work/
3.4.2 Create the working directories (on km1 and km2)
[root@km1 work]# mkdir -p /etc/kubernetes/      # component configuration files
[root@km1 work]# mkdir -p /etc/kubernetes/ssl   # component certificate files
[root@km1 work]# mkdir /var/log/kubernetes      # component log files
3.4.3 Deploy kube-apiserver
Create the CSR file
[root@km1 work]# vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.252.4.11",
    "10.252.4.12",
    "10.252.4.13",
    "10.252.4.10",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Notes:
If the hosts field is not empty, it must list every IP or domain name authorized to use the certificate.
Because this certificate is used by the whole master cluster, every master node IP must be included, along with the first IP of the Service network (normally the first IP of the range given to kube-apiserver as --service-cluster-ip-range, here 10.255.0.1).
Generate the certificate and token file
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@km1 work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
Create the configuration file
[root@km1 work]# vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=10.252.4.11 \
  --secure-port=6443 \
  --advertise-address=10.252.4.11 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://10.252.4.11:2379,https://10.252.4.12:2379,https://10.252.4.13:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=2 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
Note: --service-account-signing-key-file and --service-account-issuer are mandatory on 1.20 and later. Do not place # comments inside the quoted option string: they would be passed to kube-apiserver as arguments.
Notes:
--logtostderr: enable logging (to stderr)
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: https port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization modes; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to talk to kubelets
--tls-xxx-file: apiserver https certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging
Create the systemd unit file
[root@km1 work]# vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute the files to the other master
[root@km1 work]# cp ca*.pem kube-apiserver*.pem /etc/kubernetes/ssl/
[root@km1 work]# cp token.csv kube-apiserver.conf /etc/kubernetes/
[root@km1 work]# cp kube-apiserver.service /usr/lib/systemd/system/
[root@km1 work]# scp ca*.pem kube-apiserver*.pem km2:/etc/kubernetes/ssl/
[root@km1 work]# scp token.csv kube-apiserver.conf km2:/etc/kubernetes/
[root@km1 work]# scp kube-apiserver.service km2:/usr/lib/systemd/system/
Note: on km1 and km2, the IP addresses in the configuration file must be set to each machine's own IP.
Start the service
[root@km1 work]# systemctl daemon-reload
[root@km1 work]# systemctl start kube-apiserver
[root@km1 work]# systemctl status kube-apiserver
[root@km1 work]# systemctl enable kube-apiserver
[root@km1 work]# netstat -nltup|grep kube-api
3.4.4 Deploy the layer-4 reverse proxy
Install nginx and keepalived on both master (km) nodes:
[root@km1 work]# yum install nginx keepalived -y
[root@km1 work]# vi /etc/nginx/nginx.conf
stream {
    upstream kube-apiserver {
        server 10.252.4.11:6443 max_fails=3 fail_timeout=30s;
        server 10.252.4.12:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
[root@km1 work]# nginx -t

Port-check script:
[root@km1 work]# vi /etc/keepalived/check_port.sh
#!/bin/bash
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
fi
[root@km1 work]# chmod +x /etc/keepalived/check_port.sh

keepalived configuration on the master (km1):
[root@km1 work]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 10.252.4.11
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.252.4.11
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.252.4.10
    }
}

keepalived configuration on the backup (km2):
! Configuration File for keepalived
global_defs {
    router_id 10.252.4.12
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 251
    mcast_src_ip 10.252.4.12
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.252.4.10
    }
}
nopreempt: non-preemptive mode

Start the proxy and verify:
systemctl start nginx keepalived
systemctl enable nginx keepalived
netstat -lntup|grep nginx
ip add
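A quick failover test for the VIP; a sketch in which the expected behaviour is that 10.252.4.10 moves to km2 when nginx on km1 dies and, because of nopreempt, stays there after nginx recovers:
# The VIP should initially sit on km1
[root@km1 ~]# ip addr show eth0 | grep 10.252.4.10
# Simulate a failure; check_port.sh sees port 7443 gone and keepalived lowers its priority
[root@km1 ~]# systemctl stop nginx
[root@km2 ~]# ip addr show eth0 | grep 10.252.4.10
# Restore nginx; with nopreempt the VIP remains on km2
[root@km1 ~]# systemctl start nginx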
3.4.5 Deploy kubectl
Create the CSR file
[root@km1 work]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
Explanation:
kube-apiserver later uses RBAC to authorize requests from clients such as kubelet, kube-proxy, and Pods;
kube-apiserver predefines some RoleBindings for RBAC use; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API. The O field sets this certificate's Group to system:masters: when the certificate is used against kube-apiserver, authentication succeeds because it is signed by our CA, and since its group is the pre-authorized system:masters, the client is granted access to all APIs.
Note:
This admin certificate is used later to generate the administrator's kubeconfig file. RBAC is the recommended way to manage roles and permissions in Kubernetes, and Kubernetes takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding step fails.
Generate the certificate
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@km1 work]# cp admin*.pem /etc/kubernetes/ssl/
Create the kubeconfig file
The kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate.
Set cluster parameters:
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kube.config
Set client credentials:
[root@km1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
Set the context:
[root@km1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Switch to the default context:
[root@km1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@km1 work]# mkdir ~/.kube
[root@km1 work]# cp kube.config ~/.kube/config
Grant the kubernetes certificate access to the kubelet API:
[root@km1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Check cluster component status
Once the steps above are complete, kubectl can communicate with kube-apiserver:
[root@km1 work]# kubectl cluster-info
[root@km1 work]# kubectl get componentstatuses
[root@km1 work]# kubectl get all --all-namespaces
Copy the kubectl configuration to the other master
[root@km1 work]# scp -rp /root/.kube/ km2:/root/
Configure kubectl command completion
[root@km1 work]# yum install -y bash-completion
[root@km1 work]# source /usr/share/bash-completion/bash_completion
[root@km1 work]# source <(kubectl completion bash)
[root@km1 work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@km1 work]# source '/root/.kube/completion.bash.inc'
[root@km1 work]# source $HOME/.bash_profile
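The source commands above only affect the current shell. To keep completion across logins, one option (a sketch) is to append the two source lines to the shell profile:
cat >> ~/.bash_profile << 'EOF'
source /usr/share/bash-completion/bash_completion
source /root/.kube/completion.bash.inc
EOF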
3.4.6 Deploy kube-controller-manager
Create the CSR file
[root@km1 work]# vim kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.252.4.11",
    "10.252.4.12"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
Notes:
The hosts list contains the IPs of all kube-controller-manager nodes;
CN is system:kube-controller-manager and O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
Generate the certificate
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@km1 work]# ls kube-controller-manager*.pem
Create the kube-controller-manager kubeconfig
Set cluster parameters:
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kube-controller-manager.kubeconfig
Set client credentials:
[root@km1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
Set the context:
[root@km1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switch to the default context:
[root@km1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Create the configuration file
[root@km1 work]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=175200h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
Create the systemd unit file
[root@km1 work]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Distribute the files to the other master (see the sketch after the commands below)
[root@km1 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@km1 work]# cp kube-controller-manager.conf kube-controller-manager.kubeconfig /etc/kubernetes/
[root@km1 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
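By analogy with the kube-scheduler steps below, the files would presumably also be copied to km2 and the service started on both masters; a rough sketch:
[root@km1 work]# scp kube-controller-manager*.pem km2:/etc/kubernetes/ssl/
[root@km1 work]# scp kube-controller-manager.conf kube-controller-manager.kubeconfig km2:/etc/kubernetes/
[root@km1 work]# scp kube-controller-manager.service km2:/usr/lib/systemd/system/
[root@km1 work]# systemctl daemon-reload
[root@km1 work]# systemctl start kube-controller-manager
[root@km1 work]# systemctl status kube-controller-manager
[root@km1 work]# systemctl enable kube-controller-manager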
3.4.7 Deploy kube-scheduler
Create the CSR file
[root@km1 work]# vim kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "10.252.4.11",
    "10.252.4.12"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
Notes:
The hosts list contains the IPs of all kube-scheduler nodes;
CN is system:kube-scheduler and O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
Generate the certificate
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@km1 work]# ls kube-scheduler*.pem
Create the kube-scheduler kubeconfig
Set cluster parameters:
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kube-scheduler.kubeconfig
Set client credentials:
[root@km1 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
Set the context:
[root@km1 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switch to the default context:
[root@km1 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Create the configuration file
[root@km1 work]# vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
Create the systemd unit file
[root@km1 work]# vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Distribute the files to the other master
[root@km1 work]# cp kube-scheduler.service /usr/lib/systemd/system/
[root@km1 work]# cp kube-scheduler.conf kube-scheduler.kubeconfig /etc/kubernetes/
[root@km1 work]# cp kube-scheduler.pem /etc/kubernetes/ssl/
[root@km1 work]# scp kube-scheduler.pem km2:/etc/kubernetes/ssl/
[root@km1 work]# scp kube-scheduler.kubeconfig kube-scheduler.conf km2:/etc/kubernetes/
[root@km1 work]# scp kube-scheduler.service km2:/usr/lib/systemd/system/
Start the service (on both masters)
[root@km2 ~]# systemctl daemon-reload
[root@km2 ~]# systemctl start kube-scheduler
[root@km2 ~]# systemctl status kube-scheduler
[root@km2 ~]# systemctl enable kube-scheduler
3.4.8 Deploy kubelet (the following runs on km1)
Create kubelet-bootstrap.kubeconfig:
[root@km1 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
Set cluster parameters:
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kubelet-bootstrap.kubeconfig
Set client credentials:
[root@km1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
Set the context:
[root@km1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Switch to the default context:
[root@km1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role binding:
[root@km1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Create the configuration file
[root@km1 work]# vim kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "10.252.4.11",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
Create the systemd unit file
[root@km1 work]# vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=10.252.4.11:5000/maxzhu/pause:v1 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Notes:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically on first start and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image for the container that manages the Pod network
Distribute the files to the worker nodes
[root@km1 work]# scp kubelet-bootstrap.kubeconfig kubelet.json kn1:/etc/kubernetes/
[root@km1 work]# scp kubelet.service kn1:/usr/lib/systemd/system/
[root@km1 work]# scp ca.pem kn1:/etc/kubernetes/ssl/
[root@km1 work]# scp kubelet-bootstrap.kubeconfig kubelet.json kn2:/etc/kubernetes/
[root@km1 work]# scp kubelet.service kn2:/usr/lib/systemd/system/
[root@km1 work]# scp ca.pem kn2:/etc/kubernetes/ssl/
Note: change address in kubelet.json to each node's own IP.
Start the service
Run the following on each worker node:
[root@kn1 ~]# mkdir /var/lib/kubelet
[root@kn1 ~]# mkdir /var/log/kubernetes
[root@kn1 ~]# systemctl daemon-reload
[root@kn1 ~]# systemctl enable kubelet
[root@kn1 ~]# systemctl start kubelet
[root@kn1 ~]# systemctl status kubelet
Once the kubelet service is running on the workers, approve the bootstrap requests on the master. kubectl get csr shows the CSR request each worker node has sent:
[root@km1 work]# kubectl get csr
[root@km1 work]# kubectl certificate approve node-csr-HlX3cExsZohWsu8Dd6Rp_ztFejmMdpzvti_qgxo4SAQ
[root@km1 work]# kubectl certificate approve node-csr-oykYfnH_coRF2PLJH4fOHlGznOZUBPDg5BPZXDo2wgk
[root@km1 work]# kubectl certificate approve node-csr-ytRB2fikhL6dykcekGg4BdD87o-zw9WPU44SZ1nFT50
[root@km1 work]# kubectl get csr
[root@km1 work]# kubectl get nodes
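With many nodes, approving each hash by hand is tedious; a sketch of a bulk approval that filters for requests still in the Pending state:
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve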
3.4.9 Deploy kube-proxy
Create the CSR file
[root@km1 work]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@km1 work]# ls kube-proxy*.pem
Create the kubeconfig file
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kube-proxy.kubeconfig
[root@km1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@km1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@km1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Create the kube-proxy configuration file
[root@km1 work]# vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.252.4.13
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.0.0/16
healthzBindAddress: 10.252.4.13:10256
kind: KubeProxyConfiguration
metricsBindAddress: 10.252.4.13:10249
mode: "ipvs"
Create the systemd unit file
[root@km1 work]# vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute the files to the worker nodes
[root@km1 work]# scp kube-proxy.kubeconfig kube-proxy.yaml kn1:/etc/kubernetes/
[root@km1 work]# scp kube-proxy.service kn1:/usr/lib/systemd/system/
[root@km1 work]# scp kube-proxy.kubeconfig kube-proxy.yaml kn2:/etc/kubernetes/
[root@km1 work]# scp kube-proxy.service kn2:/usr/lib/systemd/system/
Note: in kube-proxy.yaml, change the bind/healthz/metrics addresses to each node's own IP. Also make sure clusterCIDR matches the Pod network CIDR actually in use: the file above says 192.168.0.0/16 (Calico's default) while kube-controller-manager was started with --cluster-cidr=10.0.0.0/16, so align the two values.
Start the service
[root@kn1 ~]# mkdir -p /var/lib/kube-proxy
[root@kn1 ~]# systemctl daemon-reload
[root@kn1 ~]# systemctl enable kube-proxy
[root@kn1 ~]# systemctl restart kube-proxy
[root@kn1 ~]# systemctl status kube-proxy
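To confirm kube-proxy came up in ipvs mode, a quick check against the healthz and metrics addresses from kube-proxy.yaml (a sketch, run on the node itself):
# Returns a small JSON document while kube-proxy is healthy
curl http://10.252.4.13:10256/healthz
# Service rules appear as IPVS virtual servers
ipvsadm -Ln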
3.4.10 Deploy the network component
[root@km1 work]# wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
[root@km1 work]# kubectl apply -f calico.yaml
Once Calico is running, all nodes report Ready:
[root@km1 work]# kubectl get pods -A
[root@km1 work]# kubectl get nodes
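As a final smoke test, a sketch that deploys a test workload and exposes it through a NodePort in the 30000-50000 range configured earlier (the nginx image and the nginx-test name are illustrative assumptions):
[root@km1 work]# kubectl create deployment nginx-test --image=nginx
[root@km1 work]# kubectl expose deployment nginx-test --port=80 --type=NodePort
[root@km1 work]# kubectl get pods -o wide
[root@km1 work]# kubectl get svc nginx-test
# then curl any node IP on the NodePort printed by the previous command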