Deploying a Highly Available Kubernetes v1.16.4 Cluster on CentOS 7.6 (Active-Standby Mode)


I. Deployment Environment

Host list:

Hostname   CentOS version   IP              Docker version   Flannel version   Keepalived version   Spec   Notes
master01   7.6.1810         172.27.34.3     18.09.9          v0.11.0           v1.3.5               4C4G   control plane
master02   7.6.1810         172.27.34.4     18.09.9          v0.11.0           v1.3.5               4C4G   control plane
master03   7.6.1810         172.27.34.5     18.09.9          v0.11.0           v1.3.5               4C4G   control plane
work01     7.6.1810         172.27.34.93    18.09.9          /                 /                    4C4G   worker node
work02     7.6.1810         172.27.34.94    18.09.9          /                 /                    4C4G   worker node
work03     7.6.1810         172.27.34.95    18.09.9          /                 /                    4C4G   worker node
VIP        7.6.1810         172.27.34.130   18.09.9          v0.11.0           v1.3.5               4C4G   floats among the control plane nodes
client     7.6.1810         172.27.34.234   /                /                 /                    4C4G   client

There are 7 servers in total: 3 control plane nodes, 3 worker nodes, and 1 client.

Kubernetes versions:

Hostname   kubelet version   kubeadm version   kubectl version   Notes
master01   v1.16.4           v1.16.4           v1.16.4           kubectl optional
master02   v1.16.4           v1.16.4           v1.16.4           kubectl optional
master03   v1.16.4           v1.16.4           v1.16.4           kubectl optional
work01     v1.16.4           v1.16.4           v1.16.4           kubectl optional
work02     v1.16.4           v1.16.4           v1.16.4           kubectl optional
work03     v1.16.4           v1.16.4           v1.16.4           kubectl optional
client     /                 /                 v1.16.4           client

II. High-Availability Architecture

This article uses kubeadm to build a highly available Kubernetes cluster. The high availability of a Kubernetes cluster is really the high availability of its core components; an active-standby model is used here, with the following architecture:

[Architecture diagram: active-standby HA topology]

Active-standby HA architecture overview:

Core component        HA mode          HA mechanism
apiserver             active-standby   keepalived
controller-manager    active-standby   leader election
scheduler             active-standby   leader election
etcd                  cluster          kubeadm
  • apiserver achieves high availability through keepalived; when a node fails, the keepalived VIP is moved to another node.
  • controller-manager elects a leader internally (controlled by the --leader-elect flag, true by default), so only one controller-manager instance is active in the cluster at any moment (see the check sketched below).
  • scheduler elects a leader internally (controlled by the --leader-elect flag, true by default), so only one scheduler instance is active in the cluster at any moment.
  • etcd achieves high availability as a cluster created automatically by kubeadm; the number of nodes should be odd, and a 3-node cluster tolerates at most one machine failure.
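As a quick way to see leader election at work once the cluster is up, the minimal sketch below reads the --leader-elect flag from the static pod manifests that kubeadm writes under /etc/kubernetes/manifests and then shows the current lease holders via the endpoint annotations (the same annotation check is used in the failover test in part XII):

# Run on any control plane node once "kubeadm init" has completed.
# kubeadm enables leader election by default (--leader-elect=true) in the static pod manifests.
grep -- --leader-elect /etc/kubernetes/manifests/kube-controller-manager.yaml
grep -- --leader-elect /etc/kubernetes/manifests/kube-scheduler.yaml

# Show which node currently holds the controller-manager and scheduler leases.
kubectl get endpoints kube-controller-manager -n kube-system -o yaml | grep holderIdentity
kubectl get endpoints kube-scheduler -n kube-system -o yaml | grep holderIdentity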

III. Installation Preparation

Perform the steps in this part on all control plane and worker nodes.

For the CentOS 7.6 installation, see: "CentOS 7.6 operating system installation and tuning walkthrough".

The firewall and SELinux were already disabled and the Aliyun yum mirror configured during the CentOS installation.

1. Configure hostnames

1.1 Change the hostname

[root@centos7 ~]# hostnamectl set-hostname master01
[root@centos7 ~]# more /etc/hostname
master01

Log out and back in to see the new hostname master01.

1.2 Edit the hosts file

[root@master01 ~]# cat >> /etc/hosts << EOF
172.27.34.3 master01
172.27.34.4 master02
172.27.34.5 master03
172.27.34.93 work01
172.27.34.94 work02
172.27.34.95 work03
EOF

2. Verify MAC address and product_uuid

[root@master01 ~]# cat /sys/class/net/ens160/address
[root@master01 ~]# cat /sys/class/dmi/id/product_uuid

Make sure the MAC address and product_uuid are unique on every node.

3. Disable swap

3.1 Disable temporarily

[root@master01 ~]# swapoff -a

3.2 Disable permanently

To keep swap disabled after a reboot, also edit /etc/fstab and comment out the swap entry:

[root@master01 ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab

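As an optional sanity check (not part of the original steps), the following commands should show no active swap and a commented-out swap entry in /etc/fstab:

# Swap usage should be 0 after "swapoff -a".
free -m | grep -i swap

# An empty summary means no swap device is active.
swapon -s

# The swap entry in /etc/fstab should now be commented out.
grep swap /etc/fstab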

4. Kernel parameter changes

This article uses flannel for the Kubernetes network, which requires the kernel parameter bridge-nf-call-iptables=1; setting it requires the br_netfilter module.

4.1 Load the br_netfilter module

Check for the br_netfilter module:

[root@master01 ~]# lsmod |grep br_netfilter

If the module is not loaded, run the commands below to add it; otherwise skip this step.

Load br_netfilter temporarily:

[root@master01 ~]# modprobe br_netfilter

This does not survive a reboot.

Load br_netfilter permanently:

[root@master01 ~]# cat > /etc/rc.sysinit << EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
  [ -x \$file ] && \$file
done
EOF
[root@master01 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules


4.2 Change kernel parameters temporarily

[root@master01 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1

4.3 Change kernel parameters permanently

[root@master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

5. Configure the Kubernetes yum repository

5.1 Add the Kubernetes repository

[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • [kubernetes] in brackets is the repository id; it must be unique and identifies the repository
  • name is the repository name, free-form
  • baseurl is the repository URL
  • enabled controls whether the repository is enabled; 1 (the default) means enabled
  • gpgcheck controls whether packages downloaded from this repository are signature-verified; 1 means verify
  • repo_gpgcheck controls whether the repository metadata (the package list) is signature-verified; 1 means verify
  • gpgkey=URL is the location of the public key used for signature verification; it is required when gpgcheck is 1 and can be omitted when gpgcheck is 0

5.2 Refresh the yum cache

[root@master01 ~]# yum clean all
[root@master01 ~]# yum -y makecache

6. Passwordless SSH login

Configure passwordless login from master01 to master02 and master03. This step is performed on master01 only.

6.1 Create the key pair

[root@master01 ~]# ssh-keygen -t rsa

6.2 Copy the key to master02 and master03

[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.4
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.5

6.3 Test passwordless login

[root@master01 ~]# ssh 172.27.34.4
[root@master01 ~]# ssh master03

master01 can now log in to master02 and master03 directly without entering a password.

IV. Docker Installation

Perform the steps in this part on all control plane and worker nodes.

1. Install dependency packages

[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

2. Configure the Docker repository

[root@master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

3. Install Docker CE

3.1 List available Docker versions

[root@master01 ~]# yum list docker-ce --showduplicates | sort -r

3.2 Install Docker

[root@master01 ~]# yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y

This installs Docker version 18.09.9.

4. Start Docker

[root@master01 ~]# systemctl start docker
[root@master01 ~]# systemctl enable docker

5. Command completion

5.1 Install bash-completion

[root@master01 ~]# yum -y install bash-completion

5.2 Load bash-completion

[root@master01 ~]# source /etc/profile.d/bash_completion.sh

6. Registry mirror

Because Docker Hub is hosted overseas, pulling images can be slow, so a registry mirror can be configured. Common options include Docker's official China registry mirror, the Aliyun accelerator, and the DaoCloud accelerator; this article uses the Aliyun accelerator as the example.

6.1 Log in to the Aliyun Container Registry console

The login URL is https://cr.console.aliyun.com ; register an Aliyun account first if you do not have one.

6.2 Configure the registry mirror

Create the daemon.json file:

[root@master01 ~]# mkdir -p /etc/docker
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF

Restart the service:

[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker

The registry mirror is now configured.

7. Verification

[root@master01 ~]# docker --version
[root@master01 ~]# docker run hello-world

Verify the Docker installation by checking the Docker version and running the hello-world container.

8. Change the Cgroup Driver

8.1 Edit daemon.json

Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:

[root@master01 ~]# more /etc/docker/daemon.json
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

8.2 Reload Docker

[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker

Changing the cgroup driver eliminates the following kubeadm warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
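As an optional check (not part of the original steps), you can confirm that Docker is now using the systemd cgroup driver:

# Should print "Cgroup Driver: systemd" after the daemon restart.
docker info 2>/dev/null | grep -i "cgroup driver"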

V. Keepalived Installation

Perform the steps in this part on all control plane nodes.

1. Install keepalived

[root@master01 ~]# yum -y install keepalived

2. Configure keepalived

keepalived configuration on master01:

[root@master01 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.130
    }
}

keepalived configuration on master02:

[root@master02 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master02
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.130
    }
}

keepalived configuration on master03:

[root@master03 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master03
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.130
    }
}

3. Start keepalived

Start the keepalived service on all control plane nodes and enable it at boot:

[root@master01 ~]# service keepalived start
[root@master01 ~]# systemctl enable keepalived

4. Check the VIP

[root@master01 ~]# ip a

The VIP is on master01.
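As an optional quick failover test of keepalived alone (not part of the original steps; the full cluster failover test comes in part XII), you can stop the service on master01, watch the VIP move to the next-highest-priority node, and then start it again:

# On master01: stop keepalived and confirm the VIP is released.
systemctl stop keepalived
ip a | grep 172.27.34.130

# On master02 (priority 90, the highest remaining): the VIP should now appear.
ip a | grep 172.27.34.130

# Back on master01: restart keepalived; with priority 100 and default preemption it reclaims the VIP.
systemctl start keepalived
ip a | grep 172.27.34.130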

VI. Kubernetes Installation

Perform the steps in this part on all control plane and worker nodes.

1. Check available versions

[root@master01 ~]# yum list kubelet --showduplicates | sort -r

This article installs kubelet 1.16.4, which supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09.

2. Install kubelet, kubeadm and kubectl

2.1 Install the three packages

[root@master01 ~]# yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4

2.2 What the packages are for

  • kubelet runs on every node in the cluster and is responsible for starting Pods, containers and other objects
  • kubeadm is the command-line tool used to initialize and bootstrap the cluster
  • kubectl is the command-line client for talking to the cluster; it is used to deploy and manage applications, inspect resources, and create, delete and update components

2.3 Start kubelet

Start kubelet and enable it at boot:

[root@master01 ~]# systemctl enable kubelet && systemctl start kubelet

2.4 kubectl command completion

[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile

3. Download the images

3.1 Image download script

Almost all of the Kubernetes installation components and Docker images are hosted on Google's own servers, which may be unreachable. The workaround used here is to pull the images from an Aliyun image repository and re-tag them back to the default names after pulling. This article pulls the images by running the image.sh script.

[root@master01 ~]# more image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done

url is the Aliyun image repository address and version is the Kubernetes version being installed.
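Every node needs these images, so the script has to be run on each control plane and worker node. A minimal sketch for distributing and running it from master01 (only master02 and master03 have passwordless SSH configured above, so the worker hosts will prompt for a password):

# Make the script executable, then copy it to the remaining nodes and run it there.
chmod u+x image.sh
for host in master02 master03 work01 work02 work03; do
  scp image.sh root@${host}:/root/
  ssh root@${host} "bash /root/image.sh"
done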

3.2 Download the images

Run the image.sh script to download the images for the specified version:

[root@master01 ~]# ./image.sh
[root@master01 ~]# docker images

VII. Initialize the Master

Perform the steps in this part on master01.

1. kubeadm-config.yaml

[root@master01 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
  certSANs:    # list the hostnames, IPs and VIP of all kube-apiserver nodes
  - master01
  - master02
  - master03
  - node01
  - node02
  - node03
  - 172.27.34.3
  - 172.27.34.4
  - 172.27.34.5
  - 172.27.34.93
  - 172.27.34.94
  - 172.27.34.95
  - 172.27.34.130
controlPlaneEndpoint: "172.27.34.130:6443"
networking:
  podSubnet: "10.244.0.0/16"

kubeadm-config.yaml is the configuration file used for initialization.
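Optionally (not in the original steps), the same file can be used to validate the configuration and pre-pull the control plane images before running the actual init:

# Pre-pull the images referenced by the configuration
# (a quick no-op if image.sh has already been run on this node).
kubeadm config images pull --config kubeadm-config.yaml

# List the image names kubeadm expects for this configuration.
kubeadm config images list --config kubeadm-config.yaml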

2. Initialize the master

[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml

Record the kubeadm join commands from the output; they are needed later to join the worker nodes and the other control plane nodes to the cluster.

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966

If initialization fails:

If initialization fails, run kubeadm reset and then initialize again.

[root@master01 ~]# kubeadm reset
[root@master01 ~]# rm -rf $HOME/.kube/config

3. Load environment variables

[root@master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile

All operations in this article are performed as root. For a non-root user, run the following instead:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

4. Install the flannel network

Create the flannel network on master01:

[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

Due to network issues this may fail; the kube-flannel.yml file can also be downloaded from the link at the end of this article and applied locally.
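A quick optional check that the flannel DaemonSet pods come up (a sketch; it simply greps for "flannel" in kube-system rather than assuming specific labels):

# The flannel DaemonSet should run one pod per node in kube-system.
kubectl get daemonset -n kube-system | grep flannel
kubectl get pods -n kube-system -o wide | grep flannel

# Nodes switch from NotReady to Ready once the CNI network is up.
kubectl get nodes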

VIII. Join the Control Plane Nodes to the Cluster

1. Distribute the certificates

Distribute the certificates from master01:

Run the cert-main-master.sh script on master01 to copy the certificates to master02 and master03.

[root@master01 ~]# ll|grep cert-main-master.sh
-rwxr--r-- 1 root root 638 Jan  2 15:23 cert-main-master.sh
[root@master01 ~]# more cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="172.27.34.4 172.27.34.5"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done

Move the certificates into place on master02:

Run the cert-other-master.sh script on master02 to move the certificates to the expected directories.

[root@master02 ~]# pwd
/root
[root@master02 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 Jan  2 15:29 cert-other-master.sh
[root@master02 ~]# more cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master02 ~]# ./cert-other-master.sh

Move the certificates into place on master03:

Run the cert-other-master.sh script on master03 as well.

[root@master03 ~]# pwd
/root
[root@master03 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 Jan  2 15:31 cert-other-master.sh
[root@master03 ~]# ./cert-other-master.sh

2. Join master02 to the cluster

kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane

Run the control plane join command produced by the master initialization.

3. Join master03 to the cluster

kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane


4. Load environment variables

Load the environment variables on master02 and master03:

[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile

[root@master03 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master03 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master03 ~]# source .bash_profile

This step makes it possible to run kubectl commands on master02 and master03 as well.

5. Check the cluster nodes

[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system

All control plane nodes are in the Ready state and all system components are running normally.

IX. Join the Worker Nodes to the Cluster

1. Join work01 to the cluster

kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 

Run the worker node join command produced by the master initialization.
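If the workers are joined later, note that the bootstrap token printed by kubeadm init expires after 24 hours by default. A new join command can be generated on master01 at any time (optional, not part of the original steps):

# Print a fresh "kubeadm join" command for worker nodes, creating a new token.
kubeadm token create --print-join-command

# List existing bootstrap tokens and their remaining TTL.
kubeadm token list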

2. Join work02 to the cluster

Run the same worker join command on work02.

3. Join work03 to the cluster

Run the same worker join command on work03.

4. Check the cluster nodes

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   44m     v1.16.4
master02   Ready    master   33m     v1.16.4
master03   Ready    master   23m     v1.16.4
work01     Ready    <none>   11m     v1.16.4
work02     Ready    <none>   7m50s   v1.16.4
work03     Ready    <none>   3m4s    v1.16.4

X. Client Configuration

1. Configure the Kubernetes yum repository

1.1 Add the Kubernetes repository

[root@client ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

1.2 Refresh the yum cache

[root@client ~]# yum clean all
[root@client ~]# yum -y makecache

2. Install kubectl

[root@client ~]# yum install -y kubectl-1.16.4

The installed version should match the cluster version.

3. Command completion

3.1 Install bash-completion

[root@client ~]# yum -y install bash-completion

3.2 Load bash-completion

[root@client ~]# source /etc/profile.d/bash_completion.sh

3.3 Copy admin.conf

[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp 172.27.34.3:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@client ~]# source .bash_profile

3.4 Load kubectl completion

[root@client ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@client ~]# source .bash_profile

4. Test kubectl

[root@client ~]# kubectl get nodes
[root@client ~]# kubectl get cs
[root@client ~]# kubectl get po -o wide -n kube-system

XI. Dashboard Deployment

All steps in this part are performed on the client.

1. Download the yaml

[root@client ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml 

If the connection times out, retry a few times. recommended.yaml has been uploaded and can also be downloaded from the link at the end of this article.

2. Configure the yaml

2.1 Change the image repository

[root@client ~]# sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml 

Because the default image repository is not reachable, the images are switched to the Aliyun mirror.

2.2 External access

[root@client ~]# sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml 

This configures a NodePort so the Dashboard can be reached externally at https://NodeIp:NodePort ; the port here is 30001.
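An optional way to confirm what the sed command produced is to print the Service ports section of the edited file:

# Show the edited Service ports block; expect "nodePort: 30001" and "type: NodePort".
grep -n -B2 -A3 'targetPort: 8443' recommended.yaml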

2.3 Add an administrator account

[root@client ~]# cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF

This creates a cluster-admin service account used to log in to the Dashboard.

3. Deploy and access

3.1 Deploy the Dashboard

[root@client ~]# kubectl apply -f recommended.yaml

3.2 Check the status

[root@client ~]# kubectl get all -n kubernetes-dashboard

3.3 Retrieve the login token

[root@client ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin

The token is:

eyJhbGciOiJSUzI1NiIsImtpZCI6Ikd0NHZ5X3RHZW5pNDR6WEdldmlQUWlFM3IxbGM3aEIwWW1IRUdZU1ZKdWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNms1ZjYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjk1NDE0ODEtMTUyZS00YWUxLTg2OGUtN2JmMWU5NTg3MzNjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.LAe7N8Q6XR3d0W8w-r3ylOKOQHyMg5UDfGOdUkko_tqzUKUtxWQHRBQkowGYg9wDn-nU9E-rkdV9coPnsnEGjRSekWLIDkSVBPcjvEd0CVRxLcRxP6AaysRescHz689rfoujyVhB4JUfw1RFp085g7yiLbaoLP6kWZjpxtUhFu-MKh1NOp7w4rT66oFKFR-_5UbU3FoetAFBmHuZ935i5afs8WbNzIkM6u9YDIztMY3RYLm9Zs4KxgpAmqUmBSlXFZNW2qg6hxBqDijW_1bc0V7qJNt_GXzPs2Jm1trZR6UU1C2NAJVmYBu9dcHYtTCgxxkWKwR0Qd2bApEUIJ5Wug
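If you prefer to grab just the token string, a one-liner such as the following can be used (a sketch; it assumes the dashboard-admin ServiceAccount created above and decodes its first secret):

# Look up the secret bound to the dashboard-admin ServiceAccount and decode its token.
SECRET=$(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}')
kubectl -n kubernetes-dashboard get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d; echo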

3.4 Access the Dashboard

Open https://VIP:30001 in Firefox.

Accept the security risk warning, then log in using the token.

The Dashboard provides cluster management, workloads, service discovery and load balancing, storage, configuration, and log viewing features.

XII. Cluster High-Availability Test

All steps in this part are performed on the client.

1. Locate the components

Find the node hosting the apiserver from the VIP, and find the nodes hosting the scheduler and controller-manager from the leader-elect annotations:

[root@master01 ~]# ip a|grep 130
    inet 172.27.34.130/32 scope global ens160

[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_6caf8003-052f-451d-8dce-4516825213ad","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:23Z","renewTime":"2020-01-03T07:57:55Z","leaderTransitions":2}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_720d65f9-e425-4058-95d7-e5478ac951f7","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:20Z","renewTime":"2020-01-03T07:58:03Z","leaderTransitions":2}'

Component            Node
apiserver            master01
controller-manager   master01
scheduler            master01

2. Shut down master01

2.1 Power off master01 to simulate an outage

[root@master01 ~]# init 0

2.2 Check the components

The VIP has moved to master02:

[root@master02 ~]# ip a|grep 130
    inet 172.27.34.130/32 scope global ens160

The controller-manager and scheduler leaders have also moved:

[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master02_b3353e8f-a02f-4322-bf17-2f596cd25ba5","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:42Z","renewTime":"2020-01-03T08:06:36Z","leaderTransitions":3}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master03_e0a2ec66-c415-44ae-871c-18c73258dc8f","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:56Z","renewTime":"2020-01-03T08:06:45Z","leaderTransitions":3}'

Component            Node
apiserver            master02
controller-manager   master02
scheduler            master03

2.3 Cluster functionality test

Query:

[root@client ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   22h   v1.16.4
master02   Ready      master   22h   v1.16.4
master03   Ready      master   22h   v1.16.4
work01     Ready      <none>   22h   v1.16.4
work02     Ready      <none>   22h   v1.16.4
work03     Ready      <none>   22h   v1.16.4

master01 is in the NotReady state.

Create a new pod:

[root@client ~]# more nginx-master.yaml
apiVersion: apps/v1          # the manifest follows the apps/v1 Kubernetes API
kind: Deployment             # the resource type to create is a Deployment
metadata:                    # metadata of the resource
  name: nginx-master         # name of the Deployment
spec:                        # Deployment specification
  selector:
    matchLabels:
      app: nginx
  replicas: 3                # 3 replicas
  template:                  # Pod template
    metadata:                # Pod metadata
      labels:                # labels
        app: nginx           # label key/value: app=nginx
    spec:                    # Pod specification
      containers:
      - name: nginx          # container name
        image: nginx:latest  # image used to create the container
[root@client ~]# kubectl apply -f nginx-master.yaml
deployment.apps/nginx-master created
[root@client ~]# kubectl get po -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-master-75b7bfdb6b-lnsfh   1/1     Running   0          4m44s   10.244.5.6   work03   <none>           <none>
nginx-master-75b7bfdb6b-vxfg7   1/1     Running   0          4m44s   10.244.3.3   work01   <none>           <none>
nginx-master-75b7bfdb6b-wt9kc   1/1     Running   0          4m44s   10.244.4.5   work02   <none>           <none>

2.4 Conclusion

When one control plane node goes down, the VIP fails over and all cluster functions remain unaffected.

3. Shut down master02

With master01 still down, also shut down master02 and test whether the cluster can still serve requests.

3.1 Shut down master02:

[root@master02 ~]# init 0

3.2 Check the VIP:

[root@master03 ~]# ip a|grep 130
    inet 172.27.34.130/32 scope global ens160

The VIP has drifted to the only remaining control plane node, master03.

3.3 Cluster functionality test

[root@client ~]# kubectl get nodes
Error from server: etcdserver: request timed out
[root@client ~]# kubectl get nodes
The connection to the server 172.27.34.130:6443 was refused - did you specify the right host or port?

With two of the three etcd members down, the etcd cluster loses quorum and the whole Kubernetes cluster can no longer serve requests.
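For reference, etcd member health can be checked from inside one of the kubeadm-managed etcd pods before or during such a test. A sketch, assuming the default static pod name etcd-master03 and the standard kubeadm certificate paths mounted into the pod:

# Run etcdctl inside the static etcd pod on master03 (kubeadm names the pod etcd-<node name>).
kubectl -n kube-system exec etcd-master03 -- sh -c \
  "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
   --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt \
   --key=/etc/kubernetes/pki/etcd/server.key endpoint health"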

 

 

For a single-node Kubernetes cluster deployment, see: "Deploying a k8s (v1.14.2) cluster on CentOS 7.6"
For another HA deployment approach, see: "Deploying a k8s v1.16.4 HA cluster with LVS + keepalived"

 

All scripts and configuration files used in this article have been uploaded to: Centos7.6-install-k8s-v1.16.4-HA-cluster

 

 
