Kubernetes High-Availability Cluster with kubeadm


1. GitHub repository containing the YAML files used below:

https://github.com/luckylucky421/kubernetes1.17.3/tree/master

Clone or download the YAML files used in this walkthrough from the GitHub repository above, then upload them to the master node of your Kubernetes cluster; copying and pasting them directly can corrupt the formatting.
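For example (a sketch, assuming master1 has the IP 192.168.0.6 configured below and that you work from /root):

git clone https://github.com/luckylucky421/kubernetes1.17.3.git
scp -r kubernetes1.17.3 root@192.168.0.6:/root/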


I. Preparing the Lab Environment

1. Prepare four CentOS 7 virtual machines for the Kubernetes cluster.

II. Initializing the Environment

1. Configure static IPs

1.1 Configure networking on the master1 node (the other three nodes are similar)

Edit /etc/sysconfig/network-scripts/ifcfg-ens33 so it reads:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.0.6
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DEFROUTE=yes
NAME=ens33
DEVICE=ens33
ONBOOT=yes
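After saving the file, restart the network service and confirm the address took effect (this assumes the legacy network service, the CentOS 7 default):

systemctl restart network
ip addr show ens33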

2. Update the yum repositories (run on every node)

(1) Back up the original repo file and switch to the Aliyun mirror

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

yum makecache fast

(2) Configure the yum repository needed to install Kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

yum clean all
yum makecache fast
yum -y update
yum -y install yum-utils device-mapper-persistent-data lvm2

(3) Add the Docker CE repository

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum clean all
yum makecache fast
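As a quick check, confirm that both new repositories are enabled:

yum repolist enabled | grep -Ei 'kubernetes|docker-ce'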

3. Install base packages (run on every node)

yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate

4. Disable firewalld (run on every node). CentOS 7 uses firewalld by default; stop the firewalld service and disable it:

systemctl stop firewalld && systemctl disable firewalld

5. Install iptables (run on every node). If you are not comfortable with firewalld you can install iptables instead; this step is optional, depending on your needs.

5.1 Install iptables

yum install iptables-services -y

5.2 Stop and disable iptables

service iptables stop && systemctl disable iptables

6. Time synchronization (run on every node)

6.1 Sync the time now

ntpdate cn.pool.ntp.org

6.2 Add a scheduled task that syncs once per hour

1) crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org

2) Restart the crond service:
service crond restart

7. Disable SELinux (run on every node)

sed -i  's/SELINUX=enforcing/SELINUX=disabled/'  /etc/sysconfig/selinux
sed -i  's/SELINUX=enforcing/SELINUX=disabled/g'  /etc/selinux/config
reboot -f
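After the reboot, getenforce should print Disabled:

getenforce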

8. Disable swap (run on every node)

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
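To confirm swap is off, check that the Swap line of free reports 0:

free -m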

9. Set kernel parameters (run on every node)

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
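Note: the net.bridge.* keys exist only while the br_netfilter kernel module is loaded. If sysctl --system reports them missing, load the module first (and persist it under /etc/modules-load.d/ if desired):

modprobe br_netfilter
sysctl --system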

10. Set the hostname (repeat on each node with its own name)

hostnamectl set-hostname master1

11. Configure the hosts file (run on every node)

Add the following lines to /etc/hosts:

192.168.0.6  master1
192.168.0.16 master2
192.168.0.26 master3
192.168.0.56 node1

12. Configure passwordless SSH from master1 to node1, and from master1 to master2 and master3

On master1:

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub root@master2
ssh-copy-id -i .ssh/id_rsa.pub root@master3
ssh-copy-id -i .ssh/id_rsa.pub root@node1
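To verify the passwordless login, each of these should print the remote hostname without asking for a password:

for h in master2 master3 node1; do ssh root@$h hostname; done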

III. Installing the Kubernetes 1.18.2 High-Availability Cluster

1. Install Docker 19.03 (run on every node)

1.1 List the available Docker versions

yum list docker-ce --showduplicates |sort -r

1.2 Install version 19.03.7

yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker

1.3 Edit the Docker daemon configuration

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

1.4 Restart Docker so the configuration takes effect

systemctl daemon-reload && systemctl restart docker
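You can confirm the new settings took effect; the output should show Cgroup Driver: systemd and Storage Driver: overlay2:

docker info | grep -Ei 'cgroup driver|storage driver'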

1.5 Make bridged traffic pass through iptables, and persist the kernel settings

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 >/proc/sys/net/bridge/bridge-nf-call-ip6tables

echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1

""" > /etc/sysctl.conf

sysctl -p

1.6 Enable IPVS. Without IPVS, kube-proxy falls back to iptables mode, which is less efficient; that is why the official documentation recommends enabling the IPVS kernel modules.

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

2. Install Kubernetes 1.18.2

2.1 Install kubeadm and kubelet on master1, master2, master3, and node1

yum install kubeadm-1.18.2 kubelet-1.18.2 -y

systemctl enable kubelet

2.2 Upload the image archives to master1, master2, master3, and node1, then load them manually with docker load -i as shown below; the archives are in the Baidu netdisk.

docker load -i   1-18-kube-apiserver.tar.gz
docker load -i   1-18-kube-scheduler.tar.gz
docker load -i   1-18-kube-controller-manager.tar.gz
docker load -i   1-18-pause.tar.gz
docker load -i   1-18-cordns.tar.gz
docker load -i   1-18-etcd.tar.gz
docker load -i   1-18-kube-proxy.tar.gz
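After loading, verify the images are present locally:

docker images | grep k8s.gcr.io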

Notes:

pause is version 3.2; the image used is k8s.gcr.io/pause:3.2

etcd is version 3.4.3; the image used is k8s.gcr.io/etcd:3.4.3-0

coredns is version 1.6.7; the image used is k8s.gcr.io/coredns:1.6.7

apiserver, scheduler, controller-manager, and kube-proxy are version 1.18.2; the images used are:

k8s.gcr.io/kube-apiserver:v1.18.2

k8s.gcr.io/kube-controller-manager:v1.18.2

k8s.gcr.io/kube-scheduler:v1.18.2

k8s.gcr.io/kube-proxy:v1.18.2

2.3 Deploy keepalived+LVS for master-node high availability, load balancing the apiserver

(1) Install keepalived+LVS on each master node

yum install -y socat keepalived ipvsadm conntrack

(2) Edit master1's keepalived.conf as follows

vim  /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

(3) Edit master2's keepalived.conf as follows

vim /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

(4) Edit master3's keepalived.conf as follows

vim /etc/keepalived/keepalived.conf

global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 30
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Important: read this, or you will hit a serious pitfall in production.

keepalived must be configured with state BACKUP and in non-preemptive mode (nopreempt). Suppose master1 goes down: after it boots back up, the VIP does not automatically float back to it, which keeps the cluster healthy. When master1 has just started, the apiserver and the other components are not yet running; if the VIP floated back to master1 at that moment, the whole cluster would go down. That is why non-preemptive mode is required.

Start keepalived in the order master1 -> master2 -> master3, running the following on each node in turn:

systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived 
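To verify the failover setup, check that the VIP 192.168.0.199 is bound to ens33 on exactly one master (initially master1, which has the highest priority):

ip addr show ens33 | grep 192.168.0.199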

2.4 Initialize the Kubernetes cluster on master1. Run the following on master1.

If you uploaded the images to every node by hand as described in section 2.2, initialize with the yaml file below. Stick to that method (upload the archives to each machine and load them manually) so the rest of the walkthrough works as written.

cat kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: 192.168.0.199:6443
apiServer:
 certSANs:
 - 192.168.0.6
 - 192.168.0.16
 - 192.168.0.26
 - 192.168.0.56
 - 192.168.0.199
networking:
 podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind:  KubeProxyConfiguration
mode: ipvs
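Before initializing, you can optionally ask kubeadm which images this configuration will use and compare them against the list loaded in section 2.2:

kubeadm config images list --config kubeadm-config.yaml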

kubeadm init --config kubeadm-config.yaml

Note: if you did not upload the images to each node per section 2.2, use the yaml below instead. It adds one parameter, imageRepository: registry.aliyuncs.com/google_containers, which pulls from the Aliyun mirror that is directly reachable. That approach is simpler, but treat it as background only and do not use it here; with it, manually joining nodes to the cluster later runs into problems.

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: 192.168.0.199:6443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
 certSANs:
 - 192.168.0.6
 - 192.168.0.16
 - 192.168.0.26
 - 192.168.0.56
 - 192.168.0.199
networking:
 podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind:  KubeProxyConfiguration
mode: ipvs

When kubeadm init --config kubeadm-config.yaml completes successfully, it prints output like the following:

To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.0.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c

Note: save your kubeadm join ... command; it is what joins master2, master3, and node1 to the cluster. The token and hash are different on every run, so record the output of your own init; it is used below.

2.5 Run the following on master1; without it you have no permission to manage cluster resources with kubectl

mkdir -p $HOME/.kube
sudo cp -i  /etc/kubernetes/admin.conf  $HOME/.kube/config
sudo chown $(id -u):$(id -g)  $HOME/.kube/config

Then run on master1:

kubectl get nodes

The output shows that master1 is NotReady:

NAME          STATUS      ROLES        AGE     VERSION
master1   NotReady   master   8m11s   v1.18.2

kubectl get pods -n kube-system

The output also shows the coredns pods stuck in Pending:

coredns-7ff77c879f-j48h6        0/1     Pending   0          3m16s
coredns-7ff77c879f-lrb77        0/1     Pending   0          3m16s

The node is NotReady and coredns is Pending because no network plugin is installed yet; we need to install calico or flannel. Next we install the calico network plugin on master1:

Calico needs the images quay.io/calico/cni:v3.5.3 and quay.io/calico/node:v3.5.3; the archives are in the Baidu netdisk linked at the start of the article.

Upload the two image archives to every node and load them with docker load -i:

docker load -i   cni.tar.gz
docker load -i   calico-node.tar.gz

Then run the following on master1:

kubectl apply -f calico.yaml

The contents of calico.yaml can be copied from the address below:

https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml

If that link does not open, go to the GitHub address below, clone or download the repository, extract it, and upload the file to master1:

https://github.com/luckylucky421/kubernetes1.17.3/tree/master

Run on master1:

kubectl get nodes

The STATUS is now Ready:

NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   98m   v1.18.2

kubectl get pods -n kube-system

coredns is now Running as well, which means the calico installation on master1 is complete:

NAME                              READY   STATUS    RESTARTS   AGE
calico-node-6rvqm                 1/1     Running   0          17m
coredns-7ff77c879f-j48h6          1/1     Running   0          97m
coredns-7ff77c879f-lrb77          1/1     Running   0          97m
etcd-master1                      1/1     Running   0          97m
kube-apiserver-master1            1/1     Running   0          97m
kube-controller-manager-master1   1/1     Running   0          97m
kube-proxy-njft6                  1/1     Running   0          97m
kube-scheduler-master1            1/1     Running   0          97m

2.6 Copy the certificates from master1 to master2 and master3

(1) Create the certificate directories on master2 and master3

cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/

(2) Copy the certificates from master1 to master2 and master3. Run the following on master1, copying the scp commands one line at a time to avoid mistakes:

scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/   (this requires the /etc/kubernetes/pki/etcd/ directory created in step (1))
scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master3:/etc/kubernetes/pki/etcd/
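Equivalently, the same copy can be scripted in one loop (just a compact sketch of the scp commands above):

for host in master2 master3; do
  scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} $host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} $host:/etc/kubernetes/pki/etcd/
done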

After the certificates are copied, run the kubeadm join command on master2 and master3 to join them to the cluster; copy your own command from your init output (the sample below shows the shape, but your IP, token, and hash will differ):

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.24.160.194:6443 --token qubui1.kw617wpcc9vhjks0 \
    --discovery-token-ca-cert-hash sha256:849f3089e1702e557444637a9e2375c474bab2c61f168774c7c3d67124d42c25 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.24.160.194:6443 --token qubui1.kw617wpcc9vhjks0 \
    --discovery-token-ca-cert-hash sha256:849f3089e1702e557444637a9e2375c474bab2c61f168774c7c3d67124d42c25 

--control-plane: this flag means the node joins the cluster as a master (control-plane) node.

On master2 and master3 run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.7 Join node1 to the cluster (run on node1):

kubeadm join 172.24.160.194:6443 --token qubui1.kw617wpcc9vhjks0 \
    --discovery-token-ca-cert-hash sha256:849f3089e1702e557444637a9e2375c474bab2c61f168774c7c3d67124d42c25  

Note: the kubeadm join command above is the one generated during initialization in section 2.4.

2.8 Check the cluster node status on master1

kubectl get nodes  

The output shows that node1 has joined as well; this completes the multi-master high-availability Kubernetes cluster.

The hostnames were not set uniformly; change them after joining, following https://mp.weixin.qq.com/s/Na5Ic3gRKS6YjVDQ_6UJRg

2.9 Install traefik

Upload the traefik image archive to every node and load it with docker load -i as below; the archive is in the Baidu netdisk linked at the start of the article.

docker load -i  traefik_1_7_9.tar.gz

The image traefik uses is k8s.gcr.io/traefik:1.7.9

1) Generate the traefik certificate (on master1):

mkdir  ~/ikube/tls/ -p

echo """

[req]

distinguished_name = req_distinguished_name

prompt = yes

[ req_distinguished_name ]

countryName                     = Country Name (2 letter code)

countryName_value               = CN

stateOrProvinceName             = State or Province Name (full name)

stateOrProvinceName_value       = Beijing

localityName                    = Locality Name (eg, city)

localityName_value              = Haidian

organizationName                = Organization Name (eg, company)

organizationName_value          = Channelsoft

organizationalUnitName          = Organizational Unit Name (eg, section)

organizationalUnitName_value    = R & D Department

commonName                      = Common Name (eg, your name or your server\'s hostname)

commonName_value                = *.multi.io

emailAddress                    = Email Address

emailAddress_value              = lentil1016@gmail.com

""" > ~/ikube/tls/openssl.cnf

openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key

kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
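Confirm the secret was created before moving on:

kubectl get secret ssl -n kube-system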

2) Apply the yaml file to create traefik:

kubectl apply -f traefik.yaml

The contents of traefik.yaml can be copied from the address below:

https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/traefik.yaml

3) Check whether traefik deployed successfully:

kubectl get pods -n kube-system

Output like the following means the deployment succeeded:

traefik-ingress-controller-csbp8   1/1     Running   0     5s
traefik-ingress-controller-hqkwf   1/1     Running   0     5s
traefik-ingress-controller-wtjqd   1/1     Running   0     5s

3. Install kubernetes-dashboard 2.0 (the Kubernetes web UI)

Upload the kubernetes-dashboard image archives to every node and load them with docker load -i as below; the archives are in the Baidu netdisk linked at the start of the article.

docker load -i dashboard_2_0_0.tar.gz
docker load -i metrics-scrapter-1-0-1.tar.gz

On master1 run:

kubectl apply -f kubernetes-dashboard.yaml

The contents of kubernetes-dashboard.yaml can be copied from https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/kubernetes-dashboard.yaml

If that address is unreachable, go to the link below, clone or download the branch, and upload the yaml file to master1 by hand:

https://github.com/luckylucky421/kubernetes1.17.3

Check whether the dashboard installed successfully:

kubectl get pods -n kubernetes-dashboard

Output like the following means the dashboard installed successfully:

NAME                                         READY   STATUS    RESTARTS   AGE  
dashboard-metrics-scraper-694557449d-8xmtf   1/1     Running   0          60s   
kubernetes-dashboard-5f98bdb684-ph9wg        1/1     Running   2          60s   

Check the dashboard's front-end service:

kubectl get svc -n kubernetes-dashboard

The output looks like this:

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE   
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP   50s   
kubernetes-dashboard        ClusterIP   10.105.253.155   <none>        443/TCP    50s 

Change the service type to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort, then save and quit.
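If you prefer a non-interactive change over kubectl edit, a patch achieves the same thing:

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'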

kubectl get svc -n kubernetes-dashboard

The output now looks like this:

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP        3m59s
kubernetes-dashboard        NodePort    10.105.253.155   <none>        443:31175/TCP   4m

The service type is now NodePort, so the dashboard is reachable at the master1 node's IP on port 31175. In my environment the address is

https://192.168.0.6:31175/

The dashboard login page appears.

3.1 Log in to the dashboard with the default token specified by the yaml file

1) List the secrets in the kubernetes-dashboard namespace

kubectl get secret -n kubernetes-dashboard

The output looks like this:

NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3      5m27s

2) Find the secret that carries the token, kubernetes-dashboard-token-ngcmg

kubectl  describe  secret  kubernetes-dashboard-token-ngcmg  -n   kubernetes-dashboard

The output includes:

...

...
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA

Note the value after token:; paste it into the token field on the browser's login screen to log in:

eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
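Instead of copying it out of kubectl describe, the token can also be extracted in one line (assuming the kubernetes-dashboard ServiceAccount created by the yaml):

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa kubernetes-dashboard -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode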

 

Click Sign in to log in. By default only the contents of the default namespace are visible.

3.2 Create an administrator token that can view every namespace

kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard

1) List the secrets in the kubernetes-dashboard namespace again

kubectl get secret -n kubernetes-dashboard

The output looks like this:

NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3   

2) Find the secret that carries the token, kubernetes-dashboard-token-ngcmg

kubectl  describe  secret  kubernetes-dashboard-token-ngcmg  -n   kubernetes-dashboard

The output includes:

...

...
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA

Note the value after token:; paste it into the token field on the browser's login screen to log in, now with access to every namespace:

eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA

4. Install the metrics monitoring add-ons

Upload the metrics-server-amd64_0_3_1.tar.gz and addon.tar.gz image archives to every node and load them with docker load -i:

docker load -i metrics-server-amd64_0_3_1.tar.gz
docker load -i addon.tar.gz

metrics-server is version 0.3.1; the image used is k8s.gcr.io/metrics-server-amd64:v0.3.1

addon-resizer is version 1.8.4; the image used is k8s.gcr.io/addon-resizer:1.8.4

On the master1 node run:

kubectl apply -f metrics.yaml

The contents of metrics.yaml can be copied from the address below:

https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/metrics.yaml

If that address is unreachable, go to the link below, clone or download the branch, and upload the yaml file to master1 by hand:

https://github.com/luckylucky421/kubernetes1.17.3

With all of the components above installed, run kubectl get pods -n kube-system -o wide to check them; a STATUS of Running means a component is healthy, as shown below:

NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-6rvqm                       1/1     Running   10         14h
calico-node-cbrvw                       1/1     Running   4          14h
calico-node-l6628                       0/1     Running   0          9h
coredns-7ff77c879f-j48h6                1/1     Running   2          16h
coredns-7ff77c879f-lrb77                1/1     Running   2          16h
etcd-master1                            1/1     Running   37         16h
etcd-master2                            1/1     Running   7          9h
kube-apiserver-master1                  1/1     Running   52         16h
kube-apiserver-master2                  1/1     Running   11         14h
kube-controller-manager-master1         1/1     Running   42         16h
kube-controller-manager-master2         1/1     Running   13         14h
kube-proxy-dq6vc                        1/1     Running   2          14h
kube-proxy-njft6                        1/1     Running   2          16h
kube-proxy-stv52                        1/1     Running   0          9h
kube-scheduler-master1                  1/1     Running   37         16h
kube-scheduler-master2                  1/1     Running   15         14h
kubernetes-dashboard-85f499b587-dbf72   1/1     Running   1          8h
metrics-server-8459f8db8c-5p59m         2/2     Running   0          33s
traefik-ingress-controller-csbp8        1/1     Running   0          8h
traefik-ingress-controller-hqkwf        1/1     Running   0          8h
traefik-ingress-controller-wtjqd        1/1     Running   0          8h
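With metrics-server running, you can verify the pipeline end to end; after a minute or two kubectl top should report CPU and memory usage for every node:

kubectl top nodes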

 




 

