Building a single-master, multi-node Kubernetes cluster with kubeadm
1. Environment Planning
1.1 Lab Environment Planning
K8s cluster role | IP | Hostname | Installed components |
---|---|---|---|
Control plane node | 192.168.40.180 | k8s-master1 | apiserver, controller-manager, scheduler, etcd, docker, calico, kube-proxy |
Worker node | 192.168.40.181 | k8s-node1 | kubelet, kube-proxy, docker, calico, coredns |
Worker node | 192.168.40.182 | k8s-node2 | kubelet, kube-proxy, docker, calico, coredns |
Lab environment:
- OS: CentOS 7.6
- Resources: 4 GiB RAM / 4 vCPU / 100 GB disk
- Network: VMware NAT mode
Kubernetes network planning:
- Kubernetes version: v1.20.6
- Pod CIDR: 10.244.0.0/16
- Service CIDR: 10.10.0.0/16 (note: the kubeadm init command below does not pass --service-cidr, so the cluster actually uses the kubeadm default of 10.96.0.0/12)
1.2 Node Initialization
1) Configure a static IP address
# Give each VM or physical host a static IP so the address does not change after a reboot. Example: setting the static IP on the master1 host
~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.40.180 # adjust per the plan above
NETMASK=255.255.255.0
GATEWAY=192.168.40.2
DNS1=223.5.5.5
# Restart the network service
~]# systemctl restart network
# Test network connectivity
~]# ping baidu.com
PING baidu.com (39.156.69.79) 56(84) bytes of data.
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=1 ttl=128 time=63.2 ms
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=2 ttl=128 time=47.3 ms
2) Set the hostname
~]# hostnamectl set-hostname <hostname> && bash
3) Configure the hosts file
# On all machines
cat >> /etc/hosts << EOF
192.168.40.180 k8s-master1
192.168.40.181 k8s-node1
192.168.40.182 k8s-node2
EOF
# Test
~]# ping k8s-master1
PING k8s-master1 (192.168.40.180) 56(84) bytes of data.
64 bytes from k8s-master1 (192.168.40.180): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from k8s-master1 (192.168.40.180): icmp_seq=2 ttl=64 time=0.047 ms
4) Set up passwordless SSH login between hosts
# Generate an SSH key pair: press Enter at every prompt and leave the passphrase empty
ssh-keygen -t rsa
# Install the local SSH public key into the corresponding account on each remote host
ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node2
5) Stop and disable firewalld
systemctl stop firewalld && systemctl disable firewalld
6) Disable SELinux
# Disable temporarily
setenforce 0
# Disable permanently (takes effect after a reboot)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Check the current mode
getenforce
7) Turn off the swap partition
# Turn off temporarily
swapoff -a
# Turn off permanently: comment out the swap entry in /etc/fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Note: on a cloned VM, also delete the UUID line
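A quick way to confirm swap is really off (the Swap line should show 0 everywhere):
~]# free -m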
8) Adjust kernel parameters
# 1. Load the br_netfilter module
modprobe br_netfilter
# 2. Verify the module is loaded
lsmod |grep br_netfilter
# 3. Set the kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# 4. Apply the new parameters
sysctl -p /etc/sysctl.d/k8s.conf
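Note that modprobe alone does not persist across reboots. If you want br_netfilter loaded automatically at boot, one option (an extra step not in the original procedure) is to add a modules-load entry:
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF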
9) Configure the Aliyun yum repo
# Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Download the new CentOS-Base.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Rebuild the yum cache
yum clean all && yum makecache
10) Configure time synchronization
# Install the ntpdate command
yum install ntpdate -y
# Sync with a public NTP pool
ntpdate cn.pool.ntp.org
# Run the time sync as an hourly cron job
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
# Restart the crond service
service crond restart
11) Install iptables
# Install the iptables service
yum install iptables-services -y
# Stop and disable the iptables service
service iptables stop && systemctl disable iptables
# Flush the firewall rules
iptables -F
12) Enable IPVS
Without IPVS, kube-proxy falls back to iptables for packet forwarding, which is less efficient, so the upstream documentation recommends enabling IPVS.
# Create the ipvs.modules file
~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  # only load the module if it exists for the running kernel
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
# Run the script
~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
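Loading these modules only makes IPVS available to the kernel; with the plain kubeadm setup used below, kube-proxy still runs in iptables mode. If you later want kube-proxy to actually use IPVS, one common approach (not performed in this guide) is to change its mode in the kube-proxy ConfigMap once the cluster is up and recreate the kube-proxy pods:
[root@k8s-master1 ~]# kubectl edit configmap kube-proxy -n kube-system    # set mode: "ipvs"
[root@k8s-master1 ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy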
13) Install base packages
~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet rsync
14) Install docker-ce
~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
~]# yum install docker-ce docker-ce-cli containerd.io -y
~]# systemctl start docker && systemctl enable docker.service && systemctl status docker
15) Configure Docker registry mirrors and the cgroup driver
# Note: change Docker's cgroup driver to systemd (the default is cgroupfs); the kubelet uses systemd by default, and the two must match
~]# tee /etc/docker/daemon.json << 'EOF'
{
"registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
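To confirm the cgroup driver change took effect, you can check docker info; it should report "Cgroup Driver: systemd":
~]# docker info | grep -i cgroup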
2. Deploying the Cluster with kubeadm
2.1 Configure the Kubernetes yum repo
[root@k8s-master1 ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
# Copy the Kubernetes repo file from k8s-master1 to k8s-node1 and k8s-node2
[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node1:/etc/yum.repos.d/
[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node2:/etc/yum.repos.d/
2.2 Install the packages needed for initialization
[root@k8s-master1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-master1 ~]# systemctl enable kubelet && systemctl start kubelet
[root@k8s-master1 ~]# systemctl status kubelet
[root@k8s-node1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-node1 ~]# systemctl enable kubelet && systemctl start kubelet
[root@k8s-node1 ~]# systemctl status kubelet
[root@k8s-node2 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-node2 ~]# systemctl enable kubelet && systemctl start kubelet
[root@k8s-node2 ~]# systemctl status kubelet
2.3 Initialize the Kubernetes Cluster with kubeadm
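Optionally, the control-plane images can be pulled ahead of time so the init step itself runs faster; kubeadm supports this directly (same repository and version as used below):
[root@k8s-master1 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.6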
1) Run kubeadm init
[root@k8s-master1 ~]# kubeadm init --kubernetes-version=1.20.6 --apiserver-advertise-address=192.168.40.180 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.20.6
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 92.005918 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jybm37.w3g3mx8qc73hypm3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.40.180:6443 --token jybm37.w3g3mx8qc73hypm3 \
--discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
2) Configure the kubectl config file
[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 2m11s v1.20.6
# The node is still NotReady because no network (CNI) plugin has been installed yet.
2.4 Scale Out the Cluster: Add the First Worker Node
# 1. On k8s-master1, print the join command:
[root@k8s-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.40.180:6443 --token mwk781.dqzihv2yt97f4v6v --discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
# 2. Join k8s-node1 to the cluster:
[root@k8s-node1 ~]# kubeadm join 192.168.40.180:6443 --token mwk781.dqzihv2yt97f4v6v --discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# 3. On k8s-master1, check the cluster nodes
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 11m v1.20.6
k8s-node1 NotReady <none> 58s v1.20.6
2.5 Scale Out the Cluster: Add the Second Worker Node
# 1. On k8s-master1, print the join command:
[root@k8s-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.40.180:6443 --token lz5xqh.b9u5o7o0ndn25gn1 --discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
# 2. Join k8s-node2 to the cluster:
[root@k8s-node2 ~]# kubeadm join 192.168.40.180:6443 --token lz5xqh.b9u5o7o0ndn25gn1 --discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# 3. On k8s-master1, check the cluster nodes
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 13m v1.20.6
k8s-node1 NotReady <none> 3m6s v1.20.6
k8s-node2 NotReady <none> 22s v1.20.6
# 4. Label the worker nodes
[root@k8s-master1 ~]# kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker
node/k8s-node1 labeled
[root@k8s-master1 ~]# kubectl label node k8s-node2 node-role.kubernetes.io/worker=worker
node/k8s-node2 labeled
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 14m v1.20.6
k8s-node1 NotReady worker 3m48s v1.20.6
k8s-node2 NotReady worker 64s v1.20.6
2.6 Deploy Calico
Manifest URL: https://docs.projectcalico.org/manifests/calico.yaml
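If k8s-master1 has Internet access, the manifest can be downloaded straight from that URL instead of uploading a local copy:
[root@k8s-master1 ~]# wget https://docs.projectcalico.org/manifests/calico.yaml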
# Install the Calico network plugin from the manifest on k8s-master1.
[root@k8s-master1 ~]# kubectl apply -f calico.yaml
[root@k8s-master1 ~]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6949477b58-9t9k8 1/1 Running 0 34s 10.244.159.129 k8s-master1 <none> <none>
calico-node-66b47 1/1 Running 0 35s 192.168.40.180 k8s-master1 <none> <none>
calico-node-6svrr 1/1 Running 0 35s 192.168.40.182 k8s-node2 <none> <none>
calico-node-zgnkl 1/1 Running 0 35s 192.168.40.181 k8s-node1 <none> <none>
coredns-7f89b7bc75-4jvmv 1/1 Running 0 28m 10.244.36.65 k8s-node1 <none> <none>
coredns-7f89b7bc75-zr5mf 1/1 Running 0 28m 10.244.169.129 k8s-node2 <none> <none>
etcd-k8s-master1 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
kube-apiserver-k8s-master1 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
kube-controller-manager-k8s-master1 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
kube-proxy-8fzc4 1/1 Running 0 15m 192.168.40.182 k8s-node2 <none> <none>
kube-proxy-n2v4j 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
kube-proxy-r9ccp 1/1 Running 0 17m 192.168.40.181 k8s-node1 <none> <none>
kube-scheduler-k8s-master1 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 28m v1.20.6
k8s-node1 Ready worker 18m v1.20.6
k8s-node2 Ready worker 15m v1.20.6
# Test pod network connectivity
[root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping baidu.com
PING baidu.com (39.156.69.79): 56 data bytes
64 bytes from 39.156.69.79: seq=0 ttl=127 time=43.188 ms
64 bytes from 39.156.69.79: seq=1 ttl=127 time=38.878 ms
2.7 Test Deployment: a Tomcat Service
[root@k8s-master1 work]# cat tomcat.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  containers:
  - name: tomcat-pod-java
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
[root@k8s-master1 work]# cat tomcat-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30080
  selector:
    app: myapp
    env: dev
[root@k8s-master1 work]# kubectl apply -f tomcat.yaml
pod/demo-pod created
[root@k8s-master1 work]# kubectl apply -f tomcat-service.yaml
service/tomcat created
[root@k8s-master1 work]# kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-pod 2/2 Running 0 102s
[root@k8s-master1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
tomcat NodePort 10.106.85.230 <none> 8080:30080/TCP 21s
Browser access test: open http://<any-node-IP>:30080 and the Tomcat welcome page should load.
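The same check can be done from the command line; any node IP should answer on port 30080 once the pod is healthy, for example:
[root@k8s-master1 work]# curl -I http://192.168.40.181:30080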
2.8 Test the CoreDNS Service
# busybox must be pinned to the 1.28 image; with the latest busybox image, nslookup cannot resolve the DNS name and IP correctly
[root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
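Still inside the busybox pod, you can also resolve the tomcat Service created in section 2.7; it should return that Service's ClusterIP (10.106.85.230 in the output above):
/ # nslookup tomcat.default.svc.cluster.local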
3. Deploying the Dashboard
3.1 Install the Dashboard
1) Apply the manifest and check the pods
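kubernetes-dashboard.yaml is assumed here to be the upstream "recommended" deployment manifest from the kubernetes/dashboard project, uploaded to k8s-master1 in advance. If the node has Internet access it can be fetched directly, for example (pick a v2.x release compatible with Kubernetes 1.20, such as v2.1.0):
[root@k8s-master1 ~]# wget -O kubernetes-dashboard.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml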
[root@k8s-master1 ~]# kubectl apply -f kubernetes-dashboard.yaml
[root@k8s-master1 ~]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-7445d59dfd-rks7c 1/1 Running 0 115s
kubernetes-dashboard-54f5b6dc4b-mnnd2 1/1 Running 0 115s
# Check the dashboard front-end Service
[root@k8s-master1 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.111.106.98 <none> 8000/TCP 3m2s
kubernetes-dashboard ClusterIP 10.98.164.1 <none> 443/TCP 3m2s
2) Change the Service type to NodePort
# Change type: ClusterIP to type: NodePort, then save and exit
[root@k8s-master1 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
[root@k8s-master1 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.111.106.98 <none> 8000/TCP 6m1s
kubernetes-dashboard NodePort 10.98.164.1 <none> 443:30379/TCP 6m1s
3) Access from a browser
The Service type is now NodePort, so the dashboard can be reached on port 30379 of any node IP (the example below uses the master's IP). In a browser (Firefox), open:
https://192.168.40.180:30379
3.2 Access the Dashboard with a Token
# 1. Create an admin binding: grant the kubernetes-dashboard ServiceAccount cluster-admin so it can view and manage resources in every namespace
[root@k8s-master1 ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
# 2. List the secrets in the kubernetes-dashboard namespace
[root@k8s-master1 ~]# kubectl get secret -n kubernetes-dashboard
NAME TYPE DATA AGE
default-token-fppc9 kubernetes.io/service-account-token 3 19m
kubernetes-dashboard-certs Opaque 0 19m
kubernetes-dashboard-csrf Opaque 1 19m
kubernetes-dashboard-key-holder Opaque 2 19m
kubernetes-dashboard-token-bzx6g kubernetes.io/service-account-token 3 19m
# 3. Describe the token-bearing secret kubernetes-dashboard-token-bzx6g
[root@k8s-master1 ~]# kubectl describe secret kubernetes-dashboard-token-bzx6g -n kubernetes-dashboard
...
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImRTYUlhaUZXeFBzeHpjcmNXS1p6WENybDRsVXkyVGN3ZUJWRjZnNWVNYjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ieng2ZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI3MjFkYzkxLWI0M2YtNDc5YS1hMjJmLTZlYjhjNTE0ZTllNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.ZndeWZWYY7c-vFir6uVaTxR-EZ5MIZByGgLIoBAtxYQebhYVtCxNIPhnrNBLcmcmdbfmuqWEU9M5T-zpSEX5aAPKhuJNo-zpKW9N-COhuLXPDjcesct5XmBFeL6Duc322TRm-4aQto6ZUJ4dkT-KRwhS1EzGZ5VZoz_m4pi-f_dFWNLEnrd25qPswAdIHVkAPe28WtJkLIjfoGmTd0hGfu9_uz0rOzQn5MoV-hRPtvVd4ziIeC9ETwKKVp14RlakV3r2Y0ZDxOqlNhI4PAlwbBOoqbpa3WHLTuuh0Fm0jAdZdKVGhS1T6N1kcC0_BTWsq0caK21FVyyjGka60YvKIg
# 4. Log in to the dashboard with this token
3.3 Access the Dashboard with a kubeconfig File
# 1. Create the cluster entry
[root@k8s-master1 ~]# cd /etc/kubernetes/pki
[root@k8s-master1 pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://192.168.40.180:6443" --embed-certs=true --kubeconfig=/root/dashboard-admin.conf
# 2. Create the credentials, using the token from the kubernetes-dashboard-token-bzx6g secret above
[root@k8s-master1 pki]# DEF_NS_ADMIN_TOKEN=$(kubectl get secret kubernetes-dashboard-token-bzx6g -n kubernetes-dashboard -o jsonpath={.data.token}|base64 -d)
[root@k8s-master1 pki]# echo $DEF_NS_ADMIN_TOKEN
eyJhbGciOiJSUzI1NiIsImtpZCI6ImRTYUlhaUZXeFBzeHpjcmNXS1p6WENybDRsVXkyVGN3ZUJWRjZnNWVNYjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ieng2ZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI3MjFkYzkxLWI0M2YtNDc5YS1hMjJmLTZlYjhjNTE0ZTllNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.ZndeWZWYY7c-vFir6uVaTxR-EZ5MIZByGgLIoBAtxYQebhYVtCxNIPhnrNBLcmcmdbfmuqWEU9M5T-zpSEX5aAPKhuJNo-zpKW9N-COhuLXPDjcesct5XmBFeL6Duc322TRm-4aQto6ZUJ4dkT-KRwhS1EzGZ5VZoz_m4pi-f_dFWNLEnrd25qPswAdIHVkAPe28WtJkLIjfoGmTd0hGfu9_uz0rOzQn5MoV-hRPtvVd4ziIeC9ETwKKVp14RlakV3r2Y0ZDxOqlNhI4PAlwbBOoqbpa3WHLTuuh0Fm0jAdZdKVGhS1T6N1kcC0_BTWsq0caK21FVyyjGka60YvKIg
[root@k8s-master1 pki]# kubectl config set-credentials dashboard-admin --token=$DEF_NS_ADMIN_TOKEN --kubeconfig=/root/dashboard-admin.conf
# 3. Create the context
[root@k8s-master1 pki]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf
# 4. Switch current-context to dashboard-admin@kubernetes
[root@k8s-master1 pki]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf
# 5. Inspect the generated dashboard-admin.conf
[root@k8s-master1 pki]# cat /root/dashboard-admin.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EY3dPREV5TWpJeE5Gb1hEVE14TURjd05qRXlNakl4TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmU3CnRYdTBaRk1RUnZRcGtUVExxN1dEdnFBeDIwblkxSUR1WHlGWmR6VElsREtYWGFpTjdUNFp4dnVKdWRETFJjdk8KcFFHTjlQR0d5bTM2b05GRWo2RDVhek9xWGJJTHp4N2IrODRQV1VnTFhSd1IvYzRReG8vYzNYNmZLWFJucnVaeApVN1BJMDViVzlzeUVrVk1kM3ZpT25iQnVYTDBpNDViRGlzVHlZNUdRZGZTK3c3eGVxTWVoclV6N04vMUtlV2JLCkF0ZnZkUXJWUTlDT3hFVGcwRWRjbUt5R0RDc0JrVUhLY3BQZ1RidXVuUGZ2bm1yWWRsNWtlZmFHMWkzR1ZPY1oKdWhVQVpCck4xaWNocUsrV0Q2a3NTLzQwLzg3Nlg3WlQyeFNPbVJxTUQyVGlHQzJvWlRhclRQOE9VUVFqc0ZkdwplNUNlaEFXKzZHS3BCOTd6KzJFQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCN2JxeGV1WFdYUVg5UDhJenRCbS9sWVRHS3hNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDUGN1dUdLa1QwemdRa0x4S2JrTU9pbGQ2akYvNENyYklEaG5SeDk4dEkya1EvNzVXbQpaNURoeldnKytrcUdoQUVSZXFoMVd4MXNHV0RTaG41elJScmNNT1BOOVBmdVpJcmVUUUllL0tuZDdTMXZyNUxGCk80NlE5QXEwVlZYSU5kMEdZcmJPNURpaTdBc2Ewc0FwSk16RzZoRHZPYlFCRGh3RURxa3VkM2tlZ0xuNUZXTUwKdUZoU2Voa1F4VWxUOVJoRkhzemZxVnBsTGVpN05uT1dxR0xIOHhTSFdacTV3aFI1a1laYUpJblM0L1gwZVdnKwpGNXM0WWpVWWZHOHRNQTZLNTR6eFVJSnM0Nnd2ek9yOEVwUWlKLzh1SnhnM052aFpBZG1oTVMvRTNLTmF5TCtoClU0a2NNcUlxWUYyYzBqY1BJK0wxeHU4WkVHMCtXaWY0N2tYSAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.40.180:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: dashboard-admin
name: dashboard-admin@kubernetes
current-context: dashboard-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dashboard-admin
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImRTYUlhaUZXeFBzeHpjcmNXS1p6WENybDRsVXkyVGN3ZUJWRjZnNWVNYjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ieng2ZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI3MjFkYzkxLWI0M2YtNDc5YS1hMjJmLTZlYjhjNTE0ZTllNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.ZndeWZWYY7c-vFir6uVaTxR-EZ5MIZByGgLIoBAtxYQebhYVtCxNIPhnrNBLcmcmdbfmuqWEU9M5T-zpSEX5aAPKhuJNo-zpKW9N-COhuLXPDjcesct5XmBFeL6Duc322TRm-4aQto6ZUJ4dkT-KRwhS1EzGZ5VZoz_m4pi-f_dFWNLEnrd25qPswAdIHVkAPe28WtJkLIjfoGmTd0hGfu9_uz0rOzQn5MoV-hRPtvVd4ziIeC9ETwKKVp14RlakV3r2Y0ZDxOqlNhI4PAlwbBOoqbpa3WHLTuuh0Fm0jAdZdKVGhS1T6N1kcC0_BTWsq0caK21FVyyjGka60YvKIg
# 6. Copy dashboard-admin.conf to your workstation; on the dashboard login page choose Kubeconfig authentication and import dashboard-admin.conf to sign in
3.4 Create a Workload from the Dashboard
1) Click the "+" button in the upper-right corner (screenshot omitted)
2) Fill in the deployment form that appears
3) In the left-hand menu of the dashboard, select Services
4) The nginx Service just created is mapped to node port 30094; open 192.168.40.180:30094 in a browser
4. Deploying metrics-server
metrics-server is a cluster-wide resource metrics aggregator. It only exposes metrics and does not store them; its focus is implementing the resource metrics API (CPU, memory, file descriptors, request latency, and so on), and the data it collects is consumed inside the cluster by tools such as kubectl top, the HPA, and the scheduler.
4.1 Install metrics-server
1) Modify the apiserver configuration in /etc/kubernetes/manifests
Note: this applies from k8s 1.17 on (it can be skipped on 1.16). The flag enables aggregator routing; API aggregation lets the Kubernetes API be extended without modifying the core Kubernetes code.
[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-aggregator-routing=true   # line to add
2) Reload the apiserver configuration
[root@k8s-master1 ~]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6949477b58-9t9k8 1/1 Running 0 91m
calico-node-66b47 1/1 Running 0 91m
calico-node-6svrr 1/1 Running 0 91m
calico-node-zgnkl 1/1 Running 0 91m
coredns-7f89b7bc75-4jvmv 1/1 Running 0 119m
coredns-7f89b7bc75-zr5mf 1/1 Running 0 119m
etcd-k8s-master1 1/1 Running 0 119m
kube-apiserver 0/1 CrashLoopBackOff 1 24s    # this stray pod gets deleted below
kube-apiserver-k8s-master1 1/1 Running 0 24s
kube-controller-manager-k8s-master1 1/1 Running 1 119m
kube-proxy-8fzc4 1/1 Running 0 106m
kube-proxy-n2v4j 1/1 Running 0 119m
kube-proxy-r9ccp 1/1 Running 0 108m
kube-scheduler-k8s-master1 1/1 Running 1 119m
# Delete the pod stuck in CrashLoopBackOff (the kubectl apply above created an extra Pod object named kube-apiserver; the real apiserver is the static pod kube-apiserver-k8s-master1, which the kubelet already restarted when the manifest changed)
[root@k8s-master1 ~]# kubectl delete pods kube-apiserver -n kube-system
3) Deploy metrics-server
[root@k8s-master1 ~]# cat metrics.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.6
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.6
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.6
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.4
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
        - /pod_nanny
        - --config-dir=/etc/config
        - --cpu=300m
        - --extra-cpu=20m
        - --memory=200Mi
        - --extra-memory=10Mi
        - --threshold=5
        - --deployment=metrics-server
        - --container=metrics-server
        - --poll-period=300000
        - --estimator=exponential
        - --minClusterSize=2
      volumes:
      - name: metrics-server-config-volume
        configMap:
          name: metrics-server-config
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
[root@k8s-master1 ~]# kubectl apply -f metrics.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system | grep metrics
metrics-server-6595f875d6-dx8w6 2/2 Running 0 8s
4.2 The kubectl top Command
[root@k8s-master1 ~]# kubectl top pods -n kube-system
NAME CPU(cores) MEMORY(bytes)
calico-kube-controllers-6949477b58-9t9k8 4m 26Mi
calico-node-66b47 74m 82Mi
calico-node-6svrr 77m 98Mi
calico-node-zgnkl 83m 97Mi
coredns-7f89b7bc75-4jvmv 6m 50Mi
coredns-7f89b7bc75-zr5mf 7m 46Mi
etcd-k8s-master1 35m 54Mi
kube-apiserver-k8s-master1 118m 390Mi
kube-controller-manager-k8s-master1 37m 50Mi
kube-proxy-8fzc4 1m 14Mi
kube-proxy-n2v4j 1m 23Mi
kube-proxy-r9ccp 1m 15Mi
kube-scheduler-k8s-master1 7m 20Mi
metrics-server-6595f875d6-dx8w6 2m 16Mi
[root@k8s-master1 ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master1 417m 20% 1282Mi 68%
k8s-node1 233m 5% 1612Mi 42%
k8s-node2 262m 6% 1575Mi 41%
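kubectl top reads from the same metrics.k8s.io API that metrics-server serves; the aggregated API can also be queried directly, for example:
[root@k8s-master1 ~]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes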
5. Other Issues
5.1 Make the scheduler and controller-manager Ports Listen on the Host IP
1) The problem
[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
By default since 1.19, ports 10251 and 10252 are bound to 127.0.0.1, so Prometheus cannot scrape these components; to make monitoring work, the ports can be bound to the host IP.
2) Modify the kube-scheduler configuration
[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=192.168.40.180
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.6
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.40.180
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.40.180
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
The changes made above:
1) Change --bind-address=127.0.0.1 to --bind-address=192.168.40.180
2) Under each httpGet: block, change host from 127.0.0.1 to 192.168.40.180
3) Delete the --port=0 line
# Note: 192.168.40.180 is the IP of the control node k8s-master1
3) Modify the kube-controller-manager configuration
[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=192.168.40.180
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.6
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.40.180
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.40.180
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
The changes made above:
1) Change --bind-address=127.0.0.1 to --bind-address=192.168.40.180
2) Under each httpGet: block, change host from 127.0.0.1 to 192.168.40.180
3) Delete the --port=0 line
# Note: 192.168.40.180 is the IP of the control node k8s-master1
4) Restart the kubelet
[root@k8s-master1 ~]# systemctl restart kubelet
[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@k8s-master1 ~]# ss -antulp | grep :10251
tcp LISTEN 0 128 :::10251 :::* users:(("kube-scheduler",pid=122787,fd=7))
[root@k8s-master1 ~]# ss -antulp | grep :10252
tcp LISTEN 0 128 :::10252 :::* users:(("kube-controller",pid=125280,fd=7))