Installing and Deploying Kubernetes


Preface: a brief overview of a Kubernetes (k8s) cluster


A k8s cluster is made up of master and node (worker) machines.

 

The master node mainly runs the kube-apiserver, kube-scheduler, and kube-controller-manager components, together with etcd and a pod network (such as flannel).

1. Components on the master node:


API Server (kube-apiserver): handles interaction with users
The API Server is the front end of k8s: client tools and the other k8s components manage every kind of cluster resource through it (it receives and serves all management requests).
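As a small illustration (runnable only once the cluster built in the main body below is up), every kubectl call is just an HTTP request to the API Server; a sketch:
kubectl proxy --port=8001 &                                                # open a local, authenticated proxy to the API Server
curl -s http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods | head   # list pods through the raw REST API
kubectl get pods -n kube-system -v=8                                       # same query; -v=8 prints the underlying HTTP calls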

 

Scheduler (kube-scheduler): handles resource scheduling
The scheduler decides which node each pod runs on. When scheduling, it takes into account the cluster topology, the current load on each node, and the application's requirements for high availability, performance, and data affinity.
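Scheduling decisions can also be constrained explicitly; a minimal sketch (node name taken from the environment planned below) that pins a pod to k8snode1 with a nodeSelector:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-node1
spec:
  nodeSelector:
    kubernetes.io/hostname: k8snode1      # the scheduler only considers nodes carrying this label
  containers:
  - name: nginx
    image: nginx
EOF
kubectl get pod nginx-on-node1 -o wide    # the NODE column should show k8snode1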

 

Controller Manager (kube-controller-manager): maintains the cluster state, handling things such as failure detection, automatic scaling, and rolling updates.

The Controller Manager is made up of many controllers, including the replication controller, endpoints controller, namespace controller, serviceaccounts controller, and so on.
Each controller manages a different kind of resource; for example, the deployment, statefulset, and daemonset controllers manage the lifecycles of Deployments, StatefulSets, and DaemonSets, while the namespace controller manages Namespace resources.
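A quick way to watch a controller at work, once the cluster below is running (a sketch using the deployment/replicaset controllers and the app=web label that kubectl create deployment sets by default):
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3       # the controllers create the extra pods to match the desired count
kubectl delete pod -l app=web --wait=false      # delete the pods by label ...
kubectl get pods -l app=web                     # ... and watch the controller recreate them automatically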

 

etcd: stores the cluster's configuration and the state of every resource (think of it as the cluster database)

All node, service, and configuration data in k8s is stored in etcd; when data changes, etcd quickly notifies the relevant k8s components.
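On a kubeadm cluster like the one built below, the stored keys can be peeked at with etcdctl inside the etcd static pod. This is only a sketch: the pod name etcd-k8smaster and the certificate paths are assumed from the kubeadm defaults shown later in this article.
kubectl -n kube-system exec etcd-k8smaster -- sh -c \
  'ETCDCTL_API=3 etcdctl \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt \
     --key=/etc/kubernetes/pki/etcd/server.key \
     get /registry --prefix --keys-only | head'   # list the first few keys k8s keeps in etcd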

 

Pod network (for example flannel): provides the networking that lets pods communicate with each other.
A k8s cluster must have a pod network deployed; flannel is one of the available options.

 

2. Components on the node (worker) machines:

kubelet is the agent on each node. Once the scheduler has decided to run a pod on a particular node, it sends the pod's configuration to that node's kubelet, which creates and runs the containers accordingly and reports their status back to the master.
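On any node the kubelet can be inspected like any other systemd service; on the master it also runs the control-plane components as static pods from /etc/kubernetes/manifests (a sketch):
systemctl status kubelet           # kubelet itself is a systemd service, not a pod
journalctl -u kubelet -f           # follow its logs when a node misbehaves
ls /etc/kubernetes/manifests/      # static pod manifests the kubelet runs directly (populated on the master by kubeadm init)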

 

kube-proxy: a Service logically represents a group of backend Pods, and clients reach those Pods through the Service. kube-proxy is what makes this work, both for pod-to-service traffic inside the cluster and for external access through a NodePort.
Every node runs the kube-proxy service; it forwards the TCP/UDP traffic addressed to a Service to the backend containers, and when there are multiple replicas it load-balances across them.
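To see kube-proxy at work, expose the web deployment from the controller-manager sketch above as a Service and look for the forwarding rules it programs (a sketch; assumes kube-proxy is running in its default iptables mode):
kubectl expose deployment web --port=80 --name=web-svc
kubectl get svc web-svc                    # note the ClusterIP that now fronts the pods
iptables-save | grep 'web-svc'             # kube-proxy's NAT rules for this service carry its name in the rule comments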

 

Pod network
Already covered above, so it is not repeated here.

(Figure: overall k8s runtime architecture)

 

 


Main body:

I. Preparation before deploying the k8s cluster:
1. Plan the environment
Hostname / IP
k8smaster 192.168.217.61
k8snode1 192.168.217.63
k8snode2 192.168.217.64

 

2. Install Docker on every node before deploying k8s
cd /etc/yum.repos.d/
wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce -y
mkdir /etc/docker
vim /etc/docker/daemon.json

{
"registry-mirrors": ["https://68rmyzg7.mirror.aliyuncs.com"]
}
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
docker version
docker info

 

3. Configure the yum repository for k8s

vim /etc/yum.repos.d/k8s.repo

[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

 

4. Install the kubelet, kubeadm, and kubectl tools
yum install kubelet kubeadm kubectl -y
Because the cluster has not been set up yet, do not start kubelet now; only enable it to start at boot:
systemctl enable kubelet
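As an optional check, confirm that the installed version matches the one pinned later in kubeadm init (v1.19.4 in this article):
kubeadm version -o short
kubectl version --client --short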

 

5. Base configuration on every cluster node
(1) Edit the hosts file

vim /etc/hosts
127.0.0.1 localhost
192.168.217.61 k8smaster
192.168.217.63 k8snode1
192.168.217.64 k8snode2

(2) Enable the bridge netfilter parameters on every node
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Note: the kernel parameter /proc/sys/net/bridge/bridge-nf-call-iptables only exists once Docker has been installed and its service started:
systemctl restart docker
cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
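If the parameter still does not show up, or you prefer to apply the settings without relying on a Docker restart, you can (as a sketch) load the bridge module and reload the sysctl files explicitly:
modprobe br_netfilter        # make sure the bridge netfilter module is loaded
sysctl --system              # re-read every file under /etc/sysctl.d/, including k8s.conf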

(3) Disable swap on every node
swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0

Edit /etc/fstab and comment out the swap entry so the change survives a reboot:
vim /etc/fstab
#UUID=100389a1-8f1b-4b19-baea-523a95849ee1 swap swap defaults 0 0
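A non-interactive alternative for the same edit (a sketch; check /etc/fstab afterwards to make sure only the swap line was touched):
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab      # comment out any line whose fields contain " swap "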

(4) Turn off the firewall and SELinux
vim /etc/selinux/config
SELINUX=disabled
setenforce 0

iptables -F
systemctl stop firewalld
systemctl disable firewalld

 

Summary: the preparation for deploying the k8s cluster is now complete.


Main body:

II. Initialize the cluster on the master node

1. Check the images required to install k8s

kubeadm config print init-defaults

The command above prints the default kubeadm init configuration, mainly the image repository and the Kubernetes version. Next, list the images that will be pulled:

kubeadm config images list --image-repository registry.aliyuncs.com/google_containers

By default the images are pulled from k8s.gcr.io, which is not reachable from mainland China, so the Alibaba Cloud mirror is specified instead; this also confirms that every required image is available there.

Example:

[root@k8smaster ~]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
W1203 10:42:44.376170 5104 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.4
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.4
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.4
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.4
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0


2. Initialize the k8s cluster

The command is:
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.19.4 --apiserver-advertise-address=192.168.217.61 --pod-network-cidr=10.244.0.0/16

Explanation of the flags:

--image-repository: which registry to pull images from (available since 1.13). The default is k8s.gcr.io; here the Alibaba Cloud mirror is used instead.

--kubernetes-version=: the Kubernetes version to install. By default kubeadm looks up the latest version online; pinning it to a fixed version (v1.19.4) skips that network request.

--apiserver-advertise-address=: the address the master uses to talk to the rest of the cluster. If the master has several network interfaces it is best to set this explicitly; otherwise kubeadm picks the interface with the default gateway. Here it is set to the master's own IP.

--pod-network-cidr=: the address range for the pod network. Kubernetes supports several network plugins; it is set to 10.244.0.0/16 here because the flannel plugin will be used, and that is the CIDR flannel expects by default (see the check below).
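As a sanity check, once kube-flannel.yml has been downloaded (step 3 below), you can confirm the CIDR it expects; the value shown in the comment is the upstream default and is an assumption, so verify it in your own copy:
grep -A 6 'net-conf.json' kube-flannel.yml
# the expected output contains:  "Network": "10.244.0.0/16"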

 

Run the command and wait for the initialization to finish. On success it prints:

Your Kubernetes control-plane has initialized successfully!

along with instructions for the next steps. If the initialization fails, clean up with the commands below and then initialize again:

# kubeadm reset
# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# ifconfig cni0 down
# ip link delete cni0
# ifconfig flannel.1 down
# ip link delete flannel.1
# rm -rf /var/lib/cni/
# rm -rf /var/lib/etcd/*

In short: if an error occurs, run kubeadm reset first and then re-run the initialization.

Example:

[root@k8smaster ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.19.4 --pod-network-cidr=10.244.0.0/16
W1207 15:50:10.490192 2290 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.217.61]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.217.61 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.217.61 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.009147 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wvq4ve.nh8wtthnsr38fija
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.217.61:6443 --token wvq4ve.nh8wtthnsr38fija \
--discovery-token-ca-cert-hash sha256:de6b5ec135657b53f766bddf9e18b47e0cca43339fba8b021ad2b49b1ea7be13 
Note: save this join command; it is needed when adding nodes later, and the token is only valid for 24 hours:
kubeadm join 192.168.217.61:6443 --token wvq4ve.nh8wtthnsr38fija \
--discovery-token-ca-cert-hash sha256:de6b5ec135657b53f766bddf9e18b47e0cca43339fba8b021ad2b49b1ea7be13

 

Run the commands from the output as instructed:
[root@k8smaster ~]# mkdir -p $HOME/.kube
[root@k8smaster ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8smaster ~]# chown $(id -u):$(id -g) $HOME/.kube/config

For convenience, enable shell auto-completion for kubectl:
echo "source <(kubectl completion bash)" >> ~/.bashrc
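kubectl's bash completion relies on the bash-completion package being present; a hedged sketch to make it take effect:
yum install -y bash-completion
source ~/.bashrc        # or open a new shell to pick up the change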

Verify the master node:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster NotReady master 4m9s v1.19.4


3. Install the pod network
For the cluster to work, a pod network must be installed; otherwise pods cannot communicate with each other.

Install the flannel network on the master node:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
(Applying straight from the URL does not always work from a restricted network; you can also download the kube-flannel.yml file yourself and apply the local copy.)

Upload kube-flannel.yml to the master (for example with rz), then apply it:
kubectl apply -f kube-flannel.yml
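If the master can reach raw.githubusercontent.com, wget is an alternative to uploading with rz; either way you can then watch the flannel pods come up before expecting the nodes to turn Ready (a sketch; the app=flannel label is assumed from the upstream manifest):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
kubectl -n kube-system get pods -l app=flannel -o wide   # one flannel pod per node; wait until all are Running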

 

When this is done, restart kubelet on every node:
systemctl restart kubelet

Once the required images have finished downloading, the node status changes to Ready. Check with:
kubectl get nodes

Example:
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 64m v1.19.4
k8snode1 Ready <none> 18m v1.19.4
k8snode2 Ready <none> 13m v1.19.4

 

Verify the pod information:
kubectl get pods -n kube-system

Example:
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-bdkb7 1/1 Running 0 94m
coredns-6d56c8448f-z4nlq 1/1 Running 0 94m
etcd-k8smaster 1/1 Running 0 94m
kube-apiserver-k8smaster 1/1 Running 0 94m
kube-controller-manager-k8smaster 1/1 Running 0 73m
kube-flannel-ds-amd64-9rj9n 1/1 Running 0 32m
kube-flannel-ds-amd64-lzklk 1/1 Running 0 32m
kube-flannel-ds-amd64-mt5q4 1/1 Running 0 32m
kube-proxy-55wch 1/1 Running 0 43m
kube-proxy-k6dg7 1/1 Running 0 94m
kube-proxy-t98kx 1/1 Running 0 48m
kube-scheduler-k8smaster 1/1 Running 0 72m

 

If the flannel network has not been installed, the STATUS column stays NotReady:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster NotReady master 18m v1.19.4

 

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused 
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused 
etcd-0 Healthy {"health":"true"}

A small bug also needs to be handled here (present in versions after 1.18.5):
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
delete the line: - --port=0
vim /etc/kubernetes/manifests/kube-scheduler.yaml
delete the line: - --port=0
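An equivalent non-interactive edit (a sketch; back up the manifests first). The kubelet notices the changed static pod manifests and restarts both pods automatically:
sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml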

Check again:
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok 
controller-manager Healthy ok 
etcd-0 Healthy {"health":"true"}


4. Join the other nodes to the cluster:

Example:
[root@k8snode1 ~]# kubeadm join 192.168.217.61:6443 --token wvq4ve.nh8wtthnsr38fija \
> --discovery-token-ca-cert-hash sha256:de6b5ec135657b53f766bddf9e18b47e0cca43339fba8b021ad2b49b1ea7be13 
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

The error says /proc/sys/net/ipv4/ip_forward is not set to 1 (IP forwarding is disabled); fix it with:
echo "1" > /proc/sys/net/ipv4/ip_forward

 

Also note that by the time kubeadm join is run on a node, the original --token may have expired (tokens are valid for 24 hours). In that case, generate a new join command on the master (the --ttl 0 flag makes the new token permanent):
kubeadm token create --ttl 0 --print-join-command

W1207 16:37:39.128892 15664 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.217.61:6443 --token smfwoy.yix3lbaz436yltw5 --discovery-token-ca-cert-hash sha256:de6b5ec135657b53f766bddf9e18b47e0cca43339fba8b021ad2b49b1ea7be13
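Existing tokens and their expiry times can also be listed on the master at any time:
kubeadm token list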

 

Run the join command on the node again:
kubeadm join 192.168.217.61:6443 --token smfwoy.yix3lbaz436yltw5 --discovery-token-ca-cert-hash sha256:de6b5ec135657b53f766bddf9e18b47e0cca43339fba8b021ad2b49b1ea7be13

[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

After the nodes have joined, run kubectl get nodes on the master to check their status.

Example:
[root@k8smaster app]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 64m v1.19.4
k8snode1 Ready <none> 18m v1.19.4
k8snode2 Ready <none> 13m v1.19.4


[root@k8smaster app]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-bdkb7 1/1 Running 0 94m
coredns-6d56c8448f-z4nlq 1/1 Running 0 94m
etcd-k8smaster 1/1 Running 0 94m
kube-apiserver-k8smaster 1/1 Running 0 94m
kube-controller-manager-k8smaster 1/1 Running 0 73m
kube-flannel-ds-amd64-9rj9n 1/1 Running 0 32m
kube-flannel-ds-amd64-lzklk 1/1 Running 0 32m
kube-flannel-ds-amd64-mt5q4 1/1 Running 0 32m
kube-proxy-55wch 1/1 Running 0 43m
kube-proxy-k6dg7 1/1 Running 0 94m
kube-proxy-t98kx 1/1 Running 0 48m
kube-scheduler-k8smaster 1/1 Running 0 72m

 

[root@k8smaster ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-6d56c8448f-bdkb7 1/1 Running 0 18h 10.244.2.2 k8snode2 <none> <none>
coredns-6d56c8448f-z4nlq 1/1 Running 0 18h 10.244.2.3 k8snode2 <none> <none>
etcd-k8smaster 1/1 Running 0 18h 192.168.217.61 k8smaster <none> <none>
kube-apiserver-k8smaster 1/1 Running 0 18h 192.168.217.61 k8smaster <none> <none>
kube-controller-manager-k8smaster 1/1 Running 1 18h 192.168.217.61 k8smaster <none> <none>
kube-flannel-ds-amd64-9rj9n 1/1 Running 0 17h 192.168.217.63 k8snode1 <none> <none>
kube-flannel-ds-amd64-lzklk 1/1 Running 0 17h 192.168.217.64 k8snode2 <none> <none>
kube-flannel-ds-amd64-mt5q4 1/1 Running 0 17h 192.168.217.61 k8smaster <none> <none>
kube-proxy-55wch 1/1 Running 0 17h 192.168.217.64 k8snode2 <none> <none>
kube-proxy-k6dg7 1/1 Running 0 18h 192.168.217.61 k8smaster <none> <none>
kube-proxy-t98kx 1/1 Running 0 17h 192.168.217.63 k8snode1 <none> <none>
kube-scheduler-k8smaster 1/1 Running 1 18h 192.168.217.61 k8smaster <none> <none>


5. Remove a node
Step 1: on the master, drain the node (put it into maintenance mode); k8snode1 is used as the example:
[root@k8smaster ~]# kubectl drain k8snode1 --delete-local-data --force --ignore-daemonsets
node/k8snode1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-9rj9n, kube-system/kube-proxy-t98kx
node/k8snode1 drained

Step 2: delete the node:
[root@k8smaster ~]# kubectl delete node k8snode1
node "k8snode1" deleted

Step 3: check the cluster node status:
[root@k8smaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 19h v1.19.4
k8snode2 Ready <none> 18h v1.19.4
The k8snode1 node has been removed.


To add this node back into the cluster, run the following on the node being re-added:
Step 1: stop kubelet
[root@k8snode1 ~]# systemctl stop kubelet

Step 2: reset the node and remove the leftover files
[root@k8snode1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1208 10:54:38.764013 50664 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

[root@k8snode1 ~]# rm -rf /etc/cni/net.d

[root@k8snode1 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

Step 3: join the node again
[root@k8snode1 ~]# kubeadm join 192.168.217.61:6443 --token smfwoy.yix3lbaz436yltw5 --discovery-token-ca-cert-hash sha256:de6b5ec135657b53f766bddf9e18b47e0cca43339fba8b021ad2b49b1ea7be13
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


Step 4: check the cluster node status on the master:
[root@k8smaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 19h v1.19.4
k8snode1 Ready <none> 15s v1.19.4
k8snode2 Ready <none> 18h v1.19.4


------------------------------
Summary:
The difference between the kubelet, kubeadm, and kubectl commands:
kubelet runs on every node and is responsible for starting pods and containers; it runs as a system service and is managed by systemd.
systemctl start kubelet
systemctl enable kubelet

kubeadm is the tool for building and managing the cluster itself: initializing, joining nodes to build the cluster, listing the required images, resetting, and so on.
kubeadm init ...
kubeadm config ...
kubeadm join ...
kubeadm token ...
kubeadm reset ...

kubectl is the command-line client: with it you deploy and manage applications, inspect all kinds of resources, and create, delete, and update components.
kubectl get nodes
kubectl get pod --all-namespaces # list all pods
kubectl apply ...
kubectl drain ...
kubectl delete ...

