1. Kubernetes Overview
1.1 What is Kubernetes
- Kubernetes, abbreviated K8S, is a container cluster management system open-sourced by Google in 2014.
- K8S is used to deploy, scale, and manage containerized applications.
- K8S provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and a series of related features.
- The goal of Kubernetes is to make deploying containerized applications simple and efficient.
1.2 Kubernetes Features
- Self-healing
  Restarts failed containers when a node fails, and replaces and redeploys containers to keep the expected number of replicas; kills containers that fail their health checks and withholds client requests until they are ready, so online services are never interrupted.
- Elastic scaling
  Scales application instances up and down quickly via command, UI, or automatically based on CPU usage, keeping the application available during traffic peaks and reclaiming resources during off-peak periods to run the service at minimal cost.
- Automated rollout and rollback
  K8S updates applications with a rolling-update strategy, updating one Pod at a time rather than deleting all Pods at once; if something goes wrong during the update, the change is rolled back so the upgrade does not affect the business.
- Service discovery and load balancing
  K8S gives a set of containers a single access point (an internal IP address plus a DNS name) and load-balances across the associated containers, so users never have to deal with container IPs.
- Secret and configuration management
  Manages secrets and application configuration without baking sensitive data into images, improving the security of sensitive data; frequently used configuration can also be stored in K8S for applications to consume.
- Storage orchestration
  Mounts external storage systems as part of the cluster's resources, whether local storage, public cloud (such as AWS), or network storage (such as NFS, GlusterFS, Ceph), greatly increasing flexibility in how storage is used.
- Batch processing
  Provides one-off and scheduled tasks, covering batch data processing and analysis scenarios.
1.3 Kubernetes Cluster Architecture and Components
1.4 Introduction to the Kubernetes Cluster Components
1.4.1 Master components
- kube-apiserver
  The Kubernetes API: the unified entry point of the cluster and the coordinator between all other components. It exposes its interface as a RESTful API; every create/update/delete/query and watch operation on object resources goes through the API server and is then persisted to etcd.
- kube-controller-manager
  Handles the routine background tasks of the cluster. Each resource has its own controller, and the controller manager is responsible for managing all of these controllers.
- kube-scheduler
  Picks a Node for each newly created Pod according to the scheduling algorithm. Pods can be placed anywhere: on the same node or spread across different nodes.
- etcd
  A distributed key-value store that saves the cluster state, such as Pod and Service object data.
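Once a cluster is up (for example the kubeadm cluster built in section 2), these control-plane components can be inspected from the master; this is just a quick verification sketch I am adding, and the exact pod names will differ per cluster:
# control-plane health summary (still available on v1.15)
kubectl get componentstatuses
# the master components run as static pods in the kube-system namespace
kubectl get pods -n kube-system -o wide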
1.4.2 Node components
- kubelet
  The kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on that machine: creating containers, mounting volumes into Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.
- kube-proxy
  Implements the Pod network proxy on each Node, maintaining network rules and doing layer-4 load balancing.
- docker or rkt
  The container engine that actually runs the containers.
1.5 Kubernetes Core Concepts
- Pod
  - The smallest deployable unit
  - A group of containers
  - Containers in one Pod share a network namespace
  - Pods are ephemeral
- Controllers (higher-level objects that deploy and manage Pods)
  - ReplicaSet: keeps the expected number of Pod replicas running
  - Deployment: stateless application deployment
  - StatefulSet: stateful application deployment
  - DaemonSet: runs the same Pod on every Node
  - Job: one-off task
  - CronJob: scheduled task
- Service
  - Keeps Pods reachable behind a stable endpoint
  - Defines an access policy for a group of Pods
- Label: a tag attached to a resource, used to associate, query, and filter objects
- Namespaces: logically isolate objects from each other
- Annotations: arbitrary descriptive metadata attached to objects
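To tie these concepts together, here is a minimal sketch: a Deployment keeps two labelled Pod replicas running, and a Service selects them by Label to give them one stable access point. The name web-demo and the nginx image are illustrative assumptions, not part of the original text:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment            # controller for a stateless app
metadata:
  name: web-demo
  namespace: default        # Namespace: logical isolation
  labels:
    app: web-demo           # Label: used to associate and filter objects
spec:
  replicas: 2               # the controller maintains 2 Pod replicas
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service               # stable access point for the Pods
metadata:
  name: web-demo
spec:
  selector:
    app: web-demo           # the Service finds its Pods by Label
  ports:
  - port: 80
    targetPort: 80
EOF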
2. Quickly Deploying a K8S Cluster with kubeadm
2.1 The three deployment methods officially provided by Kubernetes
- minikube
  Minikube is a tool that quickly runs a single-node Kubernetes locally; it is only meant for trying out Kubernetes or for day-to-day development. Deployment guide: https://kubernetes.io/docs/setup/minikube/
- kubeadm
  Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster. Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
- Binary packages
  Recommended: download the release binaries from the official site and deploy each component by hand to assemble the Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases
2.2 Preparing the environment for kubeadm
2.2.1 Environment requirements
OS: CentOS 7.4+ (ideally between 7.3 and 7.5; avoid CentOS 8 or later)
Hardware: CPU >= 2 cores, memory >= 2 GB
2.2.2 Node roles
IP | Role | Installed software |
---|---|---|
192.168.73.138 | k8s-master | kube-apiserver kube-scheduler kube-controller-manager docker flannel kubelet |
192.168.73.139 | k8s-node01 | kubelet kube-proxy docker flannel |
192.168.73.140 | k8s-node02 | kubelet kube-proxy docker flannel |
2.2.3 Environment initialization
Become root first (enter your password when prompted); otherwise many later steps fail with errors that are hard to trace back to missing privileges:
sudo su
PS: run all of the following steps on all three nodes.
1. Disable the firewall and SELinux
$ systemctl stop firewalld && systemctl disable firewalld
$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0
2. Disable the swap partition
$ swapoff -a    # temporary
$ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # permanent
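A quick check that swap really is off and stays off after a reboot (a verification step I am adding, not part of the original list):
free -m                  # the Swap line should show 0 total
grep swap /etc/fstab     # the swap entry should now be commented out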
3. Set the hostname on 192.168.73.138, 192.168.73.139 and 192.168.73.140 respectively, and configure hosts
$ hostnamectl set-hostname k8s-master    # run on 192.168.73.138
$ hostnamectl set-hostname k8s-node01    # run on 192.168.73.139
$ hostnamectl set-hostname k8s-node02    # run on 192.168.73.140
4. Add the following entries on every host
$ cat >> /etc/hosts << EOF
192.168.73.138 k8s-master
192.168.73.139 k8s-node01
192.168.73.140 k8s-node02
EOF
5. Kernel tuning: pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system    # apply
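Optionally confirm the parameters took effect; the bridge keys only exist once the br_netfilter module is loaded, so loading it explicitly is an extra step I am adding here:
modprobe br_netfilter                        # make sure the bridge netfilter module is loaded
sysctl net.bridge.bridge-nf-call-iptables    # expect: net.bridge.bridge-nf-call-iptables = 1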
6. Set the system time zone and synchronize with a time server
yum install ntpdate -y
ntpdate time.windows.com
The first pitfall starts here: CentOS 8 cannot install ntpdate, so use chrony instead:
# install chrony
yum install -y chrony
# start it, then enable it at boot
systemctl start chronyd
systemctl enable chronyd
# edit the configuration file
vim /etc/chrony.conf
Contents of /etc/chrony.conf:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#pool 2.centos.pool.ntp.org iburst    (comment this line out and add the two lines below)
server ntp.aliyun.com iburst
server cn.ntp.org.cn iburst
Reload the configuration and verify:
[root@kiccleaf home]# systemctl restart chronyd.service
[root@kiccleaf home]# chronyc sources -v
210 Number of sources = 2
......(legend explaining the source mode/state columns omitted)
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88                  2   6     7     2   -188us[+6871us] +/-   24ms
^- 61.177.189.190                3   6    17    19   +663us[ +663us] +/-   97ms
# check the time again
[root@kiccleaf home]# date
Sat Aug 29 16:26:08 CST 2020
2.2.4 Installing docker
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# install a specific docker version
$ yum -y install docker-ce-18.06.1.ce-3.el7
# enable and start docker
$ systemctl enable docker && systemctl start docker
# check the docker version
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
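The node-join step later prints a warning about the Docker cgroup driver; it is harmless in this setup, but it can be checked up front. This is a verification I am adding, not part of the original guide:
docker info 2>/dev/null | grep -i 'cgroup driver'    # kubeadm warns when this says cgroupfs instead of systemd
systemctl is-enabled docker                          # should print: enabled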
2.2.5 Adding the Kubernetes YUM repository
[root@k8s-master]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.2.6 Installing kubeadm, kubelet and kubectl
Run on all hosts. Because new versions are released frequently, pin the version here:
# install the pinned versions
[root@k8s-master]# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
# enable kubelet at boot
[root@k8s-master]# systemctl enable kubelet
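A quick way to confirm the pinned versions landed on every host (purely a verification step, not in the original):
kubeadm version -o short            # expect v1.15.0
kubelet --version                   # expect Kubernetes v1.15.0
kubectl version --client --short    # expect Client Version: v1.15.0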
2.3 Deploying the Kubernetes Master
Run this on the Master node only; change the apiserver address below to your own master's address.
[root@k8s-master ]# kubeadm init \
  --apiserver-advertise-address=192.168.73.138 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
The service CIDR must not overlap or conflict with the Pod CIDR or with the host network.
A good choice is a private range that neither the host network nor the Pod CIDR uses: for example, with a Pod CIDR of 192.168.0.0/16, the service CIDR could be 172.16.0.0/20 and the host network 10.0.0.0/8; any combination works as long as the three do not overlap.
In other words, copy the command above as-is; only the first IP needs to be changed to the master's primary address.
Because the default image registry k8s.gcr.io cannot be reached from mainland China, the Aliyun mirror registry is specified here.
Output:
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.4.34]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
......(omitted)
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.73.138:6443 --token 2nm5l9.jtp4zwnvce4yt4oj \
    --discovery-token-ca-cert-hash sha256:12f628a21e8d4a7262f57d4f21bc85f8802bb717d
The highlighted parts of this output are the important ones.
Following the first highlighted block, run these commands in order:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then save the highlighted token value and the CA certificate hash generated above; they will be needed in a moment. Note that a token is valid for 24 hours by default; once it expires it can no longer be used.
If more nodes need to join after that, proceed as follows.
Generate a new token:
[root@k8s-master ~]# kubeadm token create
0w3a92.ijgba9ia0e3scicg
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
0w3a92.ijgba9ia0e3scicg   23h   2019-09-08T22:02:40+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
t0ehj8.k4ef3gq0icr3etl0   22h   2019-09-08T20:58:34+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token
Get the sha256 hash of the CA certificate:
[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
ce07a7f5b259961884c55e3ff8784b1eda6f8b5931e6fa2ab0b30b6a4234c09a
For a brand-new node joining later, just run the join command with this new token and the CA hash.
For a node that was already part of the cluster and needs to rejoin, first work through the steps at the end of this article, then run the join command.
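Putting the two values together, the join command for a fresh node would look like this (the token and hash are the ones generated above); alternatively kubeadm can print a ready-made join command:
kubeadm join 192.168.73.138:6443 --token 0w3a92.ijgba9ia0e3scicg \
    --discovery-token-ca-cert-hash sha256:ce07a7f5b259961884c55e3ff8784b1eda6f8b5931e6fa2ab0b30b6a4234c09a
# or have kubeadm create a token and print the full join command in one go
kubeadm token create --print-join-command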
2.4 Joining the Kubernetes Nodes
Run on both Node machines.
Use kubeadm join to register each Node with the Master.
The kubeadm join command was already generated by kubeadm init above:
[root@k8s-node01 ~]# kubeadm join 192.168.73.138:6443 --token 2nm5l9.jtp4zwnvce4yt4oj \
    --discovery-token-ca-cert-hash sha256:12f628a21e8d4a7262f57d4f21bc85f8802bb717dd6f513bf9d33f254fea3e89
Output:
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
2.5 安裝網絡插件
只需要在Master 節點執行,下面是坑,不生效,后面會說另一個方法,先別學。修改
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
修改鏡像地址:(有可能默認不能拉取,確保能夠訪問到quay.io這個registery,否則修改如下內容)
[root@k8s-master ~]# vim kube-flannel.yml
進入編輯,把
106行,120行的內容,替換如下image,替換之后查看如下為正確
[root@k8s-master ~]# cat -n kube-flannel.yml|grep lizhenliang/flannel:v0.11.0-amd64 106 image: lizhenliang/flannel:v0.11.0-amd64 120 image: lizhenliang/flannel:v0.11.0-amd64 [root@k8s-master ~]# kubectl apply -f kube-flannel.yml [root@k8s-master ~]# ps -ef|grep flannel root 2032 2013 0 21:00 ? 00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
上面的幾點操作就是新的神坑出現,地址根本訪問不到,也就拿不到kube-flannel.yml,玩你打野,采用硬核辦法。
方法①:去https://github.com/flannel-io/flannel/releases手動下載flannel,選擇12版本的,flanneld-v0.12.0-amd64.docker,
拉到linux中,執行命令
docker load < flanneld-v0.12.0-amd64.docker
查看已安裝的鏡像,會發現已經有了 flanneld
docker images
再執行命令:
kubectl apply -f kube-flannel.yml
Method 2: create the kube-flannel.yml file yourself
with the following content:
apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: psp.flannel.unprivileged annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default spec: privileged: false volumes: - configMap - secret - emptyDir - hostPath allowedHostPaths: - pathPrefix: "/etc/cni/net.d" - pathPrefix: "/etc/kube-flannel" - pathPrefix: "/run/flannel" readOnlyRootFilesystem: false # Users and groups runAsUser: rule: RunAsAny supplementalGroups: rule: RunAsAny fsGroup: rule: RunAsAny # Privilege Escalation allowPrivilegeEscalation: false defaultAllowPrivilegeEscalation: false # Capabilities allowedCapabilities: ['NET_ADMIN'] defaultAddCapabilities: [] requiredDropCapabilities: [] # Host namespaces hostPID: false hostIPC: false hostNetwork: true hostPorts: - min: 0 max: 65535 # SELinux seLinux: # SELinux is unused in CaaSP rule: 'RunAsAny' --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: flannel rules: - apiGroups: ['extensions'] resources: ['podsecuritypolicies'] verbs: ['use'] resourceNames: ['psp.flannel.unprivileged'] - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - "" resources: - nodes verbs: - list - watch - apiGroups: - "" resources: - nodes/status verbs: - patch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: flannel roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: flannel subjects: - kind: ServiceAccount name: flannel namespace: kube-system --- apiVersion: v1 kind: ServiceAccount metadata: name: flannel namespace: kube-system --- kind: ConfigMap apiVersion: v1 metadata: name: kube-flannel-cfg namespace: kube-system labels: tier: node app: flannel data: cni-conf.json: | { "name": "cbr0", "cniVersion": "0.3.1", "plugins": [ { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } }, { "type": "portmap", "capabilities": { "portMappings": true } } ] } net-conf.json: | { "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } } --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-amd64 namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - amd64 hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.12.0-amd64 command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.12.0-amd64 command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: 
/run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-arm64 namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - arm64 hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.12.0-arm64 command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.12.0-arm64 command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-arm namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - arm hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.12.0-arm command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.12.0-arm command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-ppc64le namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 
nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - ppc64le hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.12.0-ppc64le command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.12.0-ppc64le command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-s390x namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - s390x hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.12.0-s390x command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.12.0-s390x command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg
Then run:
kubectl apply -f kube-flannel.yml
I got it working with this second method: copy, paste, apply, done.
Check the node status of the cluster. After the network plugin is installed you should see the output below; only continue once every node is Ready:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   37m     v1.15.0
k8s-node01   Ready    <none>   5m22s   v1.15.0
k8s-node02   Ready    <none>   5m18s   v1.15.0
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-h2ngj              1/1     Running   0          14m
coredns-bccdc95cf-m78lt              1/1     Running   0          14m
etcd-k8s-master                      1/1     Running   0          13m
kube-apiserver-k8s-master            1/1     Running   0          13m
kube-controller-manager-k8s-master   1/1     Running   0          13m
kube-flannel-ds-amd64-j774f          1/1     Running   0          9m48s
kube-flannel-ds-amd64-t8785          1/1     Running   0          9m48s
kube-flannel-ds-amd64-wgbtz          1/1     Running   0          9m48s
kube-proxy-ddzdx                     1/1     Running   0          14m
kube-proxy-nwhzt                     1/1     Running   0          14m
kube-proxy-p64rw                     1/1     Running   0          13m
kube-scheduler-k8s-master            1/1     Running   0          13m
Only when everything shows 1/1 can the later steps succeed. If flannel is unhealthy, check the network and redo the steps:
kubectl delete -f kube-flannel.yml
then wget the manifest again, fix the image address again, and
kubectl apply -f kube-flannel.yml
The pitfalls continue: if flannel is not sorted out,
kubectl get nodes
will show every node as NotReady, and
kubectl get pod -n kube-system
will show the two coredns pods,
coredns-bccdc95cf-h2ngj and coredns-bccdc95cf-m78lt,
stuck in Pending, so flannel has to be fixed first.
In fact there are several reasons why coredns can end up in an abnormal state (Pending, CrashLoopBackOff, or Error):
1. No network plugin is installed in the cluster (for example, if you check the node status right after initializing k8s, the coredns pods will likely be Pending because the cluster network does not exist yet; install the flannel or calico network plugin).
2. Conflicting network plugins (for example, using calico and flannel at the same time also leaves coredns Pending).
3. The pod's configuration is wrong (for example, some manifests need command: [ "/bin/bash", "-ce", "tail -f /dev/null" ] added so that the pod's process keeps running in the background).
Beyond that, a pod stuck in ImagePullBackOff usually means the image pull failed; when reading the pod's details with kubectl describe, pay attention to which node in the cluster failed to pull the image.
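To narrow down which of these causes applies, describing the pod and reading its logs is usually enough; the pod name below is the one from the earlier output, so substitute your own:
kubectl describe pod coredns-bccdc95cf-h2ngj -n kube-system    # the Events section names the failing node or image
kubectl logs coredns-bccdc95cf-h2ngj -n kube-system            # container logs, if the pod got far enough to start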
Solution:
# flush all iptables rules
iptables --flush
# flush the rules in the built-in NAT table
iptables -t nat --flush
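Flushing iptables also removes the rules docker and kube-proxy had created. Restarting the services afterwards so those rules are rebuilt is my own addition and an assumption, not something the original steps mention:
systemctl restart docker     # docker recreates its NAT rules on start
systemctl restart kubelet    # give the node components a clean start; kube-proxy re-syncs its rules on its own shortly after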
If the status is Init or ImagePullBackOff, just wait a while: it means the pod is still initializing or pulling its image. Once everything is Running, you are ready to go.

2.7 Testing the Kubernetes cluster
Create a pod in the Kubernetes cluster, expose its port, and verify that it is reachable:
# create an nginx deployment
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
# expose the port externally
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
# check whether nginx is running
[root@k8s-master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-wf5lm   1/1     Running   0          24s
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        39m
service/nginx        NodePort    10.1.224.251   <none>        80:31745/TCP   9s
Open it in a browser. If it is reachable through all three nodes, the cluster is fully up.
The URL is http://NodeIP:Port, in this example http://192.168.73.138:31745
Scale nginx to 3 replicas and confirm it succeeds:
kubectl scale deployment nginx --replicas=3
kubectl get pods
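The same check works from the command line; the port is the one shown in the service output above (31745 in this run, yours will differ):
curl -I http://192.168.73.138:31745    # expect an HTTP 200 from nginx
kubectl get pods -o wide               # after scaling, the 3 replicas should be spread across the nodes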
2.8 Deploying the Dashboard web UI for a visual view of Kubernetes resources
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@k8s-master ~]# vim kubernetes-dashboard.yaml
Changes to make:
109     spec:
110       containers:
111       - name: kubernetes-dashboard
112         image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # change this line
......
157     spec:
158       type: NodePort        # add this line
159       ports:
160         - port: 443
161           targetPort: 8443
162           nodePort: 30001   # add this line
163       selector:
164         k8s-app: kubernetes-dashboard
Press Esc to leave insert mode in vim, then :wq to save.
# pull the image with Docker first
docker pull lizhenliang/kubernetes-dashboard-amd64:v1.10.1
# apply the kubernetes-dashboard.yaml file
kubectl apply -f kubernetes-dashboard.yaml
# check the exposed port
kubectl get pods,svc -n kube-system
Access the Dashboard web UI.
URL: https://NodeIP:30001 (it must be https).
Create a service account and bind it to the built-in cluster-admin cluster role (run these in order):
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-d9jh2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 4aa1906e-17aa-4880-b848-8b3959483323
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJ...(omitted)...AJdQ
The deployment is now complete.
Fixing access from other browsers
[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# mkdir ui
[root@k8s-master pki]# cp apiserver.crt ui/
[root@k8s-master pki]# cp apiserver.key ui/
[root@k8s-master pki]# cd ui/
[root@k8s-master ui]# mv apiserver.crt dashboard.pem
[root@k8s-master ui]# mv apiserver.key dashboard-key.pem
[root@k8s-master ui]# kubectl delete secret kubernetes-dashboard-certs -n kube-system
[root@k8s-master ui]# kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
[root@k8s-master]# vim kubernetes-dashboard.yaml    # go back to the directory that holds the yaml and edit it
Add the two certificate lines under args of the dashboard container:
    - --tls-key-file=dashboard-key.pem
    - --tls-cert-file=dashboard.pem
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-zbn9f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 40259d83-3b4f-4acc-a4fb-43018de7fc19
Type:  kubernetes.io/service-account-token
Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...(long token omitted)
ca.crt:     1025 bytes
Pitfall 1:
If K8S configuration fails with an error at this point, delete the old etcd data directory with the command below:
rm -rf /var/lib/etcd
Pitfall 2: kubectl commands fail with an error
[root@k8s-master ~]# kubectl get pod
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The cause, judging from the error, is that the file /etc/kubernetes/admin.conf cannot be found.
Why that kubeconfig went missing is unclear, but no matter; deal with it as it comes.
Adapt to your own situation; here is how to set the environment variable on Linux:
# option 1: edit the profile
vim /etc/profile
and add a new environment variable at the bottom: export KUBECONFIG=/etc/kubernetes/admin.conf
# option 2: append to the file directly
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
# apply:
source /etc/profile
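After reloading the profile, a quick check confirms kubectl can reach the API server again:
kubectl get nodes
kubectl cluster-info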
Pitfall 3: k8s + Dashboard deployed successfully but the Dashboard stays unreachable
Run this command:
iptables -P FORWARD ACCEPT
After the first successful deployment, this is the procedure for bringing K8S back up when reconnecting to the Linux hosts from Xshell:
# become root
sudo su
On the master node:
# re-initialize k8s on the master
kubeadm init \
  --apiserver-advertise-address=172.16.8.96 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
It will fail with an error; run the following command and retry:
rm -rf /var/lib/etcd
Once initialization succeeds, run this sequence:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Press Enter to confirm overwriting the existing file.
Then run:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
# apply:
source /etc/profile
OK, the master is re-initialized; now just let the other nodes join again.
Then run kubectl apply -f kube-flannel.yml
and wait around five minutes until the flannel pods are Running as well; that part is done.
How a node rejoins the cluster:
# become root
sudo su
1. Delete the node
Run kubectl delete node node01 (this should be run on the master node; untested, but node01 should be the hostname that was set with set-hostname at the beginning).
2. Trying to join directly at this point fails with the following error:
[root@k8s-node02 pki]# kubeadm join 192.168.140.128:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:a3d9827be411208258aea7f3ee9aa396956c0a77c8b570503dd677aa3b6eb6d8
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.12. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
    [ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
    [ERROR Port-10250]: Port 10250 is in use
    [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Fix:
The errors show that the port is occupied and that the old config files and CA certificate are still present, so delete those files and certificates and kill the process holding the port. Back them up first if in doubt.
[root@k8s-node02 pki]# lsof -i:10250
COMMAND PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kubelet 694 root   30u  IPv6  26021      0t0  TCP *:10250 (LISTEN)
[root@k8s-node02 pki]# kill -9 694
[root@k8s-node02 pki]# cd /etc/kubernetes/
[root@k8s-node02 kubernetes]# ls
bootstrap-kubelet.conf  kubelet.conf  manifests  pki
[root@k8s-node02 kubernetes]# mv bootstrap-kubelet.conf bootstrap-kubelet.conf_bk
[root@k8s-node02 kubernetes]# mv kubelet.conf kubelet.conf_bk
[root@k8s-node02 kubernetes]# cd pki/
[root@k8s-node02 pki]# ls
ca.crt
[root@k8s-node02 pki]# rm -rf ca.crt
3. Joining again still fails, this time with a different error:
[root@k8s-node02 ~]# kubeadm join 192.168.140.128:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:a3d9827be411208258aea7f3ee9aa396956c0a77c8b570503dd677aa3b6eb6d8
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.12. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
Fix:
Run kubeadm reset to reset the node:
[root@k8s-node02 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0710 10:22:57.487306   31093 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
4. Finally run the join again; problem solved.
[root@k8s-node02 ~]# kubeadm join 192.168.140.128:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:a3d9827be411208258aea7f3ee9aa396956c0a77c8b570503dd677aa3b6eb6d8
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.12. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
5. Check on the master node: the join succeeded.
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   120m   v1.15.1
k8s-node01     Ready    <none>   100m   v1.15.1
k8s-node02     Ready    <none>   83m    v1.15.1