K8S Series - 1. Offline Deployment of a K8S Cluster
The new KubeSphere release uses the kubekey tool (kk) for one-command deployment of a K8S cluster.
Host Planning
Internal IP | VM Instance Name | Roles | Hostname | Spec | OS |
---|---|---|---|---|---|
192.168.56.108 | node01-master | etcd, master, worker, docker registry | node1 | CPU: 2 cores, mem: 2 GB, HDD: 100 GB | CentOS 7.6 |
192.168.56.109 | node01-worker | worker | node2 | CPU: 2 cores, mem: 2 GB, HDD: 8 GB | CentOS 7.6 |
192.168.56.110 | K8S-node02 | worker | node3 | CPU: 2 cores, mem: 2 GB, HDD: 8 GB | CentOS 7.6 |
Note: a VirtualBox Host-Only network on 192.168.56.0 is used as the subnet over which the three VM instances reach each other.
Pre-deployment Preparation
1. Switch the package repositories
Switch the CentOS YUM repositories to the Aliyun mirrors.
# Install wget
yum install wget -y
# Back up the original repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Fetch the Aliyun base repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Fetch the Aliyun EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Clean the yum cache and rebuild it
yum clean all && yum makecache
2. Time synchronization
Enable NTP on every node and confirm that time synchronization succeeds.
timedatectl
timedatectl set-ntp true
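Optionally, also set the same timezone on every node and confirm that synchronization actually took effect; Asia/Shanghai below is only an example value:
# Optional: use a consistent timezone on all three nodes (Asia/Shanghai is an example)
timedatectl set-timezone Asia/Shanghai
# Confirm that NTP is enabled and the clock is synchronized
timedatectl status | grep -i 'NTP'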
3. Install Docker

The KubeSphere offline package installs K8S v1.17.9 by default, and the matching Docker version is 19.03.
To avoid unnecessary version problems, install Docker on all three machines first. Here I install docker-ce-19.03.4 online from the Aliyun mirror.
# Install Docker CE
# Set up the repository
# Install the prerequisite packages
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
# Add the Docker repository (if it is slow, use the Aliyun mirror below instead).
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# Alternatively, add the Aliyun mirror of the Docker repository:
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker CE
yum install -y containerd.io-1.2.10 \
docker-ce-19.03.4 \
docker-ce-cli-19.03.4
# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
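Optional: kubeadm's preflight check later warns that Docker uses the cgroupfs cgroup driver and recommends systemd. The installation works fine with the default, but if you want to follow the recommendation, a minimal /etc/docker/daemon.json sketch is shown below; only apply it if the kubelet's cgroup driver is kept consistent with it.
# Optional: switch Docker's cgroup driver to systemd (kubeadm's recommendation).
# Leaving the default cgroupfs also works; kubeadm only prints a warning.
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
docker info | grep -i 'cgroup driver'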
At this point the three VM instances need the NAT network option on network adapter 1 enabled in VirtualBox; inside each VM this corresponds to the enp0s3 interface.
Deploying K8S
1. Create the cluster configuration file
[root@localhost ~]# tar xzf kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz
[root@localhost ~]# cd kubesphere-all-v3.0.0-offline-linux-amd64
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create config
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ll
total 55156
drwxr-xr-x. 5 root root 76 Sep 21 05:36 charts
-rw-r--r--. 1 root root 759 Sep 26 09:10 config-sample.yaml
drwxr-xr-x. 2 root root 116 Sep 21 06:01 dependencies
-rwxr-xr-x. 1 root root 56469720 Sep 21 01:54 kk
drwxr-xr-x. 6 root root 68 Sep 3 01:45 kubekey
drwxr-xr-x. 2 root root 4096 Sep 21 06:54 kubesphere-images-v3.0.0
2. Edit the configuration file
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# cat config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.56.108, internalAddress: 192.168.56.108, user: root, password: kkroot}
  - {name: node2, address: 192.168.56.109, internalAddress: 192.168.56.109, user: root, password: kkroot}
  - {name: node3, address: 192.168.56.110, internalAddress: 192.168.56.110, user: root, password: kkroot}
  roleGroups:
    etcd:
    - node1
    master:
    - node1
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: dockerhub.kubekey.local
  addons: []
Adjust the node1, node2 and node3 host entries (IP addresses, SSH user and password) to match your environment, and add privateRegistry: dockerhub.kubekey.local under the registry section. Before running kk it is also worth confirming SSH access to every host, as sketched below.
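A quick reachability sketch (it assumes root SSH login is permitted; you will be prompted for the password unless keys are set up):
# Verify that every node listed in config-sample.yaml is reachable over SSH
for ip in 192.168.56.108 192.168.56.109 192.168.56.110; do
  ssh -o ConnectTimeout=5 root@"$ip" hostname || echo "cannot reach $ip"
done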
3. Check dependencies
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk init os -f config-sample.yaml -s ./dependencies/
INFO[07:23:15 EDT] Init operating system
INFO[07:19:58 EDT] Start initializing node2 [192.168.56.109] node=192.168.56.109
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 192.168.56.109:/tmp Done
INFO[07:21:12 EDT] Complete initialization node2 [192.168.56.109] node=192.168.56.109
INFO[07:23:20 EDT] Start initializing node3 [192.168.56.110] node=192.168.56.110
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 192.168.56.110:/tmp Done
INFO[07:24:27 EDT] Complete initialization node3 [192.168.56.110] node=192.168.56.110
INFO[07:24:27 EDT] Init operating system successful.
4. Create the image registry
Use kk to create a self-signed local image registry by running the following command:
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk init os -f config-sample.yaml -s ./dependencies/ --add-images-repo
INFO[07:26:32 EDT] Init operating system
Local images repository created successfully. Address: dockerhub.kubekey.local
INFO[07:27:03 EDT] Init operating system successful.
If the command produces no output or seems to hang, load the bundled registry image manually, make sure Docker is running, and run the init command again:
[root@localhost kubesphere-images-v3.0.0]# docker load < registry.tar
3e207b409db3: Loading layer 5.879MB/5.879MB
f5b9430e0e42: Loading layer 817.2kB/817.2kB
239a096513b5: Loading layer 20.08MB/20.08MB
a5f27630cdd9: Loading layer 3.584kB/3.584kB
b3f465d7c4d1: Loading layer 2.048kB/2.048kB
Loaded image: registry:2
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# systemctl start docker
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk init os -f config-sample.yaml -s ./dependencies/ --add-images-repo
INFO[10:45:37 EDT] Init operating system
Local images repository created successfully. Address: dockerhub.kubekey.local
INFO[10:45:39 EDT] Init operating system successful.
Note: it is recommended to mount dedicated storage for the Docker image registry; the default location is /mnt/registry. For details, see the "add and mount a virtual disk" section of "Expanding a VirtualBox CentOS virtual disk file".
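A minimal sketch of preparing such a dedicated mount, assuming the extra virtual disk shows up as /dev/sdb (adjust the device name and filesystem to your environment):
# Format the dedicated disk and mount it where the local registry keeps its data
mkfs.xfs /dev/sdb
mkdir -p /mnt/registry
mount /dev/sdb /mnt/registry
echo '/dev/sdb /mnt/registry xfs defaults 0 0' >> /etc/fstab
df -h /mnt/registry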
5. Load and push the images
Use push-images.sh to import the images into the registry prepared above:
./push-images.sh dockerhub.kubekey.local
The script loads the required images, re-tags them, and pushes them to the private registry dockerhub.kubekey.local. The catalog can then be verified:
[root@localhost ~]# curl -XGET https://dockerhub.kubekey.local/v2/_catalog --cacert /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt
{"repositories":["calico/cni","calico/kube-controllers","calico/node","calico/pod2daemon-flexvol","coredns/coredns","csiplugin/csi-attacher","csiplugin/csi-neonsan","csiplugin/csi-neonsan-centos","csiplugin/csi-neonsan-ubuntu","csiplugin/csi-node-driver-registrar","csiplugin/csi-provisioner","csiplugin/csi-qingcloud","csiplugin/csi-resizer","csiplugin/csi-snapshotter","csiplugin/snapshot-controller","fluent/fluentd","istio/citadel","istio/galley","istio/kubectl","istio/mixer","istio/pilot","istio/proxyv2","istio/sidecar_injector","jaegertracing/jaeger-agent","jaegertracing/jaeger-collector","jaegertracing/jaeger-es-index-cleaner","jaegertracing/jaeger-operator","jaegertracing/jaeger-query","jenkins/jenkins","jenkins/jnlp-slave","jimmidyson/configmap-reload","joosthofman/wget","kubesphere/alert-adapter","kubesphere/alerting","kubesphere/alerting-dbinit","kubesphere/builder-base","kubesphere/builder-go","kubesphere/builder-maven","kubesphere/builder-nodejs","kubesphere/elasticsearch-oss","kubesphere/etcd","kubesphere/examples-bookinfo-details-v1","kubesphere/examples-bookinfo-productpage-v1","kubesphere/examples-bookinfo-ratings-v1","kubesphere/examples-bookinfo-reviews-v1","kubesphere/examples-bookinfo-reviews-v2","kubesphere/examples-bookinfo-reviews-v3","kubesphere/fluent-bit","kubesphere/fluentbit-operator","kubesphere/java-11-centos7","kubesphere/java-11-runtime","kubesphere/java-8-centos7","kubesphere/java-8-runtime","kubesphere/jenkins-uc","kubesphere/k8s-dns-node-cache","kubesphere/ks-apiserver","kubesphere/ks-console","kubesphere/ks-controller-manager","kubesphere/ks-devops","kubesphere/ks-installer","kubesphere/ks-upgrade","kubesphere/kube-apiserver","kubesphere/kube-auditing-operator","kubesphere/kube-auditing-webhook","kubesphere/kube-controller-manager","kubesphere/kube-events-exporter","kubesphere/kube-events-operator","kubesphere/kube-events-ruler","kubesphere/kube-proxy","kubesphere/kube-rbac-proxy","kubesphere/kube-scheduler","kubesphere/kube-state-metrics","kubesphere/kubectl","kubesphere/linux-utils","kubesphere/log-sidecar-injector","kubesphere/metrics-server","kubesphere/netshoot","kubesphere/nfs-client-provisioner","kubesphere/nginx-ingress-controller","kubesphere/node-disk-manager","kubesphere/node-disk-operator","kubesphere/node-exporter","kubesphere/nodejs-4-centos7","kubesphere/nodejs-6-centos7","kubesphere/nodejs-8-centos7","kubesphere/notification","kubesphere/notification-manager","kubesphere/notification-manager-operator","kubesphere/pause","kubesphere/prometheus-config-reloader","kubesphere/prometheus-operator","kubesphere/provisioner-localpv","kubesphere/python-27-centos7","kubesphere/python-34-centos7","kubesphere/python-35-centos7","kubesphere/python-36-centos7","kubesphere/s2i-binary","kubesphere/s2ioperator","kubesphere/s2irun","kubesphere/tomcat85-java11-centos7"]}
With the preparation above complete and the configuration file double-checked, proceed with the deployment.
Running the Deployment
Note: confirm that the VMs' NAT network is now disabled, i.e. the enp0s3 interface no longer exists inside the VM instances. This is important; otherwise, when the Calico network plugin is installed later, it may well bind to an unsuitable IP address. If that happens, Calico's interface auto-detection can be pinned, as sketched below.
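A hedged sketch, assuming the Host-Only NIC is enp0s8 and that kubekey deployed the calico-node DaemonSet into kube-system (confirm with kubectl -n kube-system get ds):
# Pin Calico's IP auto-detection to the Host-Only interface (enp0s8 is an assumption; check with `ip addr`)
kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=enp0s8
# The environment change triggers a rolling update of the calico-node pods
kubectl -n kube-system rollout status daemonset/calico-node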
1. Check the dependency status of the nodes
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3 | y | y | y | y | | y | | y | | | | EDT 10:21:39 |
| node1 | y | y | y | y | | y | | y | | | | EDT 10:21:40 |
| node2 | y | y | y | y | | y | | y | | | | EDT 10:21:39 |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
2. Install the missing dependencies (they are missing on all three nodes; a sketch for installing them on node2 and node3 remotely follows the commands below)
cd /root/kubesphere-all-v3.0.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms
[root@localhost centos-7-amd64-rpms]# yum localinstall -y socat-1.7.3.2-2.el7.x86_64.rpm
[root@localhost centos-7-amd64-rpms]# yum localinstall -y conntrack-tools-1.4.4-7.el7.x86_64.rpm
[root@localhost centos-7-amd64-rpms]# yum localinstall -y nfs-utils-1.3.0-0.66.el7_8.x86_64.rpm
[root@localhost centos-7-amd64-rpms]# yum localinstall -y ceph-common-10.2.5-4.el7.x86_64.rpm
[root@localhost centos-7-amd64-rpms]# yum localinstall -y glusterfs-client-xlators-6.0-29.el7.x86_64.rpm
[root@localhost centos-7-amd64-rpms]# yum localinstall -y glusterfs-6.0-29.el7.x86_64.rpm
[root@localhost centos-7-amd64-rpms]# yum localinstall -y glusterfs-fuse-6.0-29.el7.x86_64.rpm
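A sketch for node2 and node3 that pushes the same rpms from the master's dependencies directory; it assumes root SSH access to the other nodes and that any remaining dependencies resolve from the configured yum repositories:
# Copy the missing packages to the other nodes and install them there
for ip in 192.168.56.109 192.168.56.110; do
  scp socat-*.rpm conntrack-tools-*.rpm nfs-utils-*.rpm ceph-common-*.rpm glusterfs-*.rpm root@"$ip":/tmp/
  ssh root@"$ip" 'yum localinstall -y /tmp/socat-*.rpm /tmp/conntrack-tools-*.rpm /tmp/nfs-utils-*.rpm /tmp/ceph-common-*.rpm /tmp/glusterfs-*.rpm'
done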
3. Run the deployment again
[root@node1 kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3 | y | y | y | y | y | y | y | y | y | y | y | EDT 10:53:47 |
| node1 | y | y | y | y | y | y | y | y | y | y | y | EDT 10:53:47 |
| node2 | y | y | y | y | y | y | y | y | y | y | y | EDT 10:53:47 |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[10:53:49 EDT] Downloading Installation Files
INFO[10:53:49 EDT] Downloading kubeadm ...
INFO[10:53:49 EDT] Downloading kubelet ...
INFO[10:53:50 EDT] Downloading kubectl ...
INFO[10:53:50 EDT] Downloading kubecni ...
INFO[10:53:50 EDT] Downloading helm ...
INFO[10:53:51 EDT] Configurating operating system ...
[node2 192.168.56.109] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node1 192.168.56.108] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node3 192.168.56.110] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[10:53:54 EDT] Installing docker ...
INFO[10:53:55 EDT] Start to download images on all nodes
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node3] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node2] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node3] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.17.9
[node2] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.17.9
[node3] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.17.9
[node3] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node1] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
INFO[10:53:59 EDT] Generating etcd certs
INFO[10:54:01 EDT] Synchronizing etcd certs
INFO[10:54:01 EDT] Creating etcd service
INFO[10:54:05 EDT] Starting etcd cluster
[node1 192.168.56.108] MSG:
Configuration file already exists
Waiting for etcd to start
INFO[10:54:13 EDT] Refreshing etcd configuration
INFO[10:54:13 EDT] Backup etcd data regularly
INFO[10:54:14 EDT] Get cluster status
[node1 192.168.56.108] MSG:
Cluster will be created.
INFO[10:54:14 EDT] Installing kube binaries
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.108:/tmp/kubekey/kubeadm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.110:/tmp/kubekey/kubeadm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.109:/tmp/kubekey/kubeadm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.108:/tmp/kubekey/kubelet Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.108:/tmp/kubekey/kubectl Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.108:/tmp/kubekey/helm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.110:/tmp/kubekey/kubelet Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.108:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.109:/tmp/kubekey/kubelet Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.110:/tmp/kubekey/kubectl Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.109:/tmp/kubekey/kubectl Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.110:/tmp/kubekey/helm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.109:/tmp/kubekey/helm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.109:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.110:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[10:54:32 EDT] Initializing kubernetes cluster
[node1 192.168.56.108] MSG:
W1002 10:54:33.546978 7304 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W1002 10:54:33.547575 7304 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:54:33.547601 7304 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local node1 node1.cluster.local node2 node2.cluster.local node3 node3.cluster.local] and IPs [10.233.0.1 10.0.2.15 127.0.0.1 192.168.56.108 192.168.56.109 192.168.56.110 10.233.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1002 10:54:39.078002 7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1002 10:54:39.089428 7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1002 10:54:39.091411 7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.007113 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rajfez.t9320hox3sddbowz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz \
--discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz \
--discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
[node1 192.168.56.108] MSG:
node/node1 untainted
[node1 192.168.56.108] MSG:
node/node1 labeled
[node1 192.168.56.108] MSG:
service "kube-dns" deleted
[node1 192.168.56.108] MSG:
service/coredns created
[node1 192.168.56.108] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[node1 192.168.56.108] MSG:
configmap/nodelocaldns created
[node1 192.168.56.108] MSG:
I1002 10:55:34.720063 9901 version.go:251] remote version is much newer: v1.19.2; falling back to: stable-1.17
W1002 10:55:36.884062 9901 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:55:36.884090 9901 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a9a0daeedbefb4b9a014f4b258b9916403f7136bea20d28ec03aa926c41fcb3e
[node1 192.168.56.108] MSG:
secret/kubeadm-certs patched
[node1 192.168.56.108] MSG:
secret/kubeadm-certs patched
[node1 192.168.56.108] MSG:
secret/kubeadm-certs patched
[node1 192.168.56.108] MSG:
W1002 10:55:37.738867 10303 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:55:37.738964 10303 validation.go:28] Cannot validate kubelet config - no validator is available
kubeadm join lb.kubesphere.local:6443 --token 025byf.2t2mvldlr9wm1ycx --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
[node1 192.168.56.108] MSG:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 NotReady master,worker 34s v1.17.9 192.168.56.108 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.4
INFO[10:55:38 EDT] Deploying network plugin ...
[node1 192.168.56.108] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[10:55:40 EDT] Joining nodes to cluster
[node3 192.168.56.110] MSG:
W1002 10:55:41.544472 12557 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1002 10:55:43.067290 12557 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node2 192.168.56.109] MSG:
W1002 10:55:41.963749 8533 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1002 10:55:43.520053 8533 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node3 192.168.56.110] MSG:
node/node3 labeled
[node2 192.168.56.109] MSG:
node/node2 labeled
INFO[10:55:54 EDT] Congradulations! Installation is successful.
The cluster has now been deployed successfully.
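A quick sanity check on node1 (the installation log above already ran kubectl there, so the kubeconfig is in place):
# All three nodes should eventually report Ready
kubectl get nodes -o wide
# The core components (calico, coredns, kube-proxy, nodelocaldns, ...) should all reach Running
kubectl get pods --all-namespaces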
Troubleshooting
1. Worker nodes hang while downloading images during cluster creation
While the cluster-creation step is running, the worker nodes cannot pull any images.
After confirming on the master node that the registry container is running normally, test from the worker nodes whether the local registry is reachable:
[root@localhost ~]# curl -XGET https://dockerhub.kubekey.local/v2/_catalog
curl: (7) Failed connect to dockerhub.kubekey.local:443; Connection refused
Since the image registry lives on the master node, disable the firewall on the master and check the hosts configuration on the worker nodes:
# add to /etc/hosts on node2 and node3
192.168.56.108 dockerhub.kubekey.local
After the change, test connectivity again:
[root@localhost ~]# curl -XGET https://dockerhub.kubekey.local/v2/_catalog --cacert /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt
{"repositories":[]}
If the returned repository list is empty, check that the registry's data volume is mounted correctly and that the host directory backing it has enough free space; if it does not, expand the filesystem as described in "Expanding a VirtualBox CentOS virtual disk file".
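A few quick checks on the master (registry) node that cover these cases (they assume the registry container's name contains "registry" and the default /mnt/registry mount point):
# Is the registry container up?
docker ps --filter name=registry
# Is anything listening on port 443?
ss -ntlp | grep ':443'
# Does the volume behind the registry still have free space?
df -h /mnt/registry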
2. CPU count error
......
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=192.168.56.108
WARN[10:40:18 EDT] Task failed ...
WARN[10:40:18 EDT] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
kk create cluster [flags]
Flags:
-f, --filename string Path to a configuration file
-h, --help help for cluster
--skip-pull-images Skip pre pull images
--with-kubernetes string Specify a supported version of kubernetes
--with-kubesphere Deploy a specific version of kubesphere (default v3.0.0)
-y, --yes Skip pre-check of the installation
Global Flags:
--debug Print detailed information (default true)
Failed to init kubernetes cluster: interrupted by error
Running sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml" on the master shows that the VM does not meet the 2-CPU requirement; increase the vCPU count and re-create the cluster.
3. TLS-related errors during installation
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux temporarily
setenforce 0
# Disable SELinux permanently (takes effect after a reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
4. Other timeout problems
Even with the firewall disabled as in problem 3, puzzling timeouts and hangs could still occur during installation. top showed the kswapd process consuming a large share of the CPU, which pointed to the affected node having only 1 GB of memory; after raising it to 2 GB, the deployment was re-run successfully.
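A couple of quick checks that point to memory pressure rather than to a network or firewall problem:
# Overall memory and swap usage
free -h
# Sustained non-zero si/so columns mean the node is swapping
vmstat 1 5
# kswapd0 near the top of the CPU ranking is the telltale sign
ps -eo pid,comm,%cpu --sort=-%cpu | head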