Deploying a Kubernetes v1.18.5 Cluster with kubeadm


Notes

These are my notes from building a kubernetes v1.18.5 cluster on three CentOS virtual machines used as test hosts. kubeadm, kubelet, and kubectl were all installed via yum, and flannel was chosen as the network add-on.

Mistakes are hard to avoid; if you spot any, please leave a comment and let me know.

If you reprint this article, please credit the original source: https://www.cnblogs.com/hellxz/p/use-kubeadm-init-kubernetes-cluster.html

Environment Preparation

Unless otherwise noted, all deployment commands are run as root.

Hardware

ip              hostname     mem   disk   role
192.168.87.145  kube-master  4 GB  20 GB  k8s control-plane node
192.168.87.146  kube-node1   4 GB  20 GB  k8s worker node 1
192.168.87.147  kube-node2   4 GB  20 GB  k8s worker node 2

Software

software version
CentOS CentOS Linux release 7.7.1908 (Core)
Kubernetes v1.18.5
Docker 19.03.12

Sanity-Check the Environment

purpose                                       commands
Ensure all cluster nodes can reach each other  ping -c 3 <ip>
Ensure MAC addresses are unique                ip link or ifconfig -a
Ensure hostnames are unique in the cluster     check with hostnamectl status, change with hostnamectl set-hostname <hostname>
Ensure system product uuids are unique         dmidecode -s system-uuid or sudo cat /sys/class/dmi/id/product_uuid
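If you want to gather these from one place, a quick loop over ssh works. This is just a convenience sketch, assuming the hostnames resolve (or substitute the IPs from the table); until key trust is configured below, you will be prompted for each host's password:

# Collect hostname, MAC addresses, and product uuid from every node (sketch)
for host in kube-master kube-node1 kube-node2; do
  echo "== $host =="
  ssh root@$host "hostnamectl status | grep 'Static hostname'; ip link | grep ether; cat /sys/class/dmi/id/product_uuid"
done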

Reference commands for changing a MAC address:

ifconfig eth0 down
ifconfig eth0 hw ether 00:0C:18:EF:FF:ED
ifconfig eth0 up

If the product_uuid values are not unique, consider reinstalling CentOS.

Ensure the Required Ports Are Open

Port check on the kube-master node:

Protocol Direction Port Range Purpose
TCP Inbound 6443* kube-api-server
TCP Inbound 2379-2380 etcd API
TCP Inbound 10250 Kubelet API
TCP Inbound 10251 kube-scheduler
TCP Inbound 10252 kube-controller-manager

Port check on the kube-node* nodes:

Protocol Direction Port Range Purpose
TCP Inbound 10250 Kubelet API
TCP Inbound 30000-32767 NodePort Services

If you are not confident in your hosts' firewall configuration, you can simply turn the firewall off:

systemctl disable --now firewalld

Or flush the iptables rules (use with caution):

iptables -F
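Alternatively, if you want to keep firewalld running, you can open only the ports from the tables above. A sketch for kube-master (workers need 10250 and 30000-32767 instead):

# Open the control-plane ports on kube-master
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --reload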

Configure Mutual Trust Between Hosts

Add the hosts mapping on each node:

cat >> /etc/hosts <<EOF
192.168.87.145 kube-master
192.168.87.146 kube-node1
192.168.87.147 kube-node2
EOF

 

Generate an ssh key on kube-master and distribute the public key to each node:

# Generate an ssh key; just press Enter at every prompt
ssh-keygen -t rsa
# Copy the public key to each node's trusted list; enter each host's password when prompted
ssh-copy-id root@kube-master
ssh-copy-id root@kube-node1
ssh-copy-id root@kube-node2
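Optionally confirm passwordless login works before moving on:

# Each command should print the remote hostname with no password prompt
ssh root@kube-node1 hostname
ssh root@kube-node2 hostname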

 

Disable swap

Swap uses disk blocks as overflow memory only when RAM runs out, and disk I/O is far slower than RAM; disable swap for performance (kubelet also refuses to start with swap enabled by default).

Run on every node:

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
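Verify that swap is gone:

# The Swap line should show all zeros
free -h
# The fstab swap entry should now be commented out
grep swap /etc/fstab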

Disable SELinux

Disable SELinux; otherwise kubelet may fail with Permission denied when mounting directories. It can be set to permissive or disabled; permissive still logs warnings.

Run on every node:

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
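setenforce 0 takes effect immediately; the config edit applies after the next reboot. Verify with:

# Prints Permissive now, Disabled after a reboot
getenforce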

Set the Time Zone and Synchronize the Clock

timedatectl set-timezone Asia/Shanghai
systemctl enable --now chronyd

Check the sync status:

timedatectl status

 

Output:

System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
  • System clock synchronized: yes means the clock is synchronized;
  • NTP service: active means the time-sync service is running.

# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0

# Restart services that depend on the system time
systemctl restart rsyslog && systemctl restart crond
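To double-check that chrony is actually syncing against its sources:

# List the NTP sources chrony is polling
chronyc sources
# Show the current offset and sync state
chronyc tracking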

Deploy docker

Docker must be installed on every node.

Add the docker yum repo

# Install required dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun docker-ce yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Rebuild the yum cache
yum makecache fast

 

Install docker

# List the available docker-ce versions
yum list docker-ce.x86_64 --showduplicates | sort -r

# Install a specific docker version
yum install -y docker-ce-19.03.12-3.el7

 

This example installs version 19.03.12; note that the version string passed to yum does not include the ':' or the number before it shown by yum list.

Ensure the network modules load at boot

lsmod | grep overlay
lsmod | grep br_netfilter

 

If the commands above print nothing or report that the file does not exist, run:

cat > /etc/modules-load.d/docker.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

 

Make bridged traffic visible to iptables

Run on every node:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

 

Verify the settings took effect; both commands should return 1:

sysctl -n net.bridge.bridge-nf-call-iptables
sysctl -n net.bridge.bridge-nf-call-ip6tables

 

Configure docker

$ mkdir /etc/docker

# Set the cgroup driver to systemd (recommended by k8s upstream), cap container
# log size, and set the storage driver; the data-root at the end can be changed
$ cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://7uuu3esz.mirror.aliyuncs.com"],
  "data-root": "/data/docker"
}
EOF

# Enable docker at boot and start it now
$ systemctl enable --now docker

 

Verify docker works

# Inspect docker info and check it matches the configuration
docker info
# Run the hello-world smoke test
docker run --rm hello-world
# Remove the test image
docker rmi hello-world

 

Add a user to the docker group

This lets non-root users run docker commands without sudo.

# Add the user to the docker group
usermod -aG docker <USERNAME>

# Apply the docker group to the current session immediately
newgrp docker

Deploy the kubernetes cluster

Unless otherwise noted, run the following steps on every node.

Add the kubernetes yum repo

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Rebuild the yum cache; enter y to accept the GPG keys
yum makecache fast

 

Install kubeadm, kubelet, and kubectl

kubeadm and kubelet are required on every node; kubectl only needs to be installed on kube-master (it cannot talk to the cluster from a worker node by default, so it can be skipped there).

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet

 

Configure command auto-completion

# Install the bash-completion package
yum install bash-completion -y
# Set up kubectl and kubeadm completion; takes effect at the next login
kubectl completion bash > /etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
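To get completion in the current shell without logging out again, source it directly:

source <(kubectl completion bash)
source <(kubeadm completion bash)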

 

Pre-pull the kubernetes images

Because of network restrictions in China, the kubernetes images have to be pulled from mirror sites or from copies pushed to Docker Hub by other users.

# List the images a given k8s version needs
kubeadm config images list --kubernetes-version v1.18.5

 

Also, because the Aliyun mirror has not yet synced v1.18.5 (its newest synced version is v1.18.3), the images are pulled from Docker Hub here. To pull the v1.18.3 images from Aliyun instead, see https://www.cnblogs.com/hellxz/p/13204093.html

In /root/k8s, create a script get-k8s-images.sh with the following content:

#!/bin/bash
# Script For Quick Pull K8S Docker Images
# by Hellxz Zhang <hellxz001@foxmail.com>

KUBE_VERSION=v1.18.5
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0

# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION  k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

# remove the original tags; the underlying images are not deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

 

Make the script executable, then run it to pull the images:

chmod +x get-k8s-images.sh
./get-k8s-images.sh

When the pulls finish, run docker images to verify.

Initialize kube-master

Only the kube-master node performs this step.

Set the kubelet default cgroup driver

cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

systemctl restart kubelet

 

Generate a kubeadm init configuration file [optional; only needed when customizing the init configuration]

kubeadm config print init-defaults > init.default.yaml
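If you do customize, a minimal config that mirrors the command-line flags used below might look like this. This is only a sketch, assuming the v1beta2 kubeadm API that ships with 1.18; the file name kubeadm-init.yaml matches the optional --config argument in the commands that follow:

# Minimal sketch of a custom init config
cat > kubeadm-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.5
networking:
  podSubnet: 10.244.0.0/16
EOF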

 

Test that the environment is ready (WARNINGs are normal)

kubeadm init phase preflight [--config kubeadm-init.yaml]

 

The Warnings in the preflight output are normal: the Kubernetes version check fails because the blocked site is unreachable, and the last warning appeared because I had not disabled my local firewall; double-check that the required ports listed earlier are open.

Initialize the master. 10.244.0.0/16 is the Pod CIDR flannel uses by default; the value depends on the requirements of your network add-on.

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.5 [--config kubeadm-init.yaml]

The output is as follows:

[root@kube-master k8s]# kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.5
W0703 18:49:19.076654   16469 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.87.145]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [192.168.87.145 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [192.168.87.145 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0703 18:49:23.039913   16469 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0703 18:49:23.040907   16469 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.505101 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kube-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 2b7cfv.6bhz4z3a3vzyg498
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.87.145:6443 --token 2b7cfv.6bhz4z3a3vzyg498 \
    --discovery-token-ca-cert-hash sha256:79bd63d82634f9953cc9d6b5a923fa87c973f0c3fd9ed7270167052dd834c026

 

Grant kubectl access to the user who operates the cluster day to day

su hellxz
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/admin.conf
sudo chown $(id -u):$(id -g) $HOME/.kube/admin.conf
echo "export KUBECONFIG=$HOME/.kube/admin.conf" >> ~/.bashrc
exit

 

Configure master authentication

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
. /etc/profile

 

Without this configuration, kubectl will complain:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

At this point the master node is initialized, but without a network add-on installed it cannot yet communicate with the other nodes.

Install a network add-on, flannel in this example

cd ~/k8s
yum install -y wget
# Download the latest flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
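Then watch for the flannel and coredns pods to reach Running; with this manifest version the flannel DaemonSet lands in kube-system (newer manifests use a dedicated namespace):

kubectl -n kube-system get pods -o wide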

 

Check the kube-master node status

kubectl get nodes

 

If STATUS shows NotReady, run kubectl describe node kube-master for details; on slower machines it can take a while to reach Ready.
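You can also watch the status change instead of polling by hand:

# Streams node status updates; Ctrl-C to exit
kubectl get nodes -w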

Back up the images for the other nodes

Export the images on kube-master so they can be shipped to the other worker nodes later (a private image registry would of course be better).

docker save k8s.gcr.io/kube-proxy:v1.18.5 \
            k8s.gcr.io/kube-apiserver:v1.18.5 \
            k8s.gcr.io/kube-controller-manager:v1.18.5 \
            k8s.gcr.io/kube-scheduler:v1.18.5 \
            k8s.gcr.io/pause:3.2 \
            k8s.gcr.io/coredns:1.6.7 \
            k8s.gcr.io/etcd:3.4.3-0 > k8s-imagesV1.18.5.tar

 

Initialize the kube-node* nodes and join the cluster

Copy the images to the worker nodes; kube-node1 is shown here, kube-node2 is the same.

# Run these commands on the kube-node* node
mkdir ~/k8s
scp root@kube-master:/root/k8s/k8s-imagesV1.18.5.tar ~/k8s
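After copying, load the images into docker on each node so kubeadm join will not try to pull them:

# Import the saved images on the node
docker load -i ~/k8s/k8s-imagesV1.18.5.tar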

 

Get the cluster join command (skip if you still have it)

The kubeadm init output on kube-master ended with the join command. What if you didn't write it down?

On kube-master, create a new token; the command also prints the matching join command:

kubeadm token create --print-join-command

 

Run the join command on each kube-node* node

kubeadm join 192.168.87.145:6443 --token jdyzyq.icwlpkm36kgs6nqh     --discovery-token-ca-cert-hash sha256:24f9b05fa10307ef6fff4132e0ec3c8b54917d4ff440b36108908aca588d8be7 

 

Check the cluster node status

kubectl get nodes
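For reference, once both workers have joined and flannel is running on them, the output should look roughly like this (ages and exact timing are illustrative):

NAME          STATUS   ROLES    AGE   VERSION
kube-master   Ready    master   30m   v1.18.5
kube-node1    Ready    <none>   5m    v1.18.5
kube-node2    Ready    <none>   4m    v1.18.5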

 
