Deploying a Kubernetes (k8s) Cluster with Containerd


Last December, when the Kubernetes community announced that dockershim would be gradually deprecated after version 1.20, many bloggers framed it as "Kubernetes is dropping Docker". In my view that is misleading, perhaps just a way to chase clicks.

dockershim is a Kubernetes component whose job is to drive Docker. Docker appeared in 2013, while Kubernetes arrived in 2014, so early Docker was not designed with orchestration in mind and could not have anticipated a behemoth like Kubernetes (had it known, it might not have lost that battle so quickly...). Kubernetes, however, was built with Docker as its container runtime, and much of its logic was written specifically against Docker. As the community matured and wanted to support more container runtimes, the Docker-specific logic was split out into dockershim.

Because of this, any change in Kubernetes or in Docker requires maintaining dockershim to keep everything supported. But driving Docker through dockershim ultimately just drives Docker's own underlying runtime, Containerd, and Containerd itself supports CRI (the Container Runtime Interface). So why go through the extra Docker layer at all? Why not talk to Containerd directly over CRI? That is presumably one of the reasons the community wants to deprecate dockershim.

So what exactly is Containerd?

Containerd is a project that was split out of Docker to provide a container runtime for Kubernetes; it manages the lifecycle of images and containers. Containerd can also work entirely on its own, without Docker (see the short ctr example after the feature list). Its main features are:

  • Supports the OCI image specification
  • Supports the OCI runtime specification (runc is the reference implementation)
  • Supports pulling images
  • Supports container network management
  • Multi-tenant storage support
  • Manages the container runtime and container lifecycle
  • Supports managing network namespaces
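
As a quick illustration of that independence, containerd's bundled CLI ctr can pull and run an image with no Docker daemon involved at all. This is a minimal sketch, assuming containerd is already installed and running (installation is covered later in this article):

$ ctr images pull docker.io/library/alpine:latest
$ ctr run --rm -t docker.io/library/alpine:latest demo sh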

The main command-level differences between Containerd (via crictl) and Docker are as follows:

Function                Docker                      Containerd (crictl)
List local images       docker images               crictl images
Pull an image           docker pull                 crictl pull
Push an image           docker push                 (not supported)
Delete a local image    docker rmi                  crictl rmi
Inspect an image        docker inspect IMAGE-ID     crictl inspecti IMAGE-ID
List containers         docker ps                   crictl ps
Create a container      docker create               crictl create
Start a container       docker start                crictl start
Stop a container        docker stop                 crictl stop
Delete a container      docker rm                   crictl rm
Inspect a container     docker inspect              crictl inspect
Attach to a container   docker attach               crictl attach
Exec into a container   docker exec                 crictl exec
View logs               docker logs                 crictl logs
View stats              docker stats                crictl stats

As you can see, usage is largely the same.
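
For example, once crictl is pointed at containerd (we do this later in the article), pulling and inspecting an image is an almost one-to-one translation of the Docker workflow. A small sketch:

$ crictl pull docker.io/library/nginx:1.20
$ crictl images
$ crictl inspecti docker.io/library/nginx:1.20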

Below are the concrete steps for installing a Kubernetes cluster with kubeadm, using containerd as the container runtime.

Environment

Host nodes

IP address        OS           Kernel
192.168.1.206     CentOS 7.6   3.10
192.168.1.207     CentOS 7.6   3.10

Software

Software        Version
kubernetes      1.20.5
containerd      1.4.4

Environment preparation

(1) Add hosts entries on every node:

$ cat /etc/hosts

192.168.1.206 k8s-master01
192.168.1.207 k8s-node01

(2) Disable the firewall:

$ systemctl stop firewalld
$ systemctl disable firewalld

(3) Disable SELinux:

$ setenforce 0
$ cat /etc/selinux/config
SELINUX=disabled
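
Note that setenforce 0 only disables SELinux for the current boot; the SELINUX=disabled setting in /etc/selinux/config shown above is what makes it persistent. If the file has not been edited yet, one common way to do it is:

$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config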

(4) Create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
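
For convenience, the file can be created in a single step with a heredoc (same content as above):

$ cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF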

(5) Run the following commands to apply the changes:

$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf

(6) Load the ipvs kernel modules:

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the kernel modules have been loaded correctly.

(7) Install the ipset package:

$ yum install ipset -y

To make it easier to inspect the ipvs proxy rules, also install the management tool ipvsadm:

$ yum install ipvsadm -y

(8) Synchronize server time:

$ yum install chrony -y
$ systemctl enable chronyd
$ systemctl start chronyd
$ chronyc sources

(9) Turn off the swap partition:

$ swapoff -a

(10) Edit /etc/fstab and comment out the automatic swap mount, then use free -m to confirm that swap is off. Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
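
One common way to comment out the swap entry in /etc/fstab (double-check the file afterwards) and confirm the result is:

$ sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab
$ free -m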

(11) Now install Containerd:

$ yum install -y yum-utils \
 device-mapper-persistent-data \
 lvm2
$ yum-config-manager \
 --add-repo \
 https://download.docker.com/linux/centos/docker-ce.repo
$ yum list | grep containerd

Pick a version to install; here we install the latest available at the time of writing:

$ yum install containerd.io-1.4.4 -y

(12) Generate the containerd configuration file:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# adjust the configuration
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g"  /etc/containerd/config.toml
sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g"  /etc/containerd/config.toml

(13) Start Containerd:

systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd
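
A quick sanity check that the daemon is up (ctr ships with the containerd.io package):

$ systemctl status containerd
$ ctr version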

With Containerd installed and the environment configured as above, we can now install kubeadm. Here we install it from a yum repository, using the Alibaba Cloud mirror:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
 http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

然后安裝 kubeadm、kubelet、kubectl(我安裝的是最新版,有版本要求自己設定版本):

$  yum install -y kubelet-1.20.5 kubeadm-1.20.5 kubectl-1.20.5

Point crictl at the containerd runtime:

$ crictl config runtime-endpoint /run/containerd/containerd.sock
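
This command simply writes the endpoint into /etc/crictl.yaml, which should now contain roughly the following; if crictl later warns about the image endpoint, it can be set the same way with crictl config image-endpoint:

$ cat /etc/crictl.yaml
runtime-endpoint: /run/containerd/containerd.sock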

You can see we installed v1.20.5 here. Now set kubelet to start on boot:

$ systemctl daemon-reload
$ systemctl enable kubelet && systemctl start kubelet

Everything up to this point must be performed on all nodes.

Initializing the cluster

Initialize the master

然后接下來在 master 節點配置 kubeadm 初始化文件,可以通過如下命令導出默認的初始化配置:

$ kubeadm config print init-defaults > kubeadm.yaml

然后根據我們自己的需求修改配置,比如修改 imageRepository 的值,kube-proxy 的模式為 ipvs,需要注意的是由於我們使用的containerd作為運行時,所以在初始化節點的時候需要指定cgroupDriversystemd【1】

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.206
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock 
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
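
With the configuration in place, the required images can optionally be pre-pulled so that the init step itself runs faster (kubeadm also hints at this in its preflight output):

$ kubeadm config images pull --config kubeadm.yaml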

然后使用上面的配置文件進行初始化:

$ kubeadm init --config=kubeadm.yaml

[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.5]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 70.001862 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.5:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:446623b965cdb0289c687e74af53f9e9c2063e854a42ee36be9aa249d3f0ccec

Copy the kubeconfig file:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add worker nodes

Remember to complete all of the preparation steps above on the node first: copy the $HOME/.kube/config file from the master node to the corresponding location on the node, install kubeadm, kubelet and kubectl, and then run the join command printed at the end of the init output:

# kubeadm join 192.168.0.5:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:446623b965cdb0289c687e74af53f9e9c2063e854a42ee36be9aa249d3f0ccec 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If you have lost the join command above, it can be regenerated with kubeadm token create --print-join-command.

Once the join succeeds, run the get nodes command:

$ kubectl get no
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   29m   v1.20.5
k8s-node01   NotReady   <none>                 28m   v1.20.5

Both nodes show NotReady because no network plugin has been installed yet. Next, install a network plugin; you can choose one from the documentation at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/. Here we install Calico:

$ wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml

# some nodes have multiple NICs, so the internal NIC must be specified in the manifest

$ vi calico.yaml

......
spec:
  containers:
  - env:
    - name: DATASTORE_TYPE
      value: kubernetes
    - name: IP_AUTODETECTION_METHOD   # add this environment variable to the calico-node DaemonSet
      value: interface=eth0           # specify the internal NIC
    - name: WAIT_FOR_DATASTORE
      value: "true"
    - name: CALICO_IPV4POOL_CIDR      # must match the 172.16.0.0/16 pod subnet configured at init
      value: "172.16.0.0/16"
......

Install the Calico network plugin:

$ kubectl apply -f calico.yaml

Check the Pod status after a little while:

# kubectl get pod -n kube-system 
NAME                                      READY   STATUS              RESTARTS   AGE
calico-kube-controllers-bcc6f659f-zmw8n   0/1     ContainerCreating   0          7m58s
calico-node-c4vv7                         1/1     Running             0          7m58s
calico-node-dtw7g                         0/1     PodInitializing     0          7m58s
coredns-54d67798b7-mrj2b                  1/1     Running             0          46m
coredns-54d67798b7-p667d                  1/1     Running             0          46m
etcd-k8s-master                           1/1     Running             0          46m
kube-apiserver-k8s-master                 1/1     Running             0          46m
kube-controller-manager-k8s-master        1/1     Running             0          46m
kube-proxy-clf4s                          1/1     Running             0          45m
kube-proxy-mt7tt                          1/1     Running             0          46m
kube-scheduler-k8s-master                 1/1     Running             0          46m

The network plugin is running and the node status is now normal:

# kubectl get nodes 
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   47m   v1.20.5
k8s-node01   Ready    <none>                 46m   v1.20.5

Add any further nodes in the same way.
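
Since kube-proxy was configured in ipvs mode, a quick sanity check with ipvsadm (installed earlier) should show virtual servers for the cluster Services, e.g. the kubernetes API Service at 10.96.0.1:443:

$ ipvsadm -Ln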

Configure command auto-completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

References

[1] https://github.com/containerd/containerd/issues/4857

[2] https://github.com/containerd/containerd

Original article: https://mp.weixin.qq.com/s/Qw01WMGlZx72YieOBG6YvQ

