Deploying Kubernetes v1.17.0 with kubeadm


As of this writing (January 2, 2020), the latest Kubernetes release is v1.17, so I am putting together an installation guide for the newest version for future reference.

Kubernetes offers three official deployment methods:

1. minikube: Minikube is a tool that quickly runs a single-node Kubernetes cluster locally, for users trying out Kubernetes or doing day-to-day development. It cannot be used in production. Official docs: https://kubernetes.io/docs/setup/minikube/

2. kubeadm: kubeadm is a tool that provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster. Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

3. Binary packages: download the official release binaries and deploy each component by hand to assemble a Kubernetes cluster.

 

The kubeadm installation guide

Unless stated otherwise, run the steps below on all hosts.

Preparation

  1. Configure IP, DNS, hostname, and the hosts file
  2. Disable the firewall, SELinux, and the swap partition
  3. Install dependency packages
  4. Synchronize time
  5. Tune kernel parameters

Environment

OS: CentOS Linux release 7.6.1810 (Core)

Docker: 19.03.5

Kubernetes: v1.17.0

Hostnames and IPs:

hostname    ip
master01    192.168.1.230
node01      192.168.1.241
node02      192.168.1.242

Sync the hosts file to all hosts:

cat <<EOF >>/etc/hosts

192.168.1.230 master01
192.168.1.241 node01
192.168.1.242 node02

EOF
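Optionally, verify that name resolution works from each host:

ping -c 1 master01
ping -c 1 node01
ping -c 1 node02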

Disable the firewall, SELinux, and swap

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
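A quick sanity check before moving on (the sed edits above only take full effect after a reboot, so getenforce reports Permissive rather than Disabled for now):

systemctl is-active firewalld   # expect "inactive" (or "unknown")
getenforce                      # expect "Permissive" ("Disabled" after a reboot)
free -h | grep -i swap          # expect 0B swap total/used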

Install dependency packages

Install the dependencies on every machine:

yum install -y epel-release
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget

Synchronize time

Sync the clock on every machine:

ntpdate time1.aliyun.com
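ntpdate performs a one-shot sync. A minimal sketch to keep the clocks in sync afterwards, assuming crond is running (the 30-minute schedule is just an example, adjust to taste):

# re-sync every 30 minutes via root's crontab
echo "*/30 * * * * /usr/sbin/ntpdate time1.aliyun.com >/dev/null 2>&1" >> /var/spool/cron/root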

Load kernel modules

modprobe ip_vs_rr
modprobe br_netfilter
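modprobe only loads the modules for the current boot; to have them reloaded automatically after a reboot, you can register them with systemd's modules-load.d mechanism (standard on CentOS 7):

cat > /etc/modules-load.d/kubernetes.conf << EOF
ip_vs_rr
br_netfilter
EOF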

Tune kernel parameters

cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# vm.swappiness=0: do not use swap; only allow it when the system is OOM
vm.swappiness=0
# vm.overcommit_memory=1: do not check whether physical memory is sufficient
vm.overcommit_memory=1
# vm.panic_on_oom=0: let the OOM killer run instead of panicking
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

sysctl -p /etc/sysctl.d/kubernetes.conf
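Confirm the key settings took effect (br_netfilter must already be loaded for the bridge keys to exist):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# expected output:
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1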

 

Install Kubernetes and Docker

  1. Install k8s and Docker
  2. Add the k8s and Docker yum repos on all nodes
  3. Install Docker with yum and start it
  4. Install kubeadm, kubelet, and kubectl with yum
  5. Initialize the master node, deploy the network plugin, and register the worker nodes with the master

Run the following on every machine.

Add the Kubernetes yum repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
EOF

Add the Docker yum repo

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker

yum -y install docker-ce

Start Docker, then configure a registry mirror and restart it:

systemctl enable docker
systemctl start docker

cat <<EOF > /etc/docker/daemon.json
{
"registry-mirrors": ["https://dlbpv56y.mirror.aliyuncs.com"]
}
EOF

systemctl restart docker
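Optional: the kubeadm preflight output below warns that Docker's default cgroupfs cgroup driver is not recommended. A hedged variant of daemon.json that also switches Docker to the systemd driver (kubeadm v1.17 detects Docker's cgroup driver and configures the kubelet to match, but verify on your own setup):

cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://dlbpv56y.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl restart docker
docker info | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd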

Install kubeadm, kubelet, and kubectl (kubectl does not have to be installed on every node)

yum -y install kubelet kubeadm kubectl
systemctl enable kubelet
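Note that the repo installs the latest packages; to pin exactly the v1.17.0 versions used in this walkthrough (assuming the mirror still carries them), specify the versions explicitly:

yum -y install kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0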

Deploy the Kubernetes master

Run this on the master node only (replace the IP with the master IP of your own environment; this step takes a while, about 10 minutes in my case):

kubeadm init --apiserver-advertise-address=192.168.1.230 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

Output:

[root@master01 ~]# kubeadm init --apiserver-advertise-address=192.168.1.230 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
W0102 10:56:48.971892   16147 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0102 10:56:48.972021   16147 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.1.230]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.1.230 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.1.230 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0102 11:05:25.166035   16147 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0102 11:05:25.167026   16147 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.502566 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: h21v01.ca56fof5m8myjy3e
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.230:6443 --token h21v01.ca56fof5m8myjy3e \
    --discovery-token-ca-cert-hash sha256:4596521eed7d2daf11832be58b03bee46b9c248829ce31886d40fe2e997b1919 

Per the output, a few more steps are needed:

1. Before using the cluster, configure kubectl on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
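Alternatively, if you are working as root, pointing KUBECONFIG at the admin config works too (as the kubeadm documentation notes):

export KUBECONFIG=/etc/kubernetes/admin.conf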

2. We also need to deploy a pod network; we will use flannel.

Install the network plugin

Typical networks here cannot reach quay.io, so we work around it: either find a domestic mirror, or pull the flannel image from Docker Hub. We take the second approach.

Pull the flannel image manually

Run this on every machine in the cluster:

# Pull the flannel image from Docker Hub
docker pull easzlab/flannel:v0.11.0-amd64
# Re-tag it with the name the flannel manifest expects
docker tag easzlab/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

Download and apply the flannel manifest (run on the master node):

[root@master01 ~]# wget  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master01 ~]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
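Before joining the nodes, you can watch the flannel pods roll out (the manifest labels them app=flannel):

kubectl get pods -n kube-system -l app=flannel -w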

3. Register the worker nodes with the master

Join the nodes to the cluster

Use kubeadm join to register each node with the master.

(The kubeadm join command was already generated; it appears at the end of the kubeadm init output above.)

Run this on each node:

kubeadm join 192.168.1.230:6443 --token h21v01.ca56fof5m8myjy3e \
--discovery-token-ca-cert-hash sha256:4596521eed7d2daf11832be58b03bee46b9c248829ce31886d40fe2e997b1919

Check the node status. Once the network plugin is installed, wait until every node reports Ready as shown below before continuing:

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   10m     v1.17.0
node01     Ready    <none>   4m44s   v1.17.0
node02     Ready    <none>   4m41s   v1.17.0
[root@master01 ~]# kubectl get pods -n kube-system
NAME                               READY   STATUS              RESTARTS   AGE
coredns-9d85f5447-279k7            1/1     Running             0          10m
coredns-9d85f5447-lz8d8            0/1     ContainerCreating   0          10m
etcd-master01                      1/1     Running             0          10m
kube-apiserver-master01            1/1     Running             0          10m
kube-controller-manager-master01   1/1     Running             0          10m
kube-flannel-ds-amd64-5f769        1/1     Running             0          36s
kube-flannel-ds-amd64-gl5lm        1/1     Running             0          36s
kube-flannel-ds-amd64-ttbdk        1/1     Running             0          36s
kube-proxy-tgs9j                   1/1     Running             0          5m11s
kube-proxy-vpgng                   1/1     Running             0          10m
kube-proxy-wthxn                   1/1     Running             0          5m8s
kube-scheduler-master01            1/1     Running             0          10m

This completes the kubeadm-based installation of k8s v1.17.

 

Test the Kubernetes cluster

## Create a deployment running the nginx image
[root@master01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
## Describe the pod; the Events section shows the creation process
[root@master01 ~]# kubectl describe pod nginx-86c57db685-9xbn6
Name:         nginx-86c57db685-9xbn6
Namespace:    default
Priority:     0
Node:         node02/192.168.1.242
Start Time:   Thu, 02 Jan 2020 11:49:52 +0800
Labels:       app=nginx
              pod-template-hash=86c57db685
Annotations:  <none>
Status:       Running
IP:           10.244.2.2
IPs:
  IP:  10.244.2.2
Controlled By:  ReplicaSet/nginx-86c57db685
Containers:
  nginx:
    Container ID:   docker://baca9e4f096278fbe8851dcb2eed794aefdcebaa70509d38df1728c409e73cdb
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Jan 2020 11:51:49 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4ghv8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-4ghv8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4ghv8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m43s  default-scheduler  Successfully assigned default/nginx-86c57db685-9xbn6 to node02
  Normal  Pulling    3m42s  kubelet, node02    Pulling image "nginx"
  Normal  Pulled     106s   kubelet, node02    Successfully pulled image "nginx"
  Normal  Created    106s   kubelet, node02    Created container nginx
  Normal  Started    106s   kubelet, node02    Started container nginx
## Check the pod IP
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-86c57db685-9xbn6   1/1     Running   0          2m18s   10.244.2.2   node02   <none>           <none>
## Access nginx
[root@master01 ~]# curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
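The pod IP is only reachable from inside the cluster. To test from outside, one option (not part of the original walkthrough) is to expose the deployment as a NodePort service:

kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx                     # note the mapped port, e.g. 80:3xxxx/TCP
curl http://192.168.1.241:<node-port>     # any node IP plus the mapped port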

 

Additional notes:

1. kubectl command auto-completion

## Install the completion package
yum install -y bash-completion*
## Enable completion in the current shell
source <(kubectl completion bash)
## Persist it across logins
echo "source <(kubectl completion bash)" >> ~/.bashrc
## Source bash-completion once by hand, otherwise tab completion fails with
## "-bash: _get_comp_words_by_ref: command not found"
source /usr/share/bash-completion/bash_completion
## Reload the environment
source /etc/profile
## kubectl tab completion now works
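If you alias kubectl to k, completion can be extended to the alias with the snippet from the kubectl completion docs:

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc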

2. If more nodes need to join the cluster later: the default token is only valid for 24 hours, and once it expires it can no longer be used. The fix:

Generate a new token ==> kubeadm token create

# 1. List the current tokens
[root@K8S00 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
7mjtn4.9kds6sabcouxaugd   23h         2019-12-24T15:44:58+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

# 2. Generate a new token
[root@K8S00 ~]# kubeadm token create
369tcl.oe4punpoj9gaijh7

# 3. List the tokens again
[root@K8S00 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
369tcl.oe4punpoj9gaijh7   23h         2019-12-24T16:05:18+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
7mjtn4.9kds6sabcouxaugd   23h         2019-12-24T15:44:58+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

# 4. Compute the sha256 hash of the CA certificate
[root@K8S00 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
7ae10591aa593c2c36fb965d58964a84561e9ccd416ffe7432550a0d0b7e4f90

# 5. Join a node to the cluster using the new token and the CA cert hash
[root@k8s-node03 ~]# kubeadm join 172.22.34.31:6443 --token 369tcl.oe4punpoj9gaijh7 \
    --discovery-token-ca-cert-hash sha256:7ae10591aa593c2c36fb965d58964a84561e9ccd416ffe7432550a0d0b7e4f90 \
    --ignore-preflight-errors=all
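A shortcut worth knowing: kubeadm can mint a fresh token and print the complete join command in one step, which avoids the manual openssl hashing above:

kubeadm token create --print-join-command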

 

