kubeadm is the official Kubernetes tool for automating cluster installation. Installing by hand is particularly tedious, so using kubeadm is without doubt a good choice.
1. Environment Preparation
1.1 System Configuration
The system is CentOS Linux release 7.5.
[root@k8s-master ~]# tail -3 /etc/hosts
10.0.0.11 k8s-master
10.0.0.12 k8s-node1
10.0.0.13 k8s-node2
Disable the firewall and SELinux.
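For reference, a minimal sketch of those commands (assuming firewalld and a stock CentOS 7 SELinux configuration):
# stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# switch SELinux to permissive now, and disable it permanently
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config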
Add the kernel parameter file /etc/sysctl.d/k8s.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
Run the following commands to make the changes take effect:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
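Note that modprobe only loads br_netfilter for the current boot. One way to make it persist across reboots (an assumption about your setup, using systemd's standard modules-load mechanism):
# load br_netfilter automatically at boot
echo br_netfilter > /etc/modules-load.d/k8s.conf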
1.2 Install Docker (all nodes)
yum install -y yum-utils device-mapper-persistent-data lvm2
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker.repo
List the installable versions:
[root@k8s-master ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * updates: mirrors.nju.edu.cn
 * extras: mirrors.njupt.edu.cn
 * epel: mirror01.idc.hinet.net
docker-ce.x86_64        18.06.1.ce-3.el7            docker-ce-stable
docker-ce.x86_64        18.06.0.ce-3.el7            docker-ce-stable
docker-ce.x86_64        18.03.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        18.03.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.12.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.12.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.09.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.09.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.06.2.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.06.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.06.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.03.3.ce-1.el7            docker-ce-stable
docker-ce.x86_64        17.03.2.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.03.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.03.0.ce-1.el7.centos     docker-ce-stable
# yum makecache fast
# yum install docker-ce -y
[root@k8s-node1 ~]# docker -v
Docker version 18.06.1-ce, build e68fc7a
[root@k8s-node1 ~]# systemctl start docker; systemctl enable docker
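If you prefer to pin one of the versions listed above rather than take whatever is newest, a sketch using standard yum version syntax:
# install a specific docker-ce version from the listing above
yum install -y docker-ce-18.06.1.ce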
2. Deploying Kubernetes with kubeadm
2.1 Install kubelet and kubeadm
Create the Kubernetes yum repository configuration file /etc/yum.repos.d/kubernetes.repo with the following content:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
yum makecache fast
yum install -y kubelet kubeadm kubectl
Disable swap:
[root@k8s-node1 yum.repos.d]# swapoff -a
[root@k8s-node1 yum.repos.d]# sysctl -p /etc/sysctl.d/k8s.conf
Comment out the swap entry in /etc/fstab.
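A one-liner that does this (a sketch; it blindly prefixes every fstab line mentioning swap, which on a stock CentOS install is only the swap entry):
sed -i '/swap/s/^/#/' /etc/fstab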

mount -a
echo "KUBELET_EXTRA_ARGS=--fail-swap-on=false" > /etc/sysconfig/kubelet
2.2 Initialize the Cluster with kubeadm init
Enable the kubelet service at boot on every node:
systemctl enable kubelet.service
Initializing the cluster with kubeadm produces the errors below. During initialization, kubeadm first looks locally for the required Kubernetes component images; if they are not found, it pulls them from Google's registry (k8s.gcr.io). If you cannot reach that registry, the only option is to have these images present locally.
We can pull the Kubernetes component images from a Docker Hub mirror and then re-tag them.
[root@k8s-master yum.repos.d]# kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.11
[init] using Kubernetes version: v1.12.1
[preflight] running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.12.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: EOF
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.12.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: EOF
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.12.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: EOF
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.12.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: EOF
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: EOF
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: EOF
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: EOF
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
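Incidentally, kubeadm can print the exact list of images it needs (the companion subcommand to the 'kubeadm config images pull' hint in the output above):
kubeadm config images list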
Run the following script:
[root@k8s-master ~]# cat k8s.sh
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.1
docker pull mirrorgooglecontainers/kube-proxy:v1.12.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2
docker tag mirrorgooglecontainers/kube-proxy:v1.12.1 k8s.gcr.io/kube-proxy:v1.12.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.12.1 k8s.gcr.io/kube-scheduler:v1.12.1
docker tag mirrorgooglecontainers/kube-apiserver:v1.12.1 k8s.gcr.io/kube-apiserver:v1.12.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.12.1 k8s.gcr.io/kube-controller-manager:v1.12.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.12.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.2
bash k8s.sh
[root@k8s-master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.12.1   61afff57f010   11 days ago     96.6MB
k8s.gcr.io/kube-apiserver            v1.12.1   dcb029b5e3ad   11 days ago     194MB
k8s.gcr.io/kube-controller-manager   v1.12.1   aa2dd57c7329   11 days ago     164MB
k8s.gcr.io/kube-scheduler            v1.12.1   d773ad20fd80   11 days ago     58.3MB
k8s.gcr.io/etcd                      3.2.24    3cab8e1b9802   3 weeks ago     220MB
k8s.gcr.io/coredns                   1.2.2     367cdc8433a4   6 weeks ago     39.2MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   10 months ago   742kB
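For what it's worth, the same pull/tag/clean cycle can be written as a short loop (a sketch, equivalent to the script above; coredns lives under its own Docker Hub namespace, hence the separate lines):
for img in kube-apiserver:v1.12.1 kube-controller-manager:v1.12.1 kube-scheduler:v1.12.1 kube-proxy:v1.12.1 pause:3.1 etcd:3.2.24; do
  docker pull mirrorgooglecontainers/$img      # fetch from the Docker Hub mirror
  docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img   # re-tag as the name kubeadm expects
  docker rmi mirrorgooglecontainers/$img       # drop the mirror tag
done
docker pull coredns/coredns:1.2.2
docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker rmi coredns/coredns:1.2.2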
The concrete steps are as follows.
Check the kubeadm version:
[root@k8s-master yum.repos.d]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Run the initialization on the master node again:
kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.11
The output is as follows:
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.11
[init] using Kubernetes version: v1.12.1
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Using the existing sa key.
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 18.503179 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: o3ha14.vdjveh35zz0coqzz
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.0.11:6443 --token o3ha14.vdjveh35zz0coqzz --discovery-token-ca-cert-hash sha256:334ee25422b82ba08a5f4341e1b65f23abf2fdd486a0f471cf3ad0824b258e13
Follow the instructions printed at the end of the output:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
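If you are working as root, an equivalent alternative is to point KUBECONFIG at the admin kubeconfig directly:
export KUBECONFIG=/etc/kubernetes/admin.conf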
mkdir -p ~/k8s/ && cd ~/k8s
[root@k8s-master k8s]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Change the interface below to your own NIC
# vim kube-flannel.yml
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth0
# kubectl apply -f kube-flannel.yml
Check the cluster status:

If cluster initialization runs into problems, you can clean up with the following commands:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
2.3 Install the Pod Network
# At this point the master's status is NotReady, because no network plugin is installed yet
[root@k8s-master k8s]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   27m   v1.12.1
Next, install the flannel network add-on (the kube-flannel.yml steps shown above):
Check component health status:
[root@k8s-master k8s]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s-master k8s]# kubectl describe node k8s-master
Name:               k8s-master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-master
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 17 Oct 2018 21:24:01 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false


The output above shows that a flannel image is pulled first; once every pod in the kube-system namespace is Running, the master turns Ready.
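You can watch this happen with the standard queries:
kubectl get pods -n kube-system
kubectl get nodes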
2.4 Let the Master Carry Workloads
For security reasons, Pods are not scheduled onto the Master node by default; that is, the Master does not carry workloads. This is because the master node k8s-master carries the node-role.kubernetes.io/master:NoSchedule taint:
[root@k8s-master k8s]# kubectl describe node k8s-master | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
# To change it back later, so that the master stops carrying workloads:
kubectl taint node k8s-master node-role.kubernetes.io/master="":NoSchedule
Remove the taint so that k8s-master can take on workloads:
[root@k8s-master k8s]# kubectl taint nodes k8s-master node-role.kubernetes.io/master-
node/k8s-master untainted
[root@k8s-master k8s]# kubectl describe node k8s-master | grep Taint
Taints: <none>
2.5 Test DNS
kubectl run curl --image=radial/busyboxplus:curl -it
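Once the shell inside the pod comes up, the usual check is to resolve the kubernetes service through the cluster DNS (assuming the default cluster.local DNS setup):
nslookup kubernetes.default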


2.6 Add Nodes to the Kubernetes Cluster
Next we add the hosts node1 and node2 to the Kubernetes cluster. On node1 and node2, run:
[root@k8s-node1 ~]# kubeadm join 10.0.0.11:6443 --token i4us8x.pw2f3botcnipng8e --discovery-token-ca-cert-hash sha256:d16ac747c2312ae829aa29a3596f733f920ca3d372d9f1b34d33c938be067e51
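If the bootstrap token from kubeadm init has expired (tokens are valid for 24 hours by default), generate a fresh join command on the master:
kubeadm token create --print-join-command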

Check the nodes:

If the new nodes show NotReady, the cause is that node k8s-node1 (and k8s-node2) also need the component images; run the image-pulling script above on each of them, then reset each node with kubeadm reset and join again.

To remove node1 from the cluster, run the following on the master node:
kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node1
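After deleting it from the API, wipe the kubeadm state on the removed node itself so it can be re-joined cleanly later:
kubeadm reset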

