As the official deployment tool kubeadm has matured, deploying Kubernetes has become considerably simpler, even for production. This post walks through bootstrapping a highly available k8s cluster with kubeadm. There are two control-plane topologies to choose from:
Stacked etcd (the default):
- Each apiserver talks only to the etcd member running on its own node.
- Pros: simple to deploy and manage.
- Cons: etcd is coupled to the control-plane nodes, so losing a node takes out both an apiserver and an etcd member.

External etcd:
- The apiservers talk to a separately run etcd cluster.
- Pros: better availability, since etcd failures and control-plane failures are decoupled.
- Cons: an extra etcd cluster to operate, which adds management overhead.
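For reference, the external-etcd topology is selected through a kubeadm config file rather than command-line flags. A minimal sketch, assuming a hypothetical three-node etcd cluster at etcd[1:3].k8s.com (this walkthrough itself sticks with the default stacked mode):

```bash
# Hypothetical external-etcd ClusterConfiguration for kubeadm v1.17 (v1beta2 API).
# The etcd endpoints and client-certificate paths below are placeholders.
cat <<EOF > kubeadm-external-etcd.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.4
etcd:
  external:
    endpoints:
      - https://etcd1.k8s.com:2379
      - https://etcd2.k8s.com:2379
      - https://etcd3.k8s.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
# would then be passed as: kubeadm init --config kubeadm-external-etcd.yaml --upload-certs
```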
---
Initialize the machines
| IP | Role | Hostname (DNS names recommended) | OS |
| --- | --- | --- | --- |
| 192.168.1.1 | lb | api-lb.k8s.com | CentOS 7 |
| 192.168.1.2 | master | m1.k8s.com | CentOS 7 |
| 192.168.1.3 | master | m2.k8s.com | CentOS 7 |
| 192.168.1.4 | master | m3.k8s.com | CentOS 7 |
On api-lb.k8s.com:

Configure haproxy (the backend addresses are the three masters from the table above):
```
frontend kube-apiserver
    bind *:6443
    default_backend kube-apiserver
    mode tcp
    option tcplog

backend kube-apiserver
    balance source
    mode tcp
    server master1 192.168.1.2:6443 check
    server master2 192.168.1.3:6443 check
    server master3 192.168.1.4:6443 check
```
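For completeness, a minimal sketch of installing haproxy and verifying that it is listening, assuming the sections above have been merged into /etc/haproxy/haproxy.cfg:

```bash
yum -y install haproxy
# merge the frontend/backend sections above into /etc/haproxy/haproxy.cfg first
systemctl enable --now haproxy
ss -lntp | grep 6443    # haproxy should now be listening on 6443
```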
On m[1:3].k8s.com:

Make sure iptables can see bridged traffic:
```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
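These sysctls only exist while the br_netfilter kernel module is loaded, so it is worth loading it explicitly and persisting that across reboots; a small sketch:

```bash
modprobe br_netfilter
# load the module automatically on boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```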
Make sure the iptables backend is not nftables. CentOS 7 already defaults to legacy iptables, so this step only matters on distros (Debian 10+, Ubuntu 19.04+, etc.) where nftables is the default:

```bash
update-alternatives --set iptables /usr/sbin/iptables-legacy
```
Install docker:
```bash
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl enable --now docker
```
Configure a registry mirror:
```bash
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://dockerhub.azk8s.cn"]
}
EOF
systemctl daemon-reload
systemctl restart docker
```
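A quick sanity check that the mirror was picked up (output formatting varies slightly across docker versions):

```bash
docker info | grep -A 1 'Registry Mirrors'
```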
Install kubeadm, kubelet, and kubectl:
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
```
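The line above installs whatever is latest in the repo; since the rest of this walkthrough targets v1.17.4, pinning the package versions avoids skew between kubeadm and the cluster version (package naming per the Aliyun el7 repo above):

```bash
yum install -y kubelet-1.17.4 kubeadm-1.17.4 kubectl-1.17.4 --disableexcludes=kubernetes
```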
If swap has not been disabled, kubelet must be told to tolerate it:

```bash
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet
```
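Alternatively, and more commonly, just turn swap off; a sketch:

```bash
swapoff -a
# comment out any swap entries so it stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab
```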
---
Deploy with kubeadm
On m[1:3].k8s.com:

It's best to pull the required images in advance. First list what this version needs:
```bash
kubeadm config images list --kubernetes-version=v1.17.4
```
```
W0320 15:26:32.612945  123330 validation.go:28] Cannot validate kubelet config - no validator is available
W0320 15:26:32.612995  123330 validation.go:28] Cannot validate kube-proxy config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
```
Pull through the Azure China mirror to speed this up:

```bash
kubeadm config images pull --image-repository gcr.azk8s.cn/google_containers --kubernetes-version=v1.17.4
```
If you don't feel like re-tagging the mirror images by hand, you can simply pull once more with the default repository:

```bash
kubeadm config images pull --kubernetes-version=v1.17.4
```
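The explicit alternative is re-tagging the mirror images to the k8s.gcr.io names kubeadm expects; a sketch, with the image list taken from the `kubeadm config images list` output above:

```bash
images="kube-apiserver:v1.17.4 kube-controller-manager:v1.17.4 kube-scheduler:v1.17.4 \
kube-proxy:v1.17.4 pause:3.1 etcd:3.4.3-0 coredns:1.6.5"
for img in $images; do
    docker tag gcr.azk8s.cn/google_containers/$img k8s.gcr.io/$img
done
```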
On m1.k8s.com:

Create the first master node; barring surprises this takes only a few minutes:
```bash
kubeadm init --kubernetes-version=v1.17.4 \
    --apiserver-advertise-address=192.168.1.2 \
    --control-plane-endpoint=api-lb.k8s.com:6443 \
    --pod-network-cidr=10.64.0.0/16 \
    --service-cidr=10.32.0.0/16 \
    --upload-certs
```
- `--pod-network-cidr=10.64.0.0/16` must match the subnet the CNI plugin will be given later.
- `--control-plane-endpoint=api-lb.k8s.com:6443` is only needed for a highly available setup; api-lb.k8s.com must resolve to 192.168.1.1.
- `--upload-certs` shares the control-plane certificates between masters; without it they have to be copied over manually.

(The same settings can also be written as a config file; see the sketch below.)
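A hypothetical config-file equivalent of the flags above (kubeadm also accepts `--config`; the v1beta2 API matches kubeadm v1.17):

```bash
cat <<EOF > kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.2
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.4
controlPlaneEndpoint: api-lb.k8s.com:6443
networking:
  podSubnet: 10.64.0.0/16
  serviceSubnet: 10.32.0.0/16
EOF
kubeadm init --config kubeadm-init.yaml --upload-certs
```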
When it finishes, the output looks like this:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join api-lb.k8s.com:6443 --token 17b0vs.stsr9ocosv2gr0io \
    --discovery-token-ca-cert-hash sha256:66b2cd34026a290ac89997f7a8cc40b9d09fa62da153881c16c2262e69284f3f \
    --control-plane --certificate-key a88aadb2fdda79e0ae2b8e94a60f020da56d0f220533defa9d5108035a4b9662

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join api-lb.k8s.com:6443 --token 17b0vs.stsr9ocosv2gr0io \
    --discovery-token-ca-cert-hash sha256:66b2cd34026a290ac89997f7a8cc40b9d09fa62da153881c16c2262e69284f3f
```
Configure kubectl:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
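A quick check that kubectl is really going through the load-balanced endpoint:

```bash
kubectl cluster-info    # should report the control plane at https://api-lb.k8s.com:6443
```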
On m[2:3].k8s.com:

Join the other two master nodes:
```bash
kubeadm join api-lb.k8s.com:6443 --token 17b0vs.stsr9ocosv2gr0io \
    --discovery-token-ca-cert-hash sha256:66b2cd34026a290ac89997f7a8cc40b9d09fa62da153881c16c2262e69284f3f \
    --control-plane --certificate-key a88aadb2fdda79e0ae2b8e94a60f020da56d0f220533defa9d5108035a4b9662
```
For worker nodes, use:
```bash
kubeadm join api-lb.k8s.com:6443 --token 17b0vs.stsr9ocosv2gr0io \
    --discovery-token-ca-cert-hash sha256:66b2cd34026a290ac89997f7a8cc40b9d09fa62da153881c16c2262e69284f3f
```
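The bootstrap token expires after 24 hours by default, and the uploaded certificates after two (as the init output warns), so nodes joining later need fresh credentials:

```bash
# print a new worker join command with a fresh token
kubeadm token create --print-join-command
# re-upload the control-plane certs and print a new --certificate-key
kubeadm init phase upload-certs --upload-certs
```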
---
Install the network plugin

On m1.k8s.com:

Right after installation the nodes show up as NotReady:
```
kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
m1.k8s.com   NotReady   master   9m27s   v1.17.4
m2.k8s.com   NotReady   master   2m12s   v1.17.4
m3.k8s.com   NotReady   master   2m5s    v1.17.4
```
Checking kubelet shows the cause: no network plugin is installed yet:
```
systemctl status kubelet.service
Mar 20 16:00:37 m1.k8s.com kubelet[15808]: E0320 16:00:37.274005   15808 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPlu...initialized
Mar 20 16:00:40 m1.k8s.com kubelet[15808]: W0320 16:00:40.733305   15808 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
```
Install flannel, rewriting its default subnet (10.244.0.0/16) to the --pod-network-cidr chosen earlier:

```bash
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O kube-flannel.yml
sed -i 's/10.244.0.0/10.64.0.0/g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
```
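Once the flannel DaemonSet pods are up (this manifest labels them app=flannel), the nodes should flip to Ready within a minute or so:

```bash
kubectl -n kube-system get pods -l app=flannel
kubectl get nodes    # all three masters should now be Ready
```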
Installation docs for other network plugins are here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network