k8s: Deploying a Highly Available Cluster with kubeadm

High availability in Kubernetes

In Kubernetes, high availability mainly refers to the control plane: running multiple sets of master components and etcd members, with worker nodes reaching the masters through a load balancer.

Two HA deployment topologies

One option is to stack etcd together with the master components on the same nodes.

[Figure: etcd stacked with the master components]

The other is to run an independent etcd cluster on dedicated nodes, separate from the masters.

[Figure: independent etcd cluster, not stacked with the master nodes]

Both topologies provide control-plane redundancy and thus cluster high availability; they differ as follows:

  • Stacked etcd:
Requires fewer machines.
Simpler to deploy and manage.
Easy to scale horizontally.
Higher risk: if one host dies, a master and an etcd member are lost together, which hurts the cluster's redundancy considerably.
  • Independent etcd:
Requires more machines (following etcd's odd-member rule, the control plane of this topology needs at least 6 hosts).
Deployment is more complex; the etcd cluster and the master cluster must be managed separately.
Decouples the control plane from etcd, so the risk is smaller and the cluster more robust; losing a single master or etcd node barely affects the cluster.

Deployment environment

Servers

Host     IP             Notes
master1  192.168.0.101  master node 1
master2  192.168.0.102  master node 2
master3  192.168.0.103  master node 3
haproxy  192.168.0.100  haproxy node, load balancer in front of the 3 masters
node     192.168.0.104  worker node

Environment

Resource  Configuration
OS        CentOS Linux release 7.7.1908 (Core)
Kernel    3.10.0-1062.el7.x86_64
docker    19.03.7
kubeadm   1.17.3

Deployment steps

Disable the firewall and swap, set kernel parameters

Run on all nodes.

  • Disable SELinux and firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
  • Disable swap (required since v1.8, presumably so that swap cannot interfere with the memory limits available to pods); see the non-interactive sketch after this list
swapoff -a
vim /etc/fstab

# comment out the swap line
  • Set the hostname
hostnamectl set-hostname [master|node]{X}
  • Set up name resolution (skipping this may cause kubeadm init to time out)
vim /etc/hosts

192.168.0.101 master1
192.168.0.102 master2
192.168.0.103 master3
192.168.0.104 node
  • Set the kernel parameters below, otherwise traffic passing through iptables may be routed incorrectly
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
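
As an alternative to editing /etc/fstab by hand, swap can be disabled non-interactively. A minimal sketch, assuming the fstab swap entries contain the word "swap" surrounded by whitespace:

# turn swap off now and comment out any uncommented fstab line that mounts swap
swapoff -a
sed -ri '/\sswap\s/s/^[^#]/#&/' /etc/fstab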

Install kubeadm and docker

Run on all nodes except the haproxy node.

  • Switch the Kubernetes package source to the Aliyun mirror, which is easier to reach from within China's network
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • Install docker-ce
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y docker-ce
systemctl start docker
systemctl enable docker
  • Install kubelet, kubeadm, and kubectl
yum install -y kubelet kubeadm kubectl

systemctl enable kubelet
systemctl start kubelet
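
Note that a plain yum install pulls the latest release, which may not match the versions in the environment table above. To pin the release used in this article (a sketch; adjust the version string to your target release):

yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3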

Install and configure the load balancer

Run on the haproxy node.

Install haproxy

yum install haproxy -y 

Edit the haproxy configuration

cat << EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kube-apiserver
    bind *:6443 # frontend port
    mode tcp
    default_backend master

backend master # backend servers and ports, balanced round-robin
    balance roundrobin
    server master1  192.168.0.101:6443 check maxconn 2000
    server master2  192.168.0.102:6443 check maxconn 2000
    server master3  192.168.0.103:6443 check maxconn 2000
EOF
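
Before starting the service, haproxy can validate the file; the -c flag only checks the configuration and exits:

haproxy -c -f /etc/haproxy/haproxy.cfg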

Enable haproxy at boot and start the service

systemctl enable haproxy
systemctl start haproxy

Check that the service is listening

ss -tnlp | grep 6443
LISTEN     0      128          *:6443                     *:*                   users:(("haproxy",pid=1107,fd=4))

Deploy Kubernetes

Run on the master1 node.

Generate the init configuration file

kubeadm config print init-defaults > kubeadm-config.yaml

Adjust kubeadm-config.yaml, changing or adding the settings marked below

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.101    ## IP address of this host
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1    ## this node's name in the k8s cluster
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.0.100:6443"    ## address and port of the haproxy load balancer in front of the masters
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers    ## use the Aliyun mirror, otherwise the images cannot be pulled
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"    ## default Pod network of the flannel plugin installed later
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Initialize the first node

# pre-pull the images from the Aliyun mirror
kubeadm config images pull --config kubeadm-config.yaml

kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

When the installation succeeds, you will see output like:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

# control-plane (master) nodes join the cluster with:
  kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:37041e2b8e0de7b17fdbf73f1c79714f2bddde2d6e96af2953c8b026d15000d8 \
    --control-plane --certificate-key 8d3f96830a1218b704cb2c24520186828ac6fe1d738dfb11199dcdb9a10579f8

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

# worker nodes join the cluster with:
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:37041e2b8e0de7b17fdbf73f1c79714f2bddde2d6e96af2953c8b026d15000d8 

In earlier kubeadm versions, the join command was only used to add worker nodes; newer versions added the --control-plane flag, so control-plane (master) nodes can also join the cluster with kubeadm join.

Deploy the flannel network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The manifest path above matches flannel releases contemporary with Kubernetes 1.17 (newer releases are published under the flannel-io GitHub organization). flannel's default Pod network is 10.244.0.0/16, which is why podSubnet was set to that value in kubeadm-config.yaml. This command needs the kubectl credentials set up in the next step.

View the cluster from the master node

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

kubectl get no
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   4h12m   v1.17.3
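
You can also watch the CNI and system pods come up; the node only turns Ready once the network plugin is running:

kubectl get pods -n kube-system -w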

Join the other two master nodes

# run on master2 and master3:
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:37041e2b8e0de7b17fdbf73f1c79714f2bddde2d6e96af2953c8b026d15000d8 \
    --control-plane --certificate-key 8d3f96830a1218b704cb2c24520186828ac6fe1d738dfb11199dcdb9a10579f8
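
After each master joins, set up kubectl credentials on it the same way as on master1, so the cluster can be administered from any control-plane node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config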

Join the worker node

# run on the node
kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:37041e2b8e0de7b17fdbf73f1c79714f2bddde2d6e96af2953c8b026d15000d8 
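
The bootstrap token in these join commands expires after 24 hours (the ttl set in kubeadm-config.yaml). To add nodes later, generate a fresh join command on any master:

kubeadm token create --print-join-command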

View the cluster

kubectl get no
NAME      STATUS   ROLES    AGE     VERSION
node      Ready    <none>   3h37m   v1.17.3
master1   Ready    master   4h12m   v1.17.3
master2   Ready    master   4h3m    v1.17.3
master3   Ready    master   3h54m   v1.17.3
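
As a final sanity check, confirm that all control-plane components, including the three stacked etcd members, are running:

kubectl get pods -n kube-system -o wide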

Postscript

Viewing the haproxy logs

This helps with troubleshooting when the k8s cluster fails to come up.

Install the rsyslog service

yum install rsyslog

Configure rsyslog to collect the logs

vim /etc/rsyslog.conf

# modify (uncomment) the following settings

$ModLoad imudp
$UDPServerRun 514

# add the following line
local2.*                                                /var/log/haproxy.log

Restart rsyslog

systemctl restart rsyslog
systemctl enable rsyslog
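
haproxy entries should now be written to the log file; a quick way to verify:

tail -f /var/log/haproxy.log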

Layer-4 load balancing with nginx

Install nginx

yum install nginx
systemctl start nginx
systemctl enable nginx

Edit the nginx configuration

vim /etc/nginx/nginx.conf

# add this outside the http{} block

stream {
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }

    upstream kube_apiserver {
        server 192.168.0.101:6443 max_fails=3 fail_timeout=5s;
        server 192.168.0.102:6443 max_fails=3 fail_timeout=5s;
        server 192.168.0.103:6443 max_fails=3 fail_timeout=5s;
    }
    log_format proxy '$remote_addr [$time_local] '
                 '$protocol $status $bytes_sent $bytes_received '
                 '$session_time "$upstream_addr" '
                 '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
    access_log /var/log/nginx/proxy-access.log proxy;
}
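
Validate the configuration before restarting. Note that on some builds the stream module is packaged separately (e.g. nginx-mod-stream in EPEL), in which case nginx -t will report an unknown "stream" directive until it is installed:

nginx -t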

Restart nginx

systemctl restart nginx

References

https://segmentfault.com/a/1190000018741112

