How to build a highly available Kubernetes 1.21 cluster with kubeadm
- This article is based on:
https://www.cnblogs.com/wjhlinux/p/14422021.html
Part 0: Assorted helper scripts
- Exporting all images
mkdir /k8s1.21 && cd /k8s1.21
# iterate over explicit repo:tag pairs so a repo with several tags, or a name that is a substring of another name, does not break the save
docker images --format '{{.Repository}} {{.Tag}}' | while read repo tag ; do docker save $repo:$tag > ${repo##*/}.tar ; done
- Importing the images
Distribute the image tarballs to every node (a sketch follows), then run on each:
cd /k8s1.21/ && for i in *.tar ; do docker load < $i ; done
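A minimal sketch of the distribution step, assuming the hostnames from the plan in section 1.1 resolve and SSH trust is already in place:
for h in k8s-master02 k8s-master03 k8s-node01 k8s-node02 ; do scp -r /k8s1.21 root@$h:/ ; done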
The exported images and their sizes:
[root@k8s-master01 k8s1.21]# du -ah
121M ./kube-apiserver.tar
119M ./kube-proxy.tar
116M ./kube-controller-manager.tar
50M ./kube-scheduler.tar
42M ./coredns.tar
243M ./etcd.tar
186M ./node.tar
9.3M ./pod2daemon-flexvol.tar
157M ./cni.tar
47M ./kube-controllers.tar
178M ./rabbitmq.tar
680K ./pause-amd64.tar
1.3G .
- Prefer the Aliyun mirrors wherever possible, to avoid problems reaching networks outside China.
Part 1: VM planning
1.1 Network plan
No. | Hostname | IP address | Notes |
---|---|---|---|
1 | k8s-master01 | 10.110.83.231 | master |
2 | k8s-master02 | 10.110.83.232 | master |
3 | k8s-master03 | 10.110.83.233 | master |
4 | k8s-node01 | 10.110.83.234 | node |
5 | k8s-node02 | 10.110.83.235 | node |
6 | k8s-master | 10.110.83.230 | VIP |
1.2 Host preparation
Create the VMs and prepare each host: tune kernel parameters, set up passwordless SSH trust, import the images, and so on.
Add the six hostname/IP pairs from the network plan above to /etc/hosts (see below).
Note: CentOS 7 is recommended, upgraded to a 4.19 kernel (the upgrade steps are in section 1.3).
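The matching /etc/hosts entries, taken directly from the plan in section 1.1:
cat >> /etc/hosts <<EOF
10.110.83.230 k8s-master
10.110.83.231 k8s-master01
10.110.83.232 k8s-master02
10.110.83.233 k8s-master03
10.110.83.234 k8s-node01
10.110.83.235 k8s-node02
EOF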
1.3 Settings before cloning
- Switch to the Aliyun repos and set the timezone and clock:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
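With the repos in place, refresh the yum cache so the new mirrors take effect:
yum clean all && yum makecache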
- Basic system settings (disable the firewall, dnsmasq, and swap):
systemctl disable firewalld && systemctl disable dnsmasq
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
- Install the required packages:
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate keepalived haproxy -y
- Adjust resource limits, kernel modules, and kernel parameters.
Note: the first two files below are edited by hand; the third is written directly by the cat heredoc.
vim /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
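To load the modules right away instead of waiting for a reboot (do this after the kernel upgrade below, since a few of the schedulers such as ip_vs_fo are not in the stock 3.10 kernel), a quick loop plus a check:
for m in $(grep -v '^#' /etc/modules-load.d/ipvs.conf) ; do modprobe $m ; done
lsmod | grep -e ip_vs -e nf_conntrack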
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
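Apply the new kernel parameters immediately (they are also applied at boot):
sysctl --system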
- Mutual SSH trust (done on the template VM before cloning, so every clone shares the same key pair and already trusts the others):
ssh-keygen
ssh-copy-id root@10.110.83.230
ssh-copy-id root@127.0.0.1
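After cloning and re-addressing the VMs, a quick way to confirm the trust works from any node, assuming the /etc/hosts entries shown earlier:
for h in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02 ; do ssh -o StrictHostKeyChecking=no root@$h hostname ; done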
- Upgrade the kernel to 4.19
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
yum localinstall -y kernel-ml*
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
# once everything above is set, reboot
reboot
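After the reboot, confirm the new kernel is active:
uname -r   # should report 4.19.12-1.el7.elrepo.x86_64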
- Install Docker
yum install docker-ce-19.03.* -y
# Set the cgroup driver to systemd before starting Docker; kubelet fails to start
# later if Docker stays on cgroupfs (I forgot this step myself and hit the error;
# the original author covers it in detail).
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable Docker at boot and (re)start it:
systemctl enable docker && systemctl restart docker
Install kubeadm (kubelet and kubectl come along with it):
yum install kubeadm-1.21* kubelet-1.21* kubectl-1.21* -y  # pin the version; a bare "yum install kubeadm" grabs whatever is newest
Point the kubelet pause image at the Aliyun mirror:
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
Enable kubelet at boot (it will restart in a loop until kubeadm init or join runs; that is expected):
systemctl enable kubelet && systemctl restart kubelet
- High-availability network setup
- haproxy
vim /etc/haproxy/haproxy.cfg
# Content to add; adjust the IP addresses and so on to match your own plan.
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s
defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s
frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor
frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master
backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 10.110.83.231:6443 check
  server k8s-master02 10.110.83.232:6443 check
  server k8s-master03 10.110.83.233:6443 check
- Keepalived
Note: all three masters need this configuration; set it up before cloning, then adjust the IPs after cloning. Keep state MASTER and priority 101 on master01 only; on the other two masters use state BACKUP with lower priorities (e.g. 100 and 99) so the VIP lands on a single node.
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33            # change to the NIC that carries the LAN IP (check with ifconfig)
    mcast_src_ip 10.110.83.23x # change to this master's own address
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.110.83.230          # change to the VIP
    }
    track_script {
        chk_apiserver
    }
}
Note: the track script referenced above must be created by hand; its content is as follows.
Note: make it executable: chmod +x /etc/keepalived/check_apiserver.sh
vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
Enable the HA services at boot:
systemctl enable keepalived && systemctl enable haproxy
- Note 1: after the setup, reboot and verify that the VIP is reachable (see the checks below); if it is not, fix that before continuing.
- Note 2: only the masters need haproxy and keepalived; the worker nodes can skip this.
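One way to run that check, assuming the NIC name ens33 from the keepalived config above:
ping -c 3 10.110.83.230
ip addr show ens33 | grep 10.110.83.230   # the VIP should sit on exactly one master
curl http://10.110.83.230:33305/monitor   # the haproxy monitor page configured above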
1.4 With the setup complete, clone the template into the five planned VMs and assign their IP addresses.
Part 2: Cluster initialization
2.1 Create the config file
- Note 1: the original tutorial creates this file on every master, but creating it on master01 alone is enough; the other masters can simply join.
- Note 2: the file contains three IP addresses: the first is the current master's own address, the second and third are the VIP. Keep them straight.
- Note 3: the original tutorial uses this file to pull the images; since the images were exported earlier, loading them is enough.
vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.110.83.231
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 10.110.83.230
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.110.83.230:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12  # must match the calico pool (CALICO_IPV4POOL_CIDR)
  serviceSubnet: 10.96.0.0/12
scheduler: {}
The image pull command (if pulling instead of loading the exported tarballs):
kubeadm config images pull --config /root/kubeadm-config.yaml
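To preview which images the config resolves to before pulling (or before loading the tarballs):
kubeadm config images list --config /root/kubeadm-config.yaml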
2.2 Initialize the master01 node
kubeadm init --config /root/kubeadm-config.yaml --upload-certs
- Note 1: the join commands for master nodes and worker nodes are different; both appear in the init output below.
- Note 2: kubectl only works after the kubeconfig has been copied to the current user's .kube/config, as the output explains.
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 10.110.83.230:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:cd36fbf1a304695af05c9438e2ea1388d0f9915fa4cda8d09ab9e65f6f6cd1d3 \
    --control-plane --certificate-key 79a90c89ec3e5a3d8cdf01888b65a1564dcc50d7a62fe1d625df8e9b16f223f8
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.110.83.230:16443 --token 7t2weq.bjbawausm0jaxury \
  --discovery-token-ca-cert-hash sha256:cd36fbf1a304695af05c9438e2ea1388d0f9915fa4cda8d09ab9e65f6f6cd1d3
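If the 24-hour token has expired by the time a node joins, generate a fresh worker join command on master01; for additional masters, re-upload the certs to get a new certificate key:
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs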
2.3 Join the other nodes
- Note 1: on the other masters run the join command with --control-plane from the output above; worker nodes use the plain join command without it.
- Note 2: to run kubectl on a node, put the kubeconfig in place just as on master01 (the KUBECONFIG environment variable or ~/.kube/config).
- Note 3: nodes report NotReady after joining; the network add-on must be installed first.
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane,master 174m v1.21.0
k8s-master02 NotReady control-plane,master 172m v1.21.0
k8s-master03 NotReady control-plane,master 171m v1.21.0
k8s-node01 NotReady <none> 170m v1.21.0
k8s-node02 NotReady <none> 170m v1.21.0
2.4 Install calico
Fetch the manifest:
wget https://docs.projectcalico.org/v3.8/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Check that CALICO_IPV4POOL_CIDR in calico.yaml matches the podSubnet configured above, then apply it from the current directory:
kubectl apply -f calico.yaml
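To watch the rollout until the calico pods are Running:
kubectl get pods -n kube-system -w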
If all five VMs already have the required images loaded, the nodes turn Ready after a short while:
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane,master 5h22m v1.21.0
k8s-master02 Ready control-plane,master 5h21m v1.21.0
k8s-master03 Ready control-plane,master 5h20m v1.21.0
k8s-node01 Ready <none> 5h18m v1.21.0
k8s-node02 Ready <none> 5h19m v1.21.0
Part 3: Installing Helm and basic usage
- Download
wget https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
Unpack it and put the helm binary into /usr/bin, as shown below.
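The release tarball unpacks to a linux-amd64/ directory holding the binary:
tar -xzf helm-v3.5.4-linux-amd64.tar.gz
cp linux-amd64/helm /usr/bin/
helm version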
Add chart repositories:
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update
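A quick check that the repositories were added and are searchable:
helm repo list
helm search repo aliyun | head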
Install ingress
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm pull ingress-nginx/ingress-nginx --version 3.30.0
Adjust the chart values:
tar -xf ingress-nginx-3.30.0.tgz && cd ingress-nginx
vim values.yaml
# change the controller image address
repository: registry.cn-beijing.aliyuncs.com/dotbalo/controller
# dnsPolicy
dnsPolicy: ClusterFirstWithHostNet
# use hostNetwork, i.e. ports 80 and 443 directly on the host
hostNetwork: true
# deploy as a DaemonSet so ingress runs on designated nodes
kind: DaemonSet
# node selection: label the target nodes with ingress=true
nodeSelector:
  kubernetes.io/os: linux
  ingress: "true"
# change the service type to ClusterIP; in a cloud environment a LoadBalancer can be used instead
type: ClusterIP
# change the kube-webhook-certgen image address
repository: registry.cn-beijing.aliyuncs.com/dotbalo/kube-webhook-certgen
Install it from inside the unpacked chart directory:
kubectl create ns ingress-nginx
helm install ingress-nginx . -n ingress-nginx
kubectl get pods -n ingress-nginx -owide
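Because the values above deploy the controller as a DaemonSet with a nodeSelector of ingress=true, label each node that should run it (the pods start once the label exists); k8s-node01 here is just an example:
kubectl label node k8s-node01 ingress=true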
This part is based on:
https://www.cnblogs.com/bigberg/p/13926052.html
Part 4: NFS handling
- This part has moved to the StorageClass study notes.
helm install nfs-client-provisioner \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=true \
  --set nfs.server=10.110.83.231 \
  --set nfs.path=/k8snfs \
  apphub/nfs-client-provisioner