Recommended architecture and node role requirements from the official docs:
- Rancher's DNS name should resolve to a layer-4 load balancer.
- The load balancer should forward TCP/80 and TCP/443 traffic to all three nodes in the cluster.
- The Ingress Controller redirects HTTP to HTTPS and terminates SSL/TLS on port TCP/443.
- The Ingress Controller forwards traffic to port 80 of the Rancher Server pods.
This guide installs Rancher on a Kubernetes cluster behind a layer-4 load balancer, with SSL terminated at the Ingress Controller.
RKE requires each node to have at least one role, but it does not restrict a node to a single role. For clusters that run your business applications, however, we recommend a dedicated role per node, so that workloads on the worker nodes cannot interfere with the Kubernetes master components or the cluster data.
The minimum recommended configuration for downstream clusters is:
- Three nodes with only the etcd role, for high availability: the cluster keeps working if any one of the three fails.
- Two nodes with only the controlplane role, to keep the master components highly available.
- One or more nodes with only the worker role, to run the Kubernetes node components and the services or applications you deploy.
For the cluster that hosts Rancher Server itself, three nodes that each carry all three roles is safe, because:
- one etcd node can fail and the cluster keeps working;
- multiple controlplane nodes keep the master components redundant;
- nothing but Rancher runs on this cluster.
Server information:
Server | Public IP | Rancher private IP | Purpose and roles | Spec |
nginx | 192.168.198.130 | - | reverse proxy and load balancer | 1c 2g |
rancher1 | 192.168.198.150 | 10.10.10.150 | controlplane, worker, etcd | 2c 4g |
rancher2 | 192.168.198.151 | 10.10.10.151 | controlplane, worker, etcd | 2c 4g |
rancher3 | 192.168.198.152 | 10.10.10.152 | controlplane, worker, etcd | 2c 4g |
Because hardware is limited, this setup uses the minimum of three Rancher nodes, each carrying all three roles.
OS version: CentOS Linux release 7.7.1908 (Core)
Kernel version: 5.7.4-1.el7.elrepo.x86_64
Docker version: Docker version 19.03.11, build 42e35e61f3
1. Upgrade the OS kernel (the kernel was upgraded to the latest version; see https://www.cnblogs.com/ding2016/p/10429640.html)
2. Install the latest Docker (on every rancher node)
2.1 Install the prerequisites
yum install -y yum-utils device-mapper-persistent-data lvm2
2.2 Add the repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2.3 Install Docker (without a version pin, the latest version is installed)
yum -y install docker-ce
Start Docker and enable it at boot, then check the installed version:
systemctl start docker
systemctl enable docker
docker version
3. Disable the firewall and SELinux (on every rancher node and on the nginx server)
3.1 Stop the firewall
systemctl stop firewalld.service
3.2 Disable the firewall at boot
systemctl disable firewalld.service
3.3 Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
4. Configure the Aliyun registry mirror (on every rancher node)
Adjust the mirror URL below to your own accelerator address:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{ "registry-mirrors": ["https://hcepoa2b.mirror.aliyuncs.com"] }
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
5. Create the rancher user and set up SSH trust (on every rancher node)
Rancher must not be installed as the root user.
5.1 Create the rancher user
groupadd docker
useradd rancher -G docker
echo "123456" | passwd --stdin rancher
5.2 Configure passwordless SSH between the nodes for the rancher user
su - rancher
Generate a key pair on every node:
ssh-keygen -t rsa
On rancher1, collect each node's public key:
ssh 192.168.198.150 cat /home/rancher/.ssh/id_rsa.pub >>/home/rancher/.ssh/authorized_keys
ssh 192.168.198.151 cat /home/rancher/.ssh/id_rsa.pub >>/home/rancher/.ssh/authorized_keys
ssh 192.168.198.152 cat /home/rancher/.ssh/id_rsa.pub >>/home/rancher/.ssh/authorized_keys
chmod 600 /home/rancher/.ssh/authorized_keys
Copy authorized_keys to rancher2 and rancher3:
scp authorized_keys rancher@192.168.198.151:/home/rancher/.ssh/
scp authorized_keys rancher@192.168.198.152:/home/rancher/.ssh/
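To confirm the trust is in place, each node should answer without a password prompt. A quick check, assuming the rancher user and the node IPs above:

```shell
# BatchMode makes ssh fail instead of prompting, so a broken key setup
# is caught immediately rather than hanging at a password prompt.
for ip in 192.168.198.150 192.168.198.151 192.168.198.152; do
  ssh -o BatchMode=yes rancher@"$ip" hostname
done
```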
6. Install Kubernetes (RKE cluster setup)
6.1 Install the Kubernetes command-line tool kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y kubectl
6.2 Install RKE (Rancher Kubernetes Engine), a Kubernetes distribution and command-line tool
Download: https://github.com/rancher/rke/releases (Release v1.1.3)
I used rke_linux-amd64 and uploaded it to /home/rancher, then renamed it and made it executable:
mv rke_linux-amd64 rke
chmod +x rke
6.3 Create the rancher-cluster.yml file
[root@rancher1 rancher]# cat /home/rancher/rancher-cluster.yml
nodes:
  - address: 192.168.198.150
    internal_address: 10.10.10.150
    user: rancher
    role: [controlplane, worker, etcd]
  - address: 192.168.198.151
    internal_address: 10.10.10.151
    user: rancher
    role: [controlplane, worker, etcd]
  - address: 192.168.198.152
    internal_address: 10.10.10.152
    user: rancher
    role: [controlplane, worker, etcd]
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
# Required when using external TLS termination with ingress-nginx v0.22 or above.
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
File parameter reference:
Option | Required | Description |
address | yes | Public DNS name or IP address |
user | yes | A user that can run docker commands |
role | yes | List of Kubernetes roles assigned to the node |
internal_address | no | Private DNS name or IP address for internal cluster traffic |
ssh_key_path | no | Path to the SSH private key used to authenticate to the node (default: ~/.ssh/id_rsa) |
6.4 Run rke
cd /home/rancher
./rke up --config rancher-cluster.yml
When it completes, the last line of output should read: Finished building Kubernetes cluster successfully.
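rke up also writes a kubeconfig next to the cluster file (RKE names it kube_config_&lt;cluster-file&gt;, so kube_config_rancher-cluster.yml here); kubectl needs it before the checks below will work. A minimal sketch:

```shell
# Copy the RKE-generated kubeconfig to the default location kubectl reads.
# The guard lets the snippet run even on a machine where rke has not been run.
mkdir -p "$HOME/.kube"
if [ -f kube_config_rancher-cluster.yml ]; then
  cp kube_config_rancher-cluster.yml "$HOME/.kube/config"
fi
```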
6.5 If the installation fails and you need to reinstall, clean the nodes manually
# Stop the services
systemctl disable kubelet.service
systemctl disable kube-scheduler.service
systemctl disable kube-proxy.service
systemctl disable kube-controller-manager.service
systemctl disable kube-apiserver.service
systemctl stop kubelet.service
systemctl stop kube-scheduler.service
systemctl stop kube-proxy.service
systemctl stop kube-controller-manager.service
systemctl stop kube-apiserver.service
# Remove all containers
docker rm -f $(docker ps -qa)
# Remove all container volumes
docker volume rm $(docker volume ls -q)
# Unmount kubelet-related mounts
for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do umount $mount; done
# Back up directories
mv /etc/kubernetes /etc/kubernetes-bak-$(date +"%Y%m%d%H%M")
mv /var/lib/etcd /var/lib/etcd-bak-$(date +"%Y%m%d%H%M")
mv /var/lib/rancher /var/lib/rancher-bak-$(date +"%Y%m%d%H%M")
mv /opt/rke /opt/rke-bak-$(date +"%Y%m%d%H%M")
# Remove leftover paths
rm -rf /etc/ceph \
/etc/cni \
/opt/cni \
/run/secrets/kubernetes.io \
/run/calico \
/run/flannel \
/var/lib/calico \
/var/lib/cni \
/var/lib/kubelet \
/var/log/containers \
/var/log/pods \
/var/run/calico
# Clean up network interfaces
network_interface=`ls /sys/class/net`
for net_inter in $network_interface;
do
if ! echo $net_inter | grep -qiE 'lo|docker0|eth*|ens*';then
ip link delete $net_inter
fi
done
# Kill leftover processes listening on Kubernetes ports
port_list="80 443 6443 2376 2379 2380 8472 9099 10250 10254"
for port in $port_list
do
pid=`netstat -atlnup|grep $port |awk '{print $7}'|awk -F '/' '{print $1}'|grep -v -|sort -rnk2|uniq`
if [[ -n $pid ]];then
kill -9 $pid
fi
done
pro_pid=`ps -ef |grep -v grep |grep kube|awk '{print $2}'`
if [[ -n $pro_pid ]];then
kill -9 $pro_pid
fi
# Flush the iptables tables
## Caution: if the node has custom iptables rules, review these commands before running them
sudo iptables --flush
sudo iptables --flush --table nat
sudo iptables --flush --table filter
sudo iptables --table nat --delete-chain
sudo iptables --table filter --delete-chain
systemctl restart docker
6.6 Test the cluster with kubectl
Check connectivity and confirm that all nodes are in the Ready state:
[rancher@rancher1 ~]$ kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.198.150 Ready controlplane,etcd,worker 30h v1.18.3
192.168.198.151 Ready controlplane,etcd,worker 30h v1.18.3
192.168.198.152 Ready controlplane,etcd,worker 30h v1.18.3
6.7 Check that the pods are running
[rancher@rancher1 ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
cattle-system cattle-cluster-agent-57d9454cb6-sn2qk 1/1 Running 0 2m54s
cattle-system cattle-node-agent-4rh9f 1/1 Running 0 104s
cattle-system cattle-node-agent-jfvzr 1/1 Running 0 117s
cattle-system cattle-node-agent-k47vb 1/1 Running 0 2m
cattle-system rancher-64b9795c65-cdmkg 1/1 Running 0 4m22s
cattle-system rancher-64b9795c65-gr6ld 1/1 Running 0 4m22s
cattle-system rancher-64b9795c65-stnfv 1/1 Running 0 4m22s
ingress-nginx default-http-backend-598b7d7dbd-lw97l 1/1 Running 2 35h
ingress-nginx nginx-ingress-controller-424pp 1/1 Running 2 35h
ingress-nginx nginx-ingress-controller-k664p 1/1 Running 2 35h
ingress-nginx nginx-ingress-controller-twlql 1/1 Running 2 35h
kube-system canal-8m7xt 2/2 Running 4 35h
kube-system canal-fr5kf 2/2 Running 4 35h
kube-system canal-gj9sb 2/2 Running 4 35h
kube-system coredns-849545576b-dknjf 1/1 Running 2 35h
kube-system coredns-849545576b-dr4hk 1/1 Running 2 35h
kube-system coredns-autoscaler-5dcd676cbd-dtzjg 1/1 Running 2 35h
kube-system metrics-server-697746ff48-j6hll 1/1 Running 2 35h
kube-system rke-coredns-addon-deploy-job-lg99g 0/1 Completed 0 35h
kube-system rke-ingress-controller-deploy-job-d9nlc 0/1 Completed 0 35h
kube-system rke-metrics-addon-deploy-job-9zcmb 0/1 Completed 0 35h
kube-system rke-network-plugin-deploy-job-rvbdr 0/1 Completed 0 35h
7. Install Rancher
7.1 Install the CLI tools
kubectl - the Kubernetes command-line tool.
helm - the package manager for Kubernetes. See the Helm version requirements to pick a Helm version for installing Rancher.
kubectl was installed earlier.
Helm still needs to be installed; download it from https://github.com/helm/helm/releases
Download helm-v3.2.4-linux-amd64.tar.gz and upload it to /home/rancher.
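The archive then needs to be unpacked and the binary placed on the PATH (the file name below assumes the standard .tar.gz release archive, which unpacks into a linux-amd64/ directory):

```shell
HELM_TGZ=helm-v3.2.4-linux-amd64.tar.gz
# Unpack only if the archive is present on this machine.
if [ -f "$HELM_TGZ" ]; then
  tar -zxf "$HELM_TGZ"
  sudo mv linux-amd64/helm /usr/local/bin/helm
fi
```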
7.2 Add the helm chart repository
helm repo add rancher-stable http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/stable
7.3 Create a namespace for Rancher
kubectl create namespace cattle-system
7.4 Install Rancher Server with a self-signed SSL certificate
[rancher@rancher1 ~]$ mkdir ssl
[rancher@rancher1 ~]$ cd ssl/
#Save the one-shot SSL certificate generation script as create_self-signed-cert.sh (script source: https://rancher2.docs.rancher.cn/docs/installation/options/self-signed-ssl/_index/)
#Then make it executable (chmod +x) and run it to generate the certificates
[rancher@node1 ssl]$ ./create_self-signed-cert.sh --ssl-domain=rancher.my.org --ssl-trusted-ip=192.168.198.130 --ssl-size=2048 --ssl-date=3650
[rancher@rancher1 ssl]$ ls
cacerts.pem cacerts.srl cakey.pem openssl.cnf rancher.my.org.crt rancher.my.org.csr rancher.my.org.key ssl.sh tls.crt tls.key
#Create the secret holding the serving certificate and private key
kubectl -n cattle-system create \
secret tls tls-rancher-ingress \
--cert=/home/rancher/ssl/tls.crt \
--key=/home/rancher/ssl/tls.key
#Create the secret holding the CA certificate
kubectl -n cattle-system create secret \
generic tls-ca \
--from-file=/home/rancher/ssl/cacerts.pem
Install Rancher and wait for the rollout to finish:
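The helm install invocation itself is not shown above; with the repo added in 7.2 and the secrets created in 7.4, the standard chart options for a private CA would give something like this (hostname assumed to be rancher.my.org, as in the certificate):

```shell
# ingress.tls.source=secret tells the chart to use the tls-rancher-ingress secret;
# privateCA=true tells Rancher the certificate is signed by the private CA in tls-ca.
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set ingress.tls.source=secret \
  --set privateCA=true
```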
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
8. Install nginx
Install nginx on 192.168.198.130.
The configuration is as follows (reference: https://rancher2.docs.rancher.cn/docs/installation/options/nginx/_index):
[root@localhost nginx]# cat /etc/nginx/nginx.conf
worker_processes 4;
worker_rlimit_nofile 40000;
events {
    worker_connections 8192;
}
stream {
    upstream rancher_servers_http {
        least_conn;
        server 192.168.198.150:80 max_fails=3 fail_timeout=5s;
        server 192.168.198.151:80 max_fails=3 fail_timeout=5s;
        server 192.168.198.152:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }
    upstream rancher_servers_https {
        least_conn;
        server 192.168.198.150:443 max_fails=3 fail_timeout=5s;
        server 192.168.198.151:443 max_fails=3 fail_timeout=5s;
        server 192.168.198.152:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
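rancher.my.org has no real DNS record in this lab setup, so any machine that will open the UI needs the name pointed at the nginx load balancer, for example via /etc/hosts (assuming no internal DNS server):

```shell
# Append the mapping only if it is not already present; needs root on the client.
grep -q "rancher.my.org" /etc/hosts || \
  echo "192.168.198.130 rancher.my.org" | sudo tee -a /etc/hosts
```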
9. Browse to rancher.my.org
Open the System project and check whether its deployments are healthy. If one is not, inspect the container logs; if the logs report hostname resolution errors, edit the deployment and add a host alias under its networking settings.
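The host alias can be added directly in the deployment's pod template. A sketch of the relevant fragment (the IP and hostname follow this setup; the hostAliases field is standard Kubernetes):

```yaml
# Fragment of the Deployment spec; merges rancher.my.org into the pods' /etc/hosts.
spec:
  template:
    spec:
      hostAliases:
        - ip: "192.168.198.130"
          hostnames:
            - "rancher.my.org"
```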
Reference articles:
https://rancher2.docs.rancher.cn/docs/installation/k8s-install/_index/
https://www.bookstack.cn/read/rancher-v2.x/74435b0bf28d8990.md
https://blog.csdn.net/wc1695040842/article/details/105253706/