Managing a Kubernetes Cluster with Rancher



1. Introduction to Rancher

1.1 Rancher Overview

Rancher is an open-source, enterprise-grade multi-cluster Kubernetes management platform. It centralizes the deployment and management of Kubernetes clusters across hybrid clouds and on-premises data centers, helps keep clusters secure, and accelerates enterprise digital transformation.

More than 40,000 organizations use Rancher every day to innovate faster.

Official documentation: https://docs.rancher.cn/

1.2 Rancher vs. Kubernetes

Both Rancher and Kubernetes schedule and orchestrate containers, but Rancher does more than manage application containers: it also manages Kubernetes clusters themselves. Rancher 2.x is built on the Kubernetes scheduling engine, and through Rancher's abstractions a user can deploy containers to a Kubernetes cluster without being familiar with Kubernetes concepts.

2. Lab Environment

K8s cluster role   IP               Hostname      Version
Control node       192.168.40.180   k8s-master1   v1.20.6
Worker node        192.168.40.181   k8s-node1     v1.20.6
Worker node        192.168.40.182   k8s-node2     v1.20.6
Rancher            192.168.40.138   rancher       v2.5.7

3. Installing and Configuring Rancher

3.1 Install Rancher

# Pre-pull the agent image on the cluster node and the server image on the Rancher host
[root@k8s-master1 ~]# docker pull rancher/rancher-agent:v2.5.7
[root@rancher ~]# docker pull rancher/rancher:v2.5.7

# Note: --restart=unless-stopped always restarts the container when it exits, except for containers that were already stopped when the Docker daemon started
[root@rancher ~]# docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged --name rancher rancher/rancher:v2.5.7

[root@rancher ~]# docker ps -a|grep rancher
a893cc6d7bc3   rancher/rancher:v2.5.7   "entrypoint.sh"   3 seconds ago   Up 2 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   rancher

3.2 Log in to Rancher

Open the Rancher IP address in a browser. Because a trusted certificate is not in use, the browser shows a security warning, which can be ignored.
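Before opening the browser, you can also confirm from the command line that Rancher is answering over HTTPS (a quick check with curl; -k skips verification of the self-signed certificate):

[root@k8s-master1 ~]# curl -k -I https://192.168.40.138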

[Screenshots: Rancher initial login and setup]

Setting the display language to Chinese:

[Screenshot: language setting]

4. Managing an Existing Kubernetes Cluster with Rancher

Choose Add Cluster and import the existing cluster.

[Screenshots: Add Cluster → Import an existing cluster; the last screen shows the registration command to run]

Run the registration command generated by Rancher (shown in the screenshot above) on the control node k8s-master1:

[root@k8s-master1 ~]# curl --insecure -sfL https://192.168.40.138/v3/import/7jzb5nnjjjpqnqnpv9g6p26z4j4c5qncgbttwlr8s2gfl2qk7th6x6_c-n5w99.yaml | kubectl apply -f -
error: no objects passed to apply

# Run the command again:
[root@k8s-master1 ~]# curl --insecure -sfL https://192.168.40.138/v3/import/7jzb5nnjjjpqnqnpv9g6p26z4j4c5qncgbttwlr8s2gfl2qk7th6x6_c-n5w99.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-6539558 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created

[root@k8s-master1 ~]# kubectl get ns
NAME              STATUS   AGE
cattle-system     Active   7m4s
default           Active   5d1h
fleet-system      Active   5m34s
kube-node-lease   Active   5d1h
kube-public       Active   5d1h
kube-system       Active   5d1h
[root@k8s-master1 ~]# kubectl get pods -n cattle-system 
NAME                                    READY   STATUS    RESTARTS   AGE
cattle-cluster-agent-6bdf9bfddd-77vtd   1/1     Running   0          6m5s
[root@k8s-master1 ~]# kubectl get pods -n fleet-system 
NAME                           READY   STATUS    RESTARTS   AGE
fleet-agent-55bfc495bd-8xgsd   1/1     Running   0          3m55s
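If the agent pods are still starting, their rollout can be watched before switching back to the Rancher UI (standard kubectl commands against the two deployments created by the import manifest):

[root@k8s-master1 ~]# kubectl -n cattle-system rollout status deployment/cattle-cluster-agent
[root@k8s-master1 ~]# kubectl -n fleet-system rollout status deployment/fleet-agent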

[Screenshot: the imported cluster listed in Rancher]

Fixing the unhealthy component status (kube-scheduler and kube-controller-manager are started with --port=0 and bound to 127.0.0.1, so nothing listens on the insecure health ports 10251/10252 that kubectl get cs probes):

# Cause
[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}

# Edit the kube-scheduler static pod manifest
[root@k8s-master1 prometheus]# vim /etc/kubernetes/manifests/kube-scheduler.yaml

# Make the following changes (see the manifest excerpt after this list)
1) Change --bind-address=127.0.0.1 to --bind-address=192.168.40.180 (192.168.40.180 is the IP of the control node k8s-master1)
2) Under each httpGet: block, change host from 127.0.0.1 to 192.168.40.180 (two occurrences)
3) Delete the --port=0 line
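After the edits, the relevant parts of kube-scheduler.yaml should look roughly like the excerpt below (a minimal sketch of a kubeadm-generated manifest for v1.20; unrelated flags and fields are omitted, and the probe port/scheme are shown as kubeadm typically generates them):

# /etc/kubernetes/manifests/kube-scheduler.yaml (excerpt after editing)
spec:
  containers:
  - command:
    - kube-scheduler
    - --bind-address=192.168.40.180   # was 127.0.0.1
    # the "- --port=0" line has been removed
    livenessProbe:
      httpGet:
        host: 192.168.40.180          # was 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
    startupProbe:
      httpGet:
        host: 192.168.40.180          # was 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS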

# Restart kubelet on every node
[root@k8s-node1 ~]# systemctl restart kubelet
[root@k8s-node2 ~]# systemctl restart kubelet

# The corresponding port is now being listened on by the host
[root@k8s-master1 prometheus]# ss -antulp | grep :10251	
tcp    LISTEN     0      128      :::10251                :::*                   users:(("kube-scheduler",pid=36945,fd=7))

# Edit the kube-controller-manager static pod manifest
[root@k8s-master1 prometheus]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml

# Make the same three changes (see the sed sketch after this list)
1) Change --bind-address=127.0.0.1 to --bind-address=192.168.40.180 (192.168.40.180 is the IP of the control node k8s-master1)
2) Under each httpGet: block, change host from 127.0.0.1 to 192.168.40.180 (two occurrences)
3) Delete the --port=0 line
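The same three edits can also be scripted (a hedged sketch using sed; back up the manifest first, and the same commands apply to kube-scheduler.yaml by changing the file name):

[root@k8s-master1 ~]# cp /etc/kubernetes/manifests/kube-controller-manager.yaml /root/kube-controller-manager.yaml.bak
[root@k8s-master1 ~]# sed -i 's/--bind-address=127.0.0.1/--bind-address=192.168.40.180/' /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@k8s-master1 ~]# sed -i 's/host: 127.0.0.1/host: 192.168.40.180/' /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@k8s-master1 ~]# sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml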

# Restart kubelet on every node
[root@k8s-node1 ~]# systemctl restart kubelet
[root@k8s-node2 ~]# systemctl restart kubelet

# Check the status again
[root@k8s-master1 prometheus]# kubectl get cs 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

[root@k8s-master1 prometheus]# ss -antulp | grep :10252
tcp    LISTEN     0      128      :::10252                :::*                   users:(("kube-controller",pid=41653,fd=7))

[Screenshot: all components reported as healthy in Rancher]

5. Deploying Monitoring with Rancher

1) Enable Rancher cluster-level monitoring. Starting the monitoring stack can take a while; expect to wait 10-20 minutes.

[Screenshot: enabling cluster monitoring]

Select version 0.2.1 for the monitoring component, leave the other settings at their defaults, and click Enable Monitoring.

[Screenshots: monitoring configuration and deployment progress]
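Once monitoring has finished deploying, the workloads can also be checked from the command line (a hedged check; Rancher 2.5's legacy cluster monitoring normally installs its components into the cattle-prometheus namespace):

[root@k8s-master1 ~]# kubectl get pods -n cattle-prometheus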

2) Cluster monitoring

[Screenshot: cluster monitoring dashboard]

3) Kubernetes component monitoring

[Screenshot: Kubernetes component monitoring]

4) Rancher log collection

[Screenshot: log collection settings]

6. Managing the Cluster from the Rancher Dashboard: Deploying a Tomcat Service

1) Create a namespace

[Screenshots: creating the namespace]

2) Create a Deployment

[Screenshots: creating the Tomcat Deployment]

3) Create a Service

[Screenshots: creating a NodePort Service for the Deployment]

4) Click the node port 30180/TCP to access the Tomcat service running inside the cluster.

[Screenshot: the Tomcat page reached through the node port]
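For reference, the resources created through the dashboard correspond roughly to the manifests below (a minimal sketch; the namespace name, labels, and image tag are assumptions, while node port 30180 comes from step 4 and 8080 is Tomcat's default HTTP port):

# tomcat-demo.yaml -- hypothetical equivalent of the dashboard steps above
apiVersion: v1
kind: Namespace
metadata:
  name: tomcat-demo              # assumed name; use the namespace created in step 1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  namespace: tomcat-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5        # assumed tag; match the image chosen in the dashboard
        ports:
        - containerPort: 8080    # Tomcat's default HTTP port
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: tomcat-demo
spec:
  type: NodePort
  selector:
    app: tomcat
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30180              # the node port shown in step 4

Apply with: kubectl apply -f tomcat-demo.yaml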

