Rancher v2.4.8 Container Management Platform: Cluster Setup (Based on k8s)


Overview

1. Prepare three VMware + Ubuntu (ubuntu-20.04-live-server-amd64.iso) servers: one master and two workers (master, node1, node2).

2. Install Docker on all three servers.

3. On the master node, start Rancher with Docker.

4. Log in to the management UI and add a cluster.

5. Copy the generated "add cluster" command and run it on each node (this takes a while to complete).

I. Hardware Requirements

OS            Node IP         Node Type       Memory/CPU     hostname
Ubuntu 20.04  192.168.1.106   master node     2 GB / 4 cores  master
Ubuntu 20.04  192.168.1.108   worker node 1   2 GB / 4 cores  node1
Ubuntu 20.04  192.168.1.109   worker node 2   2 GB / 4 cores  node2

II. Environment Preparation

1. Install the stable ubuntu-20.04.3-live-server-amd64.iso on VMware virtual machines and configure a static IP. (Installation steps are omitted here; they are covered in an earlier article.)
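The static-IP step that is omitted above is typically done with netplan on Ubuntu 20.04. Below is a minimal sketch; the interface name ens33, the gateway, and the DNS servers are assumptions (check `ip addr` on your machine). It writes a candidate file to the current directory for review before copying it into /etc/netplan/:

```shell
# Write a candidate netplan config locally for review (sketch).
# ens33, the gateway, and the DNS servers are assumptions - adjust to your network.
cat > 01-static-ip.yaml <<'EOF'
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: false
      addresses: [192.168.1.106/24]   # use .108 / .109 on node1 / node2
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]
EOF
# After reviewing, copy it into place and apply:
#   sudo cp 01-static-ip.yaml /etc/netplan/ && sudo netplan apply
echo "wrote 01-static-ip.yaml"
```

Repeat on each VM, changing only the address to match the table above.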

III. Installing Docker

# Remove old versions
sudo apt-get remove docker docker-engine docker.io containerd runc
# Refresh the package index
sudo apt-get update
# Allow apt to use repositories over HTTPS
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up the stable repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Refresh the package index again
sudo apt-get update
# Install the Docker engine
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Add the current user to the docker group (note: $USER, not $user)
sudo gpasswd -a $USER docker
# Activate the new group membership in the current shell
newgrp docker
# Check the version
docker version
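A quick sanity check after the steps above confirms the daemon is running and the group change took effect. A sketch (the script name check-docker.sh is arbitrary; `bash -n` only parses it, so it is safe to review before running):

```shell
# Sanity-check the Docker installation (sketch; assumes the install steps above).
cat > check-docker.sh <<'EOF'
#!/usr/bin/env bash
# Is the current user in the docker group?
if id -nG | grep -qw docker; then
  echo "user is in the docker group"
else
  echo "not in docker group yet - log out and back in, or use: newgrp docker"
fi
# Does the daemon answer without sudo?
if docker info >/dev/null 2>&1; then
  echo "docker daemon reachable"
else
  echo "docker daemon not reachable - check: sudo systemctl status docker"
fi
EOF
chmod +x check-docker.sh
# Parse the script without executing it
bash -n check-docker.sh && echo "check-docker.sh: syntax OK"
```

Run it on all three servers before moving on to the Rancher install.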

IV. Installing Rancher

sudo docker run -d --privileged --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:v2.4.8
yang@master:~$ sudo docker run -d --privileged --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:v2.4.8
Unable to find image 'rancher/rancher:v2.4.8' locally
v2.4.8: Pulling from rancher/rancher
f08d8e2a3ba1: Pull complete 
3baa9cb2483b: Pull complete 
94e5ff4c0b15: Pull complete 
1860925334f9: Pull complete 
ff9fca190532: Pull complete 
9edbd5af6f75: Pull complete 
39647e735cf8: Pull complete 
3470d6dc42b2: Pull complete 
0dceba04daf4: Pull complete 
4ef3bd369bd9: Pull complete 
72d28ebec0e3: Pull complete 
3071d34067a8: Pull complete 
7b7c203ef611: Pull complete 
ed9cc207940b: Pull complete 
687ea77f4cb7: Pull complete 
b390c49bee0c: Pull complete 
d2ae58f8a2c4: Pull complete 
e82824cbbb83: Pull complete 
2cca9f7c734e: Pull complete 
Digest: sha256:5a16a6a0611e49d55ff9d9fbf278b5ca2602575de8f52286b18158ee1a8a5963
Status: Downloaded newer image for rancher/rancher:v2.4.8
ba1afc6482db94f2c5d9553286bd0a11c5df78b7f3106164e894a66b9e18c9cc

Note: wait for the image to download and the container to start; once it is up, access the UI with the host's real IP.

Access URL:

http://<host real IP> (default port 80)

Set and confirm a new admin password.

Check "I agree to the terms and conditions for using Rancher", then click Continue.

Click Save URL.
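Before moving on, it can help to confirm the server answers from another machine. Rancher exposes a /ping health endpoint that replies "pong"; a sketch (192.168.1.106 is this guide's master IP; `-k` accepts the self-signed certificate):

```shell
# Probe the Rancher server's health endpoint (sketch; adjust the IP to your master).
RANCHER_IP="192.168.1.106"
RANCHER_URL="https://${RANCHER_IP}/ping"
# A healthy server replies with the body "pong"
curl -sk --max-time 5 "${RANCHER_URL}" || echo "no response from ${RANCHER_URL}"
```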

V. Installing the kubectl Command-Line Tool

1. Project page: https://github.com/kubernetes/kubernetes

2. In the CHANGELOG, follow the binary download links.

3. Find the Client Binaries section (the kubernetes client package, which contains kubectl), pick the build for your OS (here: Linux, amd64), then copy the link or download it directly.

4. Upload kubernetes-client-linux-amd64.tar.gz to the master server.

yang@master:~/ya$ ls
kubernetes-client-linux-amd64.tar.gz
yang@master:~/ya$ tar xf kubernetes-client-linux-amd64.tar.gz 
yang@master:~/ya$ cd kubernetes/client/bin/
yang@master:~/ya/kubernetes/client/bin$ sudo chmod +x kubectl
yang@master:~/ya/kubernetes/client/bin$ sudo mv ./kubectl /usr/local/bin/kubectl
# Check the version; if version info is returned, the installation succeeded
yang@master:/usr/local/bin$ ./kubectl version --client
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:16:20Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"}
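If uploading the tarball is inconvenient, the same binary can be fetched directly from the official download host and its checksum verified. A sketch (v1.23.0 matches the version shown above; the download commands are left commented since they need internet access):

```shell
# Fetch kubectl v1.23.0 directly and verify it (sketch; needs internet access).
KUBECTL_VERSION="v1.23.0"
BASE_URL="https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64"
echo "would fetch: ${BASE_URL}/kubectl"
# curl -LO "${BASE_URL}/kubectl"
# curl -LO "${BASE_URL}/kubectl.sha256"
# echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
# sudo install -m 0755 kubectl /usr/local/bin/kubectl
# kubectl version --client
```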

VI. Adding a Cluster

1. Log in to Rancher and click "Add Cluster".

2. Choose "Custom" (existing nodes).

3. Enter a cluster name.

4. Change the NodePort range to 1-65535.

5. Click Next to create the cluster.

6. Under Node Options, check Etcd, Control Plane, and Worker, then copy the generated command to run on the master server.

7. Run the command on the master server:

yang@master:~$ sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.8 --server https://192.168.1.106 --token vqfrscgwp4wnwsndpplc9p5kkcvhwxlrhpx42fd8gpj64vlnz49cvm --ca-checksum a056c9a4d6fe40e2fb7d0b7aed3241ad79352e9750e8034d527a6d71cb0cf82b --etcd --controlplane --worker
Unable to find image 'rancher/rancher-agent:v2.4.8' locally
v2.4.8: Pulling from rancher/rancher-agent
f08d8e2a3ba1: Already exists 
3baa9cb2483b: Already exists 
94e5ff4c0b15: Already exists 
1860925334f9: Already exists 
e5d12d0f9a84: Pull complete 
5116e686c448: Pull complete 
d4f72327bfd0: Pull complete 
61bcbcce7861: Pull complete 
fca783017521: Pull complete 
29ab00ed6801: Pull complete 
Digest: sha256:c8a111e6250a313f1dd5d34696ddbef9068f70ddf4b15ab4c9cefd0ea39b76c1
Status: Downloaded newer image for rancher/rancher-agent:v2.4.8
5d0dab9b2c081057f482025d477b329c7a90464289b9209675755280842813bf

8. Check the status in the Rancher management UI (many images are pulled; be patient).

9. Run the same command on the node servers to join them to the cluster as well.

VII. Checking Node Status with kubectl

1. Create the config file

Create a config file under /home/yang/.kube:

sudo mkdir -m 777 /home/yang/.kube
cd /home/yang/.kube/
sudo touch config

2. In the Rancher management UI, click "Kubeconfig File" on the right side of the cluster dashboard.

3. Copy its contents and paste them into the config file.
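Note that mkdir -m 777 works but leaves the directory world-writable, and the kubeconfig contains a bearer token that grants cluster access. A tighter sketch using owner-only permissions (same paths as above, written via $HOME so no sudo is needed):

```shell
# Create ~/.kube with owner-only permissions instead of 777 (sketch).
KUBE_DIR="${HOME}/.kube"
mkdir -p "${KUBE_DIR}"
chmod 700 "${KUBE_DIR}"
touch "${KUBE_DIR}/config"
chmod 600 "${KUBE_DIR}/config"   # the token inside grants cluster access
ls -ld "${KUBE_DIR}" "${KUBE_DIR}/config"
```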

yang@master:~/.kube$ cat config 
apiVersion: v1
kind: Config
clusters:
- name: "k8s-cluster"
  cluster:
    server: "https://192.168.1.106/k8s/clusters/c-rvp4f"
    certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpRENDQ\
      VM2Z0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQTdNUnd3R2dZRFZRUUtFeE5rZVc1aGJXbGoKY\
      kdsemRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR1Z1WlhJdFkyRXdIa\
      GNOTWpFeApNakV6TURJME16STJXaGNOTXpFeE1qRXhNREkwTXpJMldqQTdNUnd3R2dZRFZRUUtFe\
      E5rZVc1aGJXbGpiR2x6CmRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR\
      1Z1WlhJdFkyRXdXVEFUQmdjcWhrak8KUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVExamNjUDJDRkNiY\
      XVYUEEvZWFqMmlUMmh1SWRoS3NkZmI4REhpYnN2egptMkZ1M1dCRXQ2NlkyMDZTL3BFT2FKTll1Q\
      3lBaytHYjhYZjFITnRqbEhlVG95TXdJVEFPQmdOVkhROEJBZjhFCkJBTUNBcVF3RHdZRFZSMFRBU\
      UgvQkFVd0F3RUIvekFLQmdncWhrak9QUVFEQWdOSUFEQkZBaUEwaUo2a0psSW8KeTNIS0RxN2NkT\
      UgyaEZCRmM1VUdQRk5oZVRYNVBlOU0wQUloQUkzNEZwR0xNeUoxZE5GQnYrNHhTR0kwQVlPUwpmO\
      FlGVVdJQjFBOVB3clo0Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0="
- name: "k8s-cluster-node1"
  cluster:
    server: "https://192.168.1.108:6443"
    certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQ\
      WFxZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKT\
      FdOaE1CNFhEVEl4TVRJeE16QTBNREV6TkZvWERUTXhNVEl4TVRBME1ERXpORm93RWpFUU1BNEdBM\
      VVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ\
      0VCQUxzU2UvMzFFTldECkE1YUljRkJwY0NkUGNQMGlKTWozUU94V2VPQ2pzbDhOU0pXSngrYUlhV\
      1pUZDRmSzIwZHhPTnBjdUFiUXEyUWUKa3NwSExLRTNlUDJqZXpyekZJZndaTFdKTUtUM0t1RkR6c\
      y8zaGhNU1NSczVwTUczYWo1bmVGajFYaTJLa2svWAo4VnVtS1FXQjlmSGRRdDVwclU4ZzBuYUZiQ\
      mw0S0dicUJ4RUNRa0ZDV0hhM1U0RXpTVkpNbnRFRG1ZbDVxeHlFClV6VHRzUUx1dEpEdFBDdzFHb\
      HB4Vndob1VtOXBYQ2pROElFYWsrU0g5c0o0a1JEOUxMSC9sMmkyWGxROUZkUksKaEZ4OUFCcUp4U\
      khSaVc5K1dyTE9wUk1ENFBxL0ZkbCtqcDNNcTVtQ2lPN09ac1pNN1hUcWg1M1FEbFAzbjZDcApEa\
      EtTVmZwMU02RUNBd0VBQWFNak1DRXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93U\
      UZNQU1CCkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ2sxRTNQUE5JbENFc3lTRHhMbVFkZ\
      25WUzNRT09oakczWXoKSXRPNVlJRVcxbUlDTlNBVUxjQ1pLaHFLSjRERkVIZEUyV0p1WGhhQmtIM\
      nBQOFljQVpIVktWUHRGZGJRK29aMQpGUktDeXZHU2lQaTlQZ2VITDRCb2FHQ21wb2ZRdDFaaisyQ\
      TFXQ3MvUUV4U3FyTXE2cXF1WXp0L3BKMFJIMFdZCmRMckFxL0NDRHMrbzlOQW4xQW5VYWZtUzB0R\
      0FKb2R4SXZYM0haVVpNSUk5OWZLMDhIcWlKSVEyS0V5bnk3ZWwKUnVIR3Jlc0I4Y0lTR2pOZDg5M\
      W16OEZJTk1QeExoNDFwWURjZjNqTEZ5VS83anZpZjUrbEdiRHlZM01Mb09UOApLVld4UXhSRDVka\
      lF3MVZFZHlKVUVLK0EwMThOenBRM0JZenJDejI2THloZmZPVit3QVE9Ci0tLS0tRU5EIENFUlRJR\
      klDQVRFLS0tLS0K"
- name: "k8s-cluster-node2"
  cluster:
    server: "https://192.168.1.109:6443"
    certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQ\
      WFxZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKT\
      FdOaE1CNFhEVEl4TVRJeE16QTBNREV6TkZvWERUTXhNVEl4TVRBME1ERXpORm93RWpFUU1BNEdBM\
      VVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ\
      0VCQUxzU2UvMzFFTldECkE1YUljRkJwY0NkUGNQMGlKTWozUU94V2VPQ2pzbDhOU0pXSngrYUlhV\
      1pUZDRmSzIwZHhPTnBjdUFiUXEyUWUKa3NwSExLRTNlUDJqZXpyekZJZndaTFdKTUtUM0t1RkR6c\
      y8zaGhNU1NSczVwTUczYWo1bmVGajFYaTJLa2svWAo4VnVtS1FXQjlmSGRRdDVwclU4ZzBuYUZiQ\
      mw0S0dicUJ4RUNRa0ZDV0hhM1U0RXpTVkpNbnRFRG1ZbDVxeHlFClV6VHRzUUx1dEpEdFBDdzFHb\
      HB4Vndob1VtOXBYQ2pROElFYWsrU0g5c0o0a1JEOUxMSC9sMmkyWGxROUZkUksKaEZ4OUFCcUp4U\
      khSaVc5K1dyTE9wUk1ENFBxL0ZkbCtqcDNNcTVtQ2lPN09ac1pNN1hUcWg1M1FEbFAzbjZDcApEa\
      EtTVmZwMU02RUNBd0VBQWFNak1DRXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93U\
      UZNQU1CCkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ2sxRTNQUE5JbENFc3lTRHhMbVFkZ\
      25WUzNRT09oakczWXoKSXRPNVlJRVcxbUlDTlNBVUxjQ1pLaHFLSjRERkVIZEUyV0p1WGhhQmtIM\
      nBQOFljQVpIVktWUHRGZGJRK29aMQpGUktDeXZHU2lQaTlQZ2VITDRCb2FHQ21wb2ZRdDFaaisyQ\
      TFXQ3MvUUV4U3FyTXE2cXF1WXp0L3BKMFJIMFdZCmRMckFxL0NDRHMrbzlOQW4xQW5VYWZtUzB0R\
      0FKb2R4SXZYM0haVVpNSUk5OWZLMDhIcWlKSVEyS0V5bnk3ZWwKUnVIR3Jlc0I4Y0lTR2pOZDg5M\
      W16OEZJTk1QeExoNDFwWURjZjNqTEZ5VS83anZpZjUrbEdiRHlZM01Mb09UOApLVld4UXhSRDVka\
      lF3MVZFZHlKVUVLK0EwMThOenBRM0JZenJDejI2THloZmZPVit3QVE9Ci0tLS0tRU5EIENFUlRJR\
      klDQVRFLS0tLS0K"
- name: "k8s-cluster-master"
  cluster:
    server: "https://192.168.1.106:6443"
    certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQ\
      WFxZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKT\
      FdOaE1CNFhEVEl4TVRJeE16QTBNREV6TkZvWERUTXhNVEl4TVRBME1ERXpORm93RWpFUU1BNEdBM\
      VVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ\
      0VCQUxzU2UvMzFFTldECkE1YUljRkJwY0NkUGNQMGlKTWozUU94V2VPQ2pzbDhOU0pXSngrYUlhV\
      1pUZDRmSzIwZHhPTnBjdUFiUXEyUWUKa3NwSExLRTNlUDJqZXpyekZJZndaTFdKTUtUM0t1RkR6c\
      y8zaGhNU1NSczVwTUczYWo1bmVGajFYaTJLa2svWAo4VnVtS1FXQjlmSGRRdDVwclU4ZzBuYUZiQ\
      mw0S0dicUJ4RUNRa0ZDV0hhM1U0RXpTVkpNbnRFRG1ZbDVxeHlFClV6VHRzUUx1dEpEdFBDdzFHb\
      HB4Vndob1VtOXBYQ2pROElFYWsrU0g5c0o0a1JEOUxMSC9sMmkyWGxROUZkUksKaEZ4OUFCcUp4U\
      khSaVc5K1dyTE9wUk1ENFBxL0ZkbCtqcDNNcTVtQ2lPN09ac1pNN1hUcWg1M1FEbFAzbjZDcApEa\
      EtTVmZwMU02RUNBd0VBQWFNak1DRXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93U\
      UZNQU1CCkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ2sxRTNQUE5JbENFc3lTRHhMbVFkZ\
      25WUzNRT09oakczWXoKSXRPNVlJRVcxbUlDTlNBVUxjQ1pLaHFLSjRERkVIZEUyV0p1WGhhQmtIM\
      nBQOFljQVpIVktWUHRGZGJRK29aMQpGUktDeXZHU2lQaTlQZ2VITDRCb2FHQ21wb2ZRdDFaaisyQ\
      TFXQ3MvUUV4U3FyTXE2cXF1WXp0L3BKMFJIMFdZCmRMckFxL0NDRHMrbzlOQW4xQW5VYWZtUzB0R\
      0FKb2R4SXZYM0haVVpNSUk5OWZLMDhIcWlKSVEyS0V5bnk3ZWwKUnVIR3Jlc0I4Y0lTR2pOZDg5M\
      W16OEZJTk1QeExoNDFwWURjZjNqTEZ5VS83anZpZjUrbEdiRHlZM01Mb09UOApLVld4UXhSRDVka\
      lF3MVZFZHlKVUVLK0EwMThOenBRM0JZenJDejI2THloZmZPVit3QVE9Ci0tLS0tRU5EIENFUlRJR\
      klDQVRFLS0tLS0K"

users:
- name: "k8s-cluster"
  user:
    token: "kubeconfig-user-trn62.c-rvp4f:p82f2nfbxnzqllvls9rpfmtxk8dkcnjjgm8rsl5nvq978gms5twpd8"


contexts:
- name: "k8s-cluster"
  context:
    user: "k8s-cluster"
    cluster: "k8s-cluster"
- name: "k8s-cluster-node1"
  context:
    user: "k8s-cluster"
    cluster: "k8s-cluster-node1"
- name: "k8s-cluster-node2"
  context:
    user: "k8s-cluster"
    cluster: "k8s-cluster-node2"
- name: "k8s-cluster-master"
  context:
    user: "k8s-cluster"
    cluster: "k8s-cluster-master"

current-context: "k8s-cluster"
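The kubeconfig above defines a single user and four contexts, one per API endpoint (the Rancher proxy plus each node's :6443 endpoint). The context names can be pulled straight out of the YAML; a sketch against a trimmed copy of the structure shown above (the names mirror that file):

```shell
# List the context names in a kubeconfig (sketch; mirrors the file shown above).
cat > sample-config <<'EOF'
contexts:
- name: "k8s-cluster"
- name: "k8s-cluster-node1"
- name: "k8s-cluster-node2"
- name: "k8s-cluster-master"
current-context: "k8s-cluster"
EOF
# Extract the quoted context names
awk -F'"' '/- name:/ {print $2}' sample-config
# With kubectl installed, the equivalent commands are:
#   kubectl config get-contexts
#   kubectl config use-context k8s-cluster-master
```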

4. Make kubectl pick up ~/.kube/config (note the path must match the file created above, /home/yang, not /home/rancher):

export KUBECONFIG=/home/yang/.kube/config

5. Confirm that kubectl can reach the cluster:

kubectl cluster-info

6. Check the node status:

yang@master:~$ kubectl get node
NAME     STATUS   ROLES                       AGE    VERSION
master   Ready    control-plane,etcd,worker   4d1h   v1.18.8
node1    Ready    control-plane,etcd,worker   4d     v1.18.8
node2    Ready    control-plane,etcd,worker   4d     v1.18.8

 

At this point, the Rancher-managed k8s cluster is fully configured and ready for deploying services!

 

