Configuring rook-ceph as Kubernetes Backend Storage


1 Rook Overview

1.1 Ceph Introduction

Ceph is a highly scalable distributed storage solution that provides object, file, and block storage. On each storage node you will find a filesystem holding the Ceph storage objects and a Ceph OSD (Object Storage Daemon) process. A Ceph cluster also runs Ceph MON (monitor) daemons, which keep the cluster highly available.
More on Ceph: https://www.cnblogs.com/itzgr/category/1382602.html

1.2 Rook Introduction

Rook is an open-source cloud-native storage orchestrator: it provides a platform, a framework, and support for integrating various storage solutions natively into cloud-native environments. It currently focuses on file, block, and object storage services for cloud-native workloads, implementing a self-managing, self-scaling, and self-healing distributed storage service.
Rook supports automated deployment, bootstrapping, configuration, provisioning, scaling up/down, upgrades, migration, disaster recovery, monitoring, and resource management. To deliver all of this, Rook relies on an underlying container orchestration platform such as Kubernetes.
Rook currently supports deploying Ceph, NFS, Minio Object Store, EdgeFS, Cassandra, and CockroachDB storage.
How Rook works:
Rook provides volume plugins that extend the Kubernetes storage system, so that Pods, via the kubelet, can mount block devices and filesystems managed by Rook.
The Rook Operator starts and monitors the whole underlying storage system (for example the Ceph MON and OSD Pods), and also manages CRDs, object stores, and filesystems.
The Rook Agent runs as a Pod on every Kubernetes node. Each agent Pod is configured with a FlexVolume driver that integrates with the Kubernetes volume control framework; node-local operations such as adding storage devices, mounting, formatting, and removing storage are all performed by this agent.
For more details, see the official sites:
https://rook.io
https://ceph.com/

2 Rook Deployment

2.1 Planning

A working Kubernetes cluster is assumed; create one yourself in advance.
Cluster version: v1.21.5
Kernel requirements:
RBD — the rbd kernel module must be available:

lsmod | grep rbd

CephFS — if you want to use CephFS, the minimum kernel version is 4.17.
Data disk: sdb
Host kernel: 5.4.182-1.el7.elrepo.x86_64
Cluster nodes: estarhaohao-centos7-master01, estarhaohao-centos7-master02, estarhaohao-centos7-master03
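The kernel prerequisites above can be checked with a small script. This is a sketch: the `kernel_at_least` helper is mine, and the 4.17 threshold for CephFS comes from the text above.

```shell
#!/bin/sh
# kernel_at_least MIN CUR -> succeeds if CUR >= MIN (version-sorted compare)
kernel_at_least() {
  min="$1"; cur="$2"
  [ "$(printf '%s\n%s\n' "$min" "$cur" | sort -V | head -n1)" = "$min" ]
}

# Strip the distro suffix: 5.4.182-1.el7.elrepo.x86_64 -> 5.4.182
cur="$(uname -r | cut -d- -f1)"
if kernel_at_least 4.17 "$cur"; then
  echo "kernel $cur is new enough for CephFS"
else
  echo "kernel $cur is too old for CephFS (need >= 4.17)"
fi

# The rbd module is needed for block storage
lsmod | grep -q '^rbd' || echo "rbd module not loaded; try: modprobe rbd"
```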
All files used below can be pulled from my gitee repository: https://gitee.com/estarhaohao/rook.git

2.2 Getting the YAML

[root@k8smaster01 ~]# git clone https://gitee.com/estarhaohao/rook.git

2.3 Labeling the Nodes

[root@estarhaohao-centos7-master01 ~]# kubectl label nodes  {estarhaohao-centos7-master01,estarhaohao-centos7-master02,estarhaohao-centos7-master03} app.rook.role=csi-provisioner app.rook.plugin=csi app.rook=storage ceph-mon=enabled ceph-osd=enabled ceph-mgr=enabled
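Labels like these are typically consumed via nodeAffinity in the CephCluster placement settings. A rough, abridged sketch of how cluster.yaml might reference them (the exact placement section depends on your cluster.yaml; this is an illustration, not the file from the repository):

```yaml
# cluster.yaml (excerpt, hypothetical) -- pin daemons to the labeled nodes
spec:
  placement:
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: ceph-mon
                  operator: In
                  values: ["enabled"]
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: ceph-osd
                  operator: In
                  values: ["enabled"]
```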

2.4 Deploying the Rook Operator

[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f common.yaml
[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f crds.yaml  # create the CRDs
[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f operator.yaml
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created
[root@estarhaohao-centos7-master01 ceph]# kubectl get pod -n rook-ceph
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-56496b9f8f-dblnq   1/1     Running   0          3m37s
rook-discover-2jp7z                   1/1     Running   0          2m53s
rook-discover-hqq27                   1/1     Running   0          2m53s
rook-discover-sx8c6                   1/1     Running   0          2m53s

Once the operator is up, create the cluster.

2.5 Creating the Cluster

[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f cluster.yaml
[root@estarhaohao-centos7-master01 ceph]# kubectl get pod -n rook-ceph
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-mwfg8                                            3/3     Running     0          5m53s
csi-cephfsplugin-provisioner-6446d9c9df-4r5xq                     6/6     Running     0          5m52s
csi-cephfsplugin-provisioner-6446d9c9df-rkd4k                     6/6     Running     0          5m52s
csi-cephfsplugin-vrlwm                                            3/3     Running     0          5m53s
csi-cephfsplugin-xfm8n                                            3/3     Running     0          5m53s
csi-rbdplugin-d87pk                                               3/3     Running     0          5m54s
csi-rbdplugin-k292p                                               3/3     Running     0          5m54s
csi-rbdplugin-provisioner-6998bd5986-j7729                        6/6     Running     0          5m53s
csi-rbdplugin-provisioner-6998bd5986-rp2wk                        6/6     Running     0          5m53s
csi-rbdplugin-r56c2                                               3/3     Running     0          5m54s
rook-ceph-crashcollector-estarhaohao-centos7-master01-564fhkv28   1/1     Running     0          4m7s
rook-ceph-crashcollector-estarhaohao-centos7-master02-547djvsw2   1/1     Running     0          3m18s
rook-ceph-crashcollector-estarhaohao-centos7-master03-787cdjq4b   1/1     Running     0          4m20s
rook-ceph-mgr-a-5bbf8f48d7-pdgkt                                  1/1     Running     0          3m51s
rook-ceph-mon-a-77d85f8944-56cgc                                  1/1     Running     0          5m59s
rook-ceph-mon-b-76d6564885-vxxhd                                  1/1     Running     0          5m30s
rook-ceph-mon-c-85858494c5-xjpf9                                  1/1     Running     0          4m7s
rook-ceph-operator-56496b9f8f-dblnq                               1/1     Running     0          9m53s
rook-ceph-osd-0-5c4f45d76-n6qc6                                   1/1     Running     0          3m24s
rook-ceph-osd-1-7f7f575577-v7lg5                                  1/1     Running     0          3m21s
rook-ceph-osd-2-5677f9d654-wzzzq                                  1/1     Running     0          3m18s
rook-ceph-osd-prepare-estarhaohao-centos7-master01-fvxq9          0/1     Completed   0          3m47s
rook-ceph-osd-prepare-estarhaohao-centos7-master02-x7swq          0/1     Completed   0          3m46s
rook-ceph-osd-prepare-estarhaohao-centos7-master03-9vhfc          0/1     Completed   0          3m45s
rook-discover-2jp7z                                               1/1     Running     0          9m9s
rook-discover-hqq27                                               1/1     Running     0          9m9s
rook-discover-sx8c6                                               1/1     Running     0          9m9s

Note: if the deployment fails, run the following on the master node: [root@k8smaster01 ceph]# kubectl delete -f ./
Then run the following cleanup on every node:
rm -rf /var/lib/rook
dmsetup ls            # check for leftover /dev/mapper/ceph-* entries
dmsetup remove_all    # remove them
dd if=/dev/zero of=/dev/sdb bs=512k count=1
wipefs -af /dev/sdb

2.6 Deploying the Toolbox

The toolbox is a container bundling the Rook/Ceph command-line tools; its commands are used to debug and test Rook, and ad-hoc Ceph operations are generally run inside this container.

[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f toolbox.yaml 
rook-ceph-tools-8574b74c5d-65x8r  1/1     Running     0          4s

2.7 Testing rook-ceph

Tip: you can define an alias so you don't have to type this long command every time.
[root@estarhaohao-centos7-master01 ceph]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph -s 
  cluster:
    id:     2fb51620-1a29-4d64-9ad9-616e6435924a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 28m)
    mgr: a(active, since 27m)
    mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay
    osd: 3 osds: 3 up (since 27m), 3 in (since 27m)

  data:
    pools:   4 pools, 97 pgs
    objects: 30 objects, 49 KiB
    usage:   3.0 GiB used, 897 GiB / 900 GiB avail
    pgs:     97 active+clean

  io:
    client:   852 B/s rd, 1 op/s rd, 0 op/s wr
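The alias mentioned above might look like this (a sketch; it assumes the toolbox Deployment is named rook-ceph-tools, as created by toolbox.yaml). With it in place, `ceph osd tree` can be run straight from the host prompt, as below.

```shell
# Append to ~/.bashrc so plain `ceph ...` and `rbd ...` run inside the toolbox Pod.
# Assumption: the toolbox Deployment is named rook-ceph-tools in namespace rook-ceph.
alias ceph='kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph'
alias rbd='kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- rbd'
```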
[root@estarhaohao-centos7-master01 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME                              STATUS  REWEIGHT  PRI-AFF
-1         0.87900  root default
-3         0.29300      host estarhaohao-centos7-master01
 0    hdd  0.29300          osd.0                              up   1.00000  1.00000
-7         0.29300      host estarhaohao-centos7-master02
 2    hdd  0.29300          osd.2                              up   1.00000  1.00000
-5         0.29300      host estarhaohao-centos7-master03
 1    hdd  0.29300          osd.1                              up   1.00000  1.00000
At this point the cluster is basically healthy.

3 Ceph Block Storage

3.1 Creating a StorageClass

Before provisioning block storage, a StorageClass and a storage pool must be created. Kubernetes needs both resources to interact with Rook and allocate persistent volumes (PVs).
Explanation: the configuration file below creates a pool named replicapool and a StorageClass named rook-ceph-block.
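The relevant part of storageclass.yaml looks roughly like this (an abridged sketch of the official Rook example; the CSI provisioner/node secret parameters are omitted for brevity):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
```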

[root@estarhaohao-centos7-master01 rbd]# pwd
/opt/rook/cluster/examples/kubernetes/ceph/csi/rbd
[root@estarhaohao-centos7-master01 rbd]# kubectl apply -f storageclass.yaml
[root@estarhaohao-centos7-master01 rbd]# kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   64m

3.2 Testing RBD

[root@estarhaohao-centos7-master01 rbd]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   4s
[root@estarhaohao-centos7-master01 rbd]# kubectl apply -f pod.yaml
pod/csirbd-demo-pod created
[root@estarhaohao-centos7-master01 rbd]# kubectl apply -f pvc.yaml
persistentvolumeclaim/rbd-pvc created
[root@estarhaohao-centos7-master01 rbd]# kubectl get pvc rbd-pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rbd-pvc   Bound    pvc-9f69bfab-a81b-41ea-93c7-59966661c867   1Gi        RWO            rook-ceph-block   5s
[root@estarhaohao-centos7-master01 rbd]# kubectl get pod csirbd-demo-pod
NAME              READY   STATUS    RESTARTS   AGE
csirbd-demo-pod   1/1     Running   0          70s
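For reference, a minimal PVC against this StorageClass might look like the following (a sketch consistent with the 1Gi/RWO claim shown above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
```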

A Running status means block storage is basically working.

4 Ceph File Storage

4.1 Creating a CephFilesystem

Ceph is deployed without CephFS support by default; the official default YAML below deploys the filesystem for file storage.
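The default filesystem.yaml is roughly the following (an abridged sketch of the official example; activeStandby matches the standby-replay daemon visible in `ceph -s` later):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
```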

[root@estarhaohao-centos7-master01 ceph]# pwd
/opt/rook/cluster/examples/kubernetes/ceph
[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f filesystem.yaml
[root@estarhaohao-centos7-master01 ceph]# kubectl get cephfilesystems.ceph.rook.io -n rook-ceph
NAME   ACTIVEMDS   AGE
myfs   1           55m

4.2 Creating the CephFS StorageClass

The official default YAML below deploys the StorageClass for file storage.
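The CephFS storageclass.yaml is roughly the following (an abridged sketch of the official example; the CSI secret parameters are omitted):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
reclaimPolicy: Delete
allowVolumeExpansion: true
```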

[root@estarhaohao-centos7-master01 cephfs]# pwd
/opt/rook/cluster/examples/kubernetes/ceph/csi/cephfs
[root@estarhaohao-centos7-master01 cephfs]# kubectl apply -f storageclass.yaml 
[root@estarhaohao-centos7-master01 cephfs]# kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   70m
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   56m

4.3 Testing CephFS

[root@estarhaohao-centos7-master01 cephfs]# kubectl apply -f pvc.yaml
[root@estarhaohao-centos7-master01 cephfs]# kubectl apply -f pod.yaml
[root@estarhaohao-centos7-master01 cephfs]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-e0d04036-a37c-4544-b71f-ac53f79c7832   1Gi        RWO            rook-cephfs    57m
[root@estarhaohao-centos7-master01 cephfs]# kubectl get pod
NAME                 READY   STATUS    RESTARTS   AGE
csicephfs-demo-pod   1/1     Running   0          57m
 

CephFS is basically working.

5 Ceph Object Storage

5.1 Creating a CephObjectStore

Before providing object storage, the corresponding support must be created first; the official default YAML below deploys a CephObjectStore.
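The default object.yaml is roughly the following (an abridged sketch of the official example; the my-store name matches the rgw Pod shown below):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  gateway:
    port: 80
    instances: 1
```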

[root@estarhaohao-centos7-master01 ceph]# pwd
/opt/rook/cluster/examples/kubernetes/ceph
[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f object.yaml
[root@estarhaohao-centos7-master01 ceph]# kubectl get pod -n rook-ceph | grep rgw
rook-ceph-rgw-my-store-a-57dd44d5b-lkgfw                          1/1     Running     0          2m51s

5.2 Creating a StorageClass
The official default YAML below deploys the StorageClass for object storage.
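storageclass-bucket-delete.yaml is roughly the following (an abridged sketch of the official example; the provisioner matches the one shown in the `kubectl get sc` output below):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
provisioner: rook-ceph.ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
```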

[root@estarhaohao-centos7-master01 ceph]# kubectl apply -f storageclass-bucket-delete.yaml
storageclass.storage.k8s.io/rook-ceph-delete-bucket created
[root@estarhaohao-centos7-master01 ceph]# kubectl get sc
NAME                      PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block           rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   85m
rook-ceph-delete-bucket   rook-ceph.ceph.rook.io/bucket   Delete          Immediate           false                  5s
rook-cephfs               rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   72m

5.3 Creating a Bucket

The official default YAML below creates an object storage bucket.
[root@k8smaster01 ceph]# kubectl create -f object-bucket-claim-delete.yaml
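object-bucket-claim-delete.yaml is roughly the following (an abridged sketch of the official example; the claim requests a bucket through the rook-ceph-delete-bucket StorageClass created above):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket
```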
To be continued...

