Deploying a Ceph cluster with Rook on a highly available Kubernetes cluster and creating a PVC


Rook Overview

Introduction to Ceph
Ceph is a highly scalable distributed storage solution that provides object, file, and block storage. Each storage node runs a filesystem holding Ceph's storage objects along with a Ceph OSD (Object Storage Daemon) process. The cluster also runs Ceph MON (monitor) daemons, which keep the Ceph cluster highly available.

Introduction to Rook
Rook is an open-source cloud-native storage orchestrator: it provides the platform, framework, and support for a variety of storage solutions to integrate natively with cloud-native environments. It currently focuses on file, block, and object storage services for cloud-native environments, and implements a self-managing, self-scaling, self-healing distributed storage service.
Rook automates deployment, bootstrapping, configuration, provisioning, scaling up and down, upgrades, migration, disaster recovery, monitoring, and resource management. To do all of this, Rook relies on the underlying container orchestration platform, such as Kubernetes.
Rook currently supports deploying Ceph, NFS, Minio Object Store, EdgeFS, Cassandra, and CockroachDB storage.
How Rook works:
Rook provides volume plugins that extend the Kubernetes storage system, so that pods (via the kubelet) can mount block devices and filesystems managed by Rook.
The Rook Operator starts and monitors the whole underlying storage system (for example the Ceph pods and Ceph OSDs) and also manages the CRDs, object stores, and filesystems.
A Rook Agent runs as a pod on every Kubernetes node. Each agent pod is configured with a FlexVolume driver that integrates with the Kubernetes volume framework; node-local operations such as attaching storage devices, mounting, formatting, and removing storage are carried out by this agent.

Rook Architecture

1. Add one disk to each of the three worker nodes, then run the following commands so the kernel detects the new disk (no reboot required)

echo "- - -" >/sys/class/scsi_host/host0/scan
echo "- - -" >/sys/class/scsi_host/host1/scan
echo "- - -" >/sys/class/scsi_host/host2/scan

2. Clone the Rook repository

git clone https://github.com/rook/rook.git

3. Switch to the desired release branch

cd rook
git branch -a
git checkout -b release-1.1 remotes/origin/release-1.1
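
The example manifests used in the steps below live in the Ceph example directory of the repository (in the release-1.1 branch this should be cluster/examples/kubernetes/ceph):

cd cluster/examples/kubernetes/ceph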

4. Label the nodes that will provide storage (run on master1); these labels are used by the nodeAffinity rules in cluster.yaml below

kubectl label nodes {node1,node2,node3} ceph-osd=enabled
kubectl label nodes {node1,node2,node3} ceph-mon=enabled
kubectl label nodes node1 ceph-mgr=enabled
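
To confirm the labels were applied, list the nodes filtered by each label:

kubectl get nodes -l ceph-osd=enabled
kubectl get nodes -l ceph-mon=enabled
kubectl get nodes -l ceph-mgr=enabled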

5. From the Ceph example directory of the repository, apply common.yaml and operator.yaml

[root@master1 ceph]# kubectl apply -f common.yaml
namespace/rook-ceph created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt-rules created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global-rules created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster-rules created
clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-mgr created
serviceaccount/rook-ceph-cmd-reporter created
role.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system-rules created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
podsecuritypolicy.policy/rook-privileged created
clusterrole.rbac.authorization.k8s.io/psp:rook created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created
serviceaccount/rook-csi-cephfs-plugin-sa created
serviceaccount/rook-csi-cephfs-provisioner-sa created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin-rules created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner-rules created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
serviceaccount/rook-csi-rbd-plugin-sa created
serviceaccount/rook-csi-rbd-provisioner-sa created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin-rules created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner-rules created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
[root@master1 ceph]# kubectl apply -f operator.yaml
deployment.apps/rook-ceph-operator created

6. Check the pod status in the rook-ceph namespace

[root@master1 ceph]# kubectl get po -n rook-ceph
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-77c5668c9b-k7ml6   1/1     Running   0          12s
rook-discover-dmd2m                   1/1     Running   0          8s
rook-discover-n48st                   1/1     Running   0          8s
rook-discover-sgf7n                   1/1     Running   0          8s
rook-discover-vb2mw                   1/1     Running   0          8s
rook-discover-wzhkd                   1/1     Running   0          8s
rook-discover-zsnp8                   1/1     Running   0          8s
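
Optionally, wait for the operator pod to report Ready before continuing (a convenience command, assuming the operator pod carries the usual app=rook-ceph-operator label):

kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s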

7. Configure cluster.yaml
The manifest below pins the mon, osd, and mgr daemons to the labeled nodes and uses only /dev/sdb on node1, node2, and node3 as OSD devices.

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4-20190917
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  placement:
 #    all:
 #      nodeAffinity:
 #        requiredDuringSchedulingIgnoredDuringExecution:
 #          nodeSelectorTerms:
 #          - matchExpressions:
 #            - key: role
 #              operator: In
 #              values:
 #              - storage-node
 #      podAffinity:
 #      podAntiAffinity:
 #      tolerations:
 #      - key: storage-node
 #        operator: Exists
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-mon
        operator: Exists
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-osd
        operator: Exists
    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mgr
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-mgr
        operator: Exists   
  annotations:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage: 
    useAllNodes: false                                     # do not use every node
    useAllDevices: false                                   # do not use every device
    deviceFilter: sdb
    config:
        metadataDevice: 
        databaseSizeMB: "1024" 
        journalSizeMB: "1024"  
    nodes:
    - name: "node1"                                        #指定存儲節點主機
      config:
        storeType: bluestore                               # bluestore works directly on the raw device
      devices:                                             # use /dev/sdb on this node
      - name: "sdb"
    - name: "node2"
      config:
        storeType: bluestore
      devices: 
      - name: "sdb"
    - name: "node3"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

8. Apply cluster.yaml

[root@master1 ceph]# kubectl apply -f cluster.yaml
cephcluster.ceph.rook.io/rook-ceph created
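
OSD creation takes a few minutes. While the cluster comes up, you can watch the OSD prepare jobs finish (assuming the usual app=rook-ceph-osd-prepare label) before checking the full pod list:

kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare -w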

9. Check the status of the cluster pods

[root@master1 ceph]# kubectl get po -n rook-ceph
NAME                                            READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-9qw89                          3/3     Running   0          2m53s
csi-cephfsplugin-j9c5w                          3/3     Running   0          2m53s
csi-cephfsplugin-kln7b                          3/3     Running   0          2m53s
csi-cephfsplugin-lpzz5                          3/3     Running   0          2m53s
csi-cephfsplugin-n6xqw                          3/3     Running   0          2m53s
csi-cephfsplugin-provisioner-5f5fb76db9-cbgtq   4/4     Running   0          2m52s
csi-cephfsplugin-provisioner-5f5fb76db9-jb5s5   4/4     Running   0          2m52s
csi-cephfsplugin-tmpqd                          3/3     Running   0          2m53s
csi-rbdplugin-2cdt6                             3/3     Running   0          2m54s
csi-rbdplugin-48l7q                             3/3     Running   0          2m54s
csi-rbdplugin-c7zmx                             3/3     Running   0          2m54s
csi-rbdplugin-cjtt6                             3/3     Running   0          2m54s
csi-rbdplugin-cqjgw                             3/3     Running   0          2m54s
csi-rbdplugin-ljhzn                             3/3     Running   0          2m54s
csi-rbdplugin-provisioner-8c5468854-292p8       5/5     Running   0          2m54s
csi-rbdplugin-provisioner-8c5468854-qqczh       5/5     Running   0          2m54s
rook-ceph-mgr-a-854dd44d4-g848k                 1/1     Running   0          13s
rook-ceph-mon-a-7c77f495cb-bqsnm                1/1     Running   0          73s
rook-ceph-mon-b-78f7974649-8k854                1/1     Running   0          63s
rook-ceph-mon-c-f8b59c975-27qhj                 1/1     Running   0          38s
rook-ceph-operator-77c5668c9b-k7ml6             1/1     Running   0          5m27s
rook-discover-dmd2m                             1/1     Running   0          5m23s
rook-discover-n48st                             1/1     Running   0          5m23s
rook-discover-sgf7n                             1/1     Running   0          5m23s
rook-discover-vb2mw                             1/1     Running   0          5m23s
rook-discover-wzhkd                             1/1     Running   0          5m23s
rook-discover-zsnp8                             1/1     Running   0          5m23s

10. Install the Toolbox
The toolbox is a Rook utility container; the commands inside it are used to debug and test Rook, and ad-hoc Ceph operations are usually run from this container.

kubectl apply -f toolbox.yaml

[root@master1 ceph]# kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-6bd79cf569-2lthp   1/1     Running   0          76s

11. Test Rook

[root@master1 ceph]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
sh-4.2# ceph status
  cluster:
    id:     ee4579e8-6f9b-48f0-8175-182a15009bf8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2m)
    mgr: a(active, since 77s)
    osd: 3 osds: 3 up (since 3s), 3 in (since 3s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 24 GiB / 27 GiB avail
    pgs:

sh-4.2# ceph osd status
+----+-------+-------+-------+--------+---------+--------+---------+-----------+
| id |  host |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+-------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | node1 | 1025M | 8190M |    0   |     0   |    0   |     0   | exists,up |
| 1  | node2 | 1025M | 8190M |    0   |     0   |    0   |     0   | exists,up |
| 2  | node3 | 1025M | 8190M |    0   |     0   |    0   |     0   | exists,up |
+----+-------+-------+-------+--------+---------+--------+---------+-----------+
sh-4.2# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR

total_objects    0
total_used       3.0 GiB
total_avail      24 GiB
total_space      27 GiB
sh-4.2# ceph auth ls
installed auth entries:

osd.0
        key: AQBDv3lhAPS/AhAAGzigeHwB9BFsWQuiYXTXPQ==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQBEv3lh/WaMDBAANZJAS8lR/FVFzA7MUSzi5w==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQBDv3lhm2L0LRAAiU1O47F/neFUgl4pPutN+w==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQB6vnlha/CuIBAAsfNW7cEoWWpkOMwCXpQM5g==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQDFvnlhD8UqLBAANV2pKTeNekZz6yoxKc9uVw==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: AQDFvnlho9QqLBAAtwrmj6pQDIOhMsNFaP+0cw==
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: AQDFvnlhbeMqLBAA60D0vtUjKk5EqKCzlFc+mA==
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
        key: AQDFvnlh0/EqLBAAf3EUyHtPuGm8N3m7UirYUQ==
        caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
        key: AQDFvnlhfwArLBAA78d4dVfD0crjJspUzJ+UNw==
        caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
        key: AQDFvnlh3Q8rLBAAMn2rh9avCJS/r1T0x804GQ==
        caps: [mon] allow profile bootstrap-rgw
client.csi-cephfs-node
        key: AQD5vnlhx2FOLRAA+231/pqp4TK0iQtFT/6aXw==
        caps: [mds] allow rw
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
        key: AQD5vnlhKiTiDhAA+1lhupYiXT7JeMdSkcPS1g==
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
        key: AQD4vnlhgEexLRAASgO09KpvF9/OKb7K1yxS4g==
        caps: [mon] profile rbd
        caps: [osd] profile rbd
client.csi-rbd-provisioner
        key: AQD4vnlhfwy5FBAALyObdIuo4wyhiRZDrtNasw==
        caps: [mgr] allow rw
        caps: [mon] profile rbd
        caps: [osd] profile rbd
mgr.a
        key: AQD7vnlhvto/IBAAVgyCHEt57APBeLuqDrelYA==
        caps: [mds] allow *
        caps: [mon] allow *
        caps: [osd] allow *

12. Deploy the Dashboard (the Service type must be changed from ClusterIP to NodePort; it has already been changed in the manifest below.)

[root@master1 ceph]# cat dashboard-external-https.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort                 # changed from ClusterIP to expose the dashboard

[root@master1 ceph]# kubectl apply -f  dashboard-external-https.yaml
service/rook-ceph-mgr-dashboard-external-https created

13. Check the NodePort exposed for the dashboard

[root@master1 ceph]# kubectl get svc -n rook-ceph
NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics                 ClusterIP   10.100.92.5      <none>        8080/TCP,8081/TCP   14m
csi-rbdplugin-metrics                    ClusterIP   10.107.1.61      <none>        8080/TCP,8081/TCP   14m
rook-ceph-mgr                            ClusterIP   10.103.103.155   <none>        9283/TCP            10m
rook-ceph-mgr-dashboard                  ClusterIP   10.96.81.222     <none>        8443/TCP            10m
rook-ceph-mgr-dashboard-external-https   NodePort    10.100.215.186   <none>        8443:32741/TCP      105s      # exposed NodePort
rook-ceph-mon-a                          ClusterIP   10.109.2.220     <none>        6789/TCP,3300/TCP   12m
rook-ceph-mon-b                          ClusterIP   10.104.126.29    <none>        6789/TCP,3300/TCP   12m
rook-ceph-mon-c                          ClusterIP   10.100.86.71     <none>        6789/TCP,3300/TCP   11m

14. Retrieve the login password

[root@master1 ceph]# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath='{.data.password}'  |  base64 --decode
nqSw8XnSWU

Access the dashboard over https (note: https, not http):
https://<node-ip>:32741 (username: admin, password: nqSw8XnSWU)
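
If you prefer not to read the port off the service listing, the NodePort can also be fetched directly from the Service created above:

kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https -o jsonpath='{.spec.ports[0].nodePort}'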

Ceph Block Storage

15. Create a StorageClass
Before block storage can be provisioned, a StorageClass and a storage pool must be created; Kubernetes needs both resources to interact with Rook and allocate persistent volumes (PVs).
The manifest below creates a storage pool named replicapool and a StorageClass named rook-ceph-block.

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
    clusterID: rook-ceph
    pool: replicapool
    imageFormat: "2"
    imageFeatures: layering
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
    csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete

16. Apply storageclass.yaml
kubectl apply -f storageclass.yaml
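
To verify that the StorageClass exists and the pool was created, check the StorageClass and list the Ceph pools from the toolbox pod:

kubectl get storageclass rook-ceph-block
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph osd pool ls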

17. Create a PVC
The PVC below requests storage through storageClassName: rook-ceph-block, which is backed by the Rook Ceph cluster.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi

18. Apply pvc.yaml

kubectl apply -f pvc.yaml
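
To confirm the claim binds and can actually be mounted, check its status and, if desired, attach it to a throw-away test pod (a minimal sketch; the pod name and busybox image are only for illustration):

kubectl get pvc block-pvc

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: block-pvc-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
EOF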

