Appendix 013. Kubernetes Persistent Storage: Deploying Rook


1 Rook Overview

1.1 Introduction to Ceph

Ceph is a highly scalable distributed storage solution that provides object, file and block storage. On every storage node you will find a filesystem holding Ceph's storage objects and a Ceph OSD (Object Storage Daemon) process; the cluster also runs Ceph MON (monitor) daemons, which keep the Ceph cluster highly available.
For more on Ceph, see: https://www.cnblogs.com/itzgr/category/1382602.html

1.2 Introduction to Rook

Rook is an open-source cloud-native storage orchestrator: it provides the platform, framework and support for a variety of storage solutions to integrate natively with cloud-native environments. It currently focuses on file, block and object storage services for cloud-native environments, turning them into self-managing, self-scaling and self-healing distributed storage services.
Rook automates deployment, bootstrapping, configuration, provisioning, scaling up and down, upgrades, migration, disaster recovery, monitoring and resource management. To do all of this, Rook relies on the underlying container orchestration platform, for example Kubernetes or CoreOS.
Rook currently supports building out Ceph, NFS, Minio Object Store, EdgeFS, Cassandra and CockroachDB storage.
How Rook works:
  • Rook provides a volume plugin that extends the Kubernetes storage system, so that Pods, via the Kubelet, can mount block devices and filesystems managed by Rook.
  • The Rook Operator starts and monitors the whole underlying storage system (Ceph pods, Ceph OSDs, and so on) and also manages the CRDs, object stores and filesystems.
  • The Rook Agent runs as a pod on every Kubernetes node. Each agent pod configures a Flexvolume driver that integrates with the Kubernetes volume framework; node-local operations such as attaching storage devices, mounting, formatting and removing storage are handled by this agent.
See the official sites for more details:
https://rook.io
https://ceph.com/

1.3 Rook Architecture

The Rook architecture is shown below:
(figure: Rook architecture diagram)
The architecture of Rook integrated with Kubernetes:
(figure: Rook-on-Kubernetes architecture diagram)

2 Rook Deployment

2.1 Planning

Tip: this lab does not cover deploying Kubernetes itself; for that, see "Appendix 012. Deploying a Highly Available Kubernetes Cluster with Kubeadm".
Host          IP             Disk    Notes
k8smaster01   172.24.8.71            Kubernetes master node
k8smaster02   172.24.8.72            Kubernetes master node
k8smaster03   172.24.8.73            Kubernetes master node
k8snode01     172.24.8.74    sdb     Kubernetes node, Ceph node
k8snode02     172.24.8.75    sdb     Kubernetes node, Ceph node
k8snode03     172.24.8.76    sdb     Kubernetes node, Ceph node

Raw disk planning:
Node    k8snode01   k8snode02   k8snode03
Disk    sdb         sdb         sdb

2.2 Obtain the YAML

[root@k8smaster01 ~]# git clone https://github.com/rook/rook.git

2.3 Deploy the Rook Operator

This lab uses only the three nodes k8snode01 through k8snode03 for storage, so apply the following taints and labels first:
[root@k8smaster01 ceph]# kubectl taint node k8smaster01 node-role.kubernetes.io/master="":NoSchedule
[root@k8smaster01 ceph]# kubectl taint node k8smaster02 node-role.kubernetes.io/master="":NoSchedule
[root@k8smaster01 ceph]# kubectl taint node k8smaster03 node-role.kubernetes.io/master="":NoSchedule
[root@k8smaster01 ceph]# kubectl label nodes {k8snode01,k8snode02,k8snode03} ceph-osd=enabled
[root@k8smaster01 ceph]# kubectl label nodes {k8snode01,k8snode02,k8snode03} ceph-mon=enabled
[root@k8smaster01 ceph]# kubectl label nodes k8snode01 ceph-mgr=enabled
Tip: in the current Rook release, only a single mgr instance is supported, hence the single ceph-mgr label.
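A quick sanity check that the taints and labels are in place before continuing (plain kubectl, nothing Rook-specific):
[root@k8smaster01 ceph]# kubectl describe nodes k8smaster01 k8smaster02 k8smaster03 | grep Taints
[root@k8smaster01 ceph]# kubectl get nodes -l ceph-osd=enabled
[root@k8smaster01 ceph]# kubectl get nodes -l ceph-mon=enabled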

[root@k8smaster01 ~]# cd /root/rook/cluster/examples/kubernetes/ceph/
[root@k8smaster01 ceph]# kubectl create -f common.yaml
[root@k8smaster01 ceph]# kubectl create -f operator.yaml
Explanation: the above creates the required base resources (such as serviceaccounts), and rook-ceph-operator then starts a rook-ceph-agent and a rook-discover pod on every node.
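Before deploying the cluster, it is worth waiting until the operator, agent and discover pods are all Running; a simple check:
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pods -o wide
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pods | grep -E 'operator|agent|discover'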

2.4 Configure the cluster

[root@k8smaster01 ceph]# vi cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4-20190917
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  placement:				#node affinity/tolerations that pin the Ceph daemons to the intended storage nodes
#    all:
#      nodeAffinity:
#        requiredDuringSchedulingIgnoredDuringExecution:
#          nodeSelectorTerms:
#          - matchExpressions:
#            - key: role
#              operator: In
#              values:
#              - storage-node
#      tolerations:
#      - key: storage-node
#        operator: Exists
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-mon
        operator: Exists
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-osd
        operator: Exists
    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mgr
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-mgr
        operator: Exists
  annotations:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: false			#do not consume every node
    useAllDevices: false		#do not consume every device
    deviceFilter: sdb
    config:
        metadataDevice:
        databaseSizeMB: "1024"
        journalSizeMB: "1024"
    nodes:
    - name: "k8snode01"			#storage node hostname
      config:
        storeType: bluestore	#bluestore for raw (unformatted) disks
      devices:
      - name: "sdb"			    #use disk sdb
    - name: "k8snode02"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
    - name: "k8snode03"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
Tip: for more CephCluster CRD options see https://github.com/rook/rook/blob/master/Documentation/ceph-cluster-crd.md and https://blog.gmem.cc/rook-based-k8s-storage-solution.

2.5 Pull the images

Since the upstream registries may be unreachable from within mainland China, it is recommended to pull the following images in advance:
docker pull rook/ceph:master
docker pull quay.azk8s.cn/cephcsi/cephcsi:v1.2.2
docker pull quay.azk8s.cn/k8scsi/csi-node-driver-registrar:v1.1.0
docker pull quay.azk8s.cn/k8scsi/csi-provisioner:v1.4.0
docker pull quay.azk8s.cn/k8scsi/csi-attacher:v1.2.0
docker pull quay.azk8s.cn/k8scsi/csi-snapshotter:v1.2.2

docker tag quay.azk8s.cn/cephcsi/cephcsi:v1.2.2 quay.io/cephcsi/cephcsi:v1.2.2
docker tag quay.azk8s.cn/k8scsi/csi-node-driver-registrar:v1.1.0 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
docker tag quay.azk8s.cn/k8scsi/csi-provisioner:v1.4.0 quay.io/k8scsi/csi-provisioner:v1.4.0
docker tag quay.azk8s.cn/k8scsi/csi-attacher:v1.2.0 quay.io/k8scsi/csi-attacher:v1.2.0
docker tag quay.azk8s.cn/k8scsi/csi-snapshotter:v1.2.2 quay.io/k8scsi/csi-snapshotter:v1.2.2


docker rmi quay.azk8s.cn/cephcsi/cephcsi:v1.2.2
docker rmi quay.azk8s.cn/k8scsi/csi-node-driver-registrar:v1.1.0
docker rmi quay.azk8s.cn/k8scsi/csi-provisioner:v1.4.0
docker rmi quay.azk8s.cn/k8scsi/csi-attacher:v1.2.0
docker rmi quay.azk8s.cn/k8scsi/csi-snapshotter:v1.2.2
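The pull/tag/rmi sequence above can also be scripted; the sketch below assumes the same quay.azk8s.cn mirror and image list, and would be run on every node that will host Ceph/CSI pods:
#!/bin/bash
# Sketch: mirror the CSI images through quay.azk8s.cn, retag them as quay.io, then drop the mirror tags.
MIRROR=quay.azk8s.cn
IMAGES="cephcsi/cephcsi:v1.2.2 k8scsi/csi-node-driver-registrar:v1.1.0 k8scsi/csi-provisioner:v1.4.0 k8scsi/csi-attacher:v1.2.0 k8scsi/csi-snapshotter:v1.2.2"
docker pull rook/ceph:master
for img in ${IMAGES}; do
  docker pull ${MIRROR}/${img}
  docker tag ${MIRROR}/${img} quay.io/${img}
  docker rmi ${MIRROR}/${img}
done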

2.6 Deploy the cluster

[root@k8smaster01 ceph]# kubectl create -f cluster.yaml
[root@k8smaster01 ceph]# kubectl logs -f -n rook-ceph rook-ceph-operator-cb47c46bc-pszfh #follow the deployment log
[root@k8smaster01 ceph]# kubectl get pods -n rook-ceph -o wide #this takes a while; some transient pods may come and go
Tip: if the deployment fails, run [root@k8smaster01 ceph]# kubectl delete -f ./ on the master node,
then run the following cleanup on every node:
rm -rf /var/lib/rook
ls /dev/mapper/ceph-*		#check for leftover device-mapper entries
dmsetup ls
dmsetup remove_all
dd if=/dev/zero of=/dev/sdb bs=512k count=1
wipefs -af /dev/sdb
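If passwordless SSH to the node hostnames is available, the same cleanup can be driven from the master in one loop (a sketch only; adjust the host list and the disk device to your environment):
for node in k8snode01 k8snode02 k8snode03; do
  ssh root@${node} 'rm -rf /var/lib/rook; \
    ls /dev/mapper/ceph-* 2>/dev/null | xargs -r -n1 dmsetup remove; \
    dd if=/dev/zero of=/dev/sdb bs=512k count=1; \
    wipefs -af /dev/sdb'
done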

2.7 Deploy the Toolbox

The toolbox is a container packaging Rook's tooling; the commands inside it are used to debug and test Rook, and ad-hoc Ceph checks are normally run from this container.
[root@k8smaster01 ceph]# kubectl create -f toolbox.yaml
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
NAME READY STATUS RESTARTS AGE
rook-ceph-tools-59b8cccb95-9rl5l 1/1 Running 0 15s

2.8 Test Rook

[root@k8smaster01 ceph]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph status #check cluster status
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph osd status
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph df
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# rados df
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph auth ls #list all Ceph keyrings
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph version
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
Tip: for more on Ceph administration see "008.RHCS - Managing a Ceph Storage Cluster". The toolbox also supports standalone ceph commands, e.g. ceph osd pool create ceph-test 512 to create a pool, but in a Rook-managed cluster it is best not to operate on the underlying Ceph directly, to avoid inconsistencies between Ceph and the Kubernetes layer above it.

2.10 Copy the key and config

For easier administration, a copy of the Ceph keyring and config can also be kept on the master node, allowing simple inspection of the Rook Ceph cluster from a host outside Kubernetes.
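If /etc/ceph does not yet exist on the master (the ceph-common package installed below would normally create it), create it first so the redirects below have somewhere to write:
[root@k8smaster01 ~]# mkdir -p /etc/ceph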
[root@k8smaster01 ~]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') cat /etc/ceph/ceph.conf > /etc/ceph/ceph.conf
[root@k8smaster01 ~]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') cat /etc/ceph/keyring > /etc/ceph/keyring

[root@k8smaster01 ceph]# tee /etc/yum.repos.d/ceph.repo <<-'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF
[root@k8smaster01 ceph]# yum -y install ceph-common ceph-fuse #install the Ceph client packages
[root@k8smaster01 ~]# ceph status
Tip: the rpm-nautilus release installed here should match the version observed in section 2.8. For a Rook Ceph cluster running on Kubernetes, managing the cluster with raw ceph commands is strongly discouraged, as it can introduce inconsistencies; consume the cluster as described in section 3, and restrict the ceph CLI to simple, read-only inspection.

3 Ceph Block Storage

3.1 Create a StorageClass

Before block storage can be provisioned, a StorageClass and a storage pool must be created. Kubernetes needs these two resources to interact with Rook and allocate persistent volumes (PVs).
[root@k8smaster01 ceph]# kubectl create -f csi/rbd/storageclass.yaml
Explanation: the manifest below creates a pool named replicapool and a StorageClass named rook-ceph-block.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
    clusterID: rook-ceph
    pool: replicapool
    imageFormat: "2"
    imageFeatures: layering
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
    csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
[root@k8smaster01 ceph]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER AGE
rook-ceph-block rook-ceph.rbd.csi.ceph.com 8m44s

3.2 Create a PVC

[root@k8smaster01 ceph]# kubectl create -f csi/rbd/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
[root@k8smaster01 ceph]# kubectl get pvc
[root@k8smaster01 ceph]# kubectl get pv
Explanation: this creates the PVC; its storageClassName is rook-ceph-block, backed by the Rook Ceph cluster.

3.3 Consume the block device

[root@k8smaster01 ceph]# vi rookpod01.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rookpod01
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-container
    image: busybox
    volumeMounts:
    - name: block-pvc
      mountPath: /var/test
    command: ['sh', '-c', 'echo "Hello World" > /var/test/data; exit 0']
  volumes:
  - name: block-pvc
    persistentVolumeClaim:
      claimName: block-pvc
[root@k8smaster01 ceph]# kubectl create -f rookpod01.yaml
[root@k8smaster01 ceph]# kubectl get pod
NAME READY STATUS RESTARTS AGE
rookpod01 0/1 Completed 0 5m35s
Explanation: this creates the pod above, which mounts the PVC created in 3.2; wait for it to complete.

3.4 Test persistence

[root@k8smaster01 ceph]# kubectl delete pods rookpod01 #delete rookpod01
[root@k8smaster01 ceph]# vi rookpod02.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rookpod02
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-container
    image: busybox
    volumeMounts:
    - name: block-pvc
      mountPath: /var/test
    command: ['sh', '-c', 'cat /var/test/data; exit 0']
  volumes:
  - name: block-pvc
    persistentVolumeClaim:
      claimName: block-pvc
[root@k8smaster01 ceph]# kubectl create -f rookpod02.yaml
[root@k8smaster01 ceph]# kubectl logs rookpod02 test-container
Hello World
Explanation: rookpod02 reuses the same PVC; reading back the data written by rookpod01 demonstrates that it persisted.
Tip: for more on Ceph block devices see "003.RHCS - Using RBD Block Storage".

4 Ceph Object Storage

4.1 Create a CephObjectStore

Before object storage can be provided, the supporting resources must be created; the official default YAML below deploys the CephObjectStore.
[root@k8smaster01 ceph]# kubectl create -f object.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    replicated:
      size: 3
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1
    placement:
    annotations:
    resources:
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -l app=rook-ceph-rgw #an rgw pod is created once deployment finishes
NAME READY STATUS RESTARTS AGE
rook-ceph-rgw-my-store-a-6bd6c797c4-7dzjr 1/1 Running 0 19s

4.2 Create a StorageClass

The official default YAML below creates the StorageClass for object storage.
[root@k8smaster01 ceph]# kubectl create -f storageclass-bucket-delete.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-delete-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  region: us-east-1
[root@k8smaster01 ceph]# kubectl get sc

4.3 Create a bucket

The official default YAML below creates a bucket claim for the object store.
[root@k8smaster01 ceph]# kubectl create -f object-bucket-claim-delete.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket
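To confirm that the claim was processed and a bucket name generated, inspect the ObjectBucketClaim together with the ConfigMap and Secret of the same name that are created in the default namespace (this assumes the OBC CRDs were installed by common.yaml in 2.3):
[root@k8smaster01 ceph]# kubectl -n default get objectbucketclaim ceph-delete-bucket
[root@k8smaster01 ceph]# kubectl -n default get cm,secret ceph-delete-bucket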

4.4 Configure object-store access

[root@k8smaster01 ceph]# kubectl -n default get cm ceph-delete-bucket -o yaml | grep BUCKET_HOST | awk '{print $2}'
rook-ceph-rgw-my-store.rook-ceph
[root@k8smaster01 ceph]# kubectl -n rook-ceph get svc rook-ceph-rgw-my-store
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-rgw-my-store ClusterIP 10.102.165.187 <none> 80/TCP 7m34s
[root@k8smaster01 ceph]# export AWS_HOST=$(kubectl -n default get cm ceph-delete-bucket -o yaml | grep BUCKET_HOST | awk '{print $2}')
[root@k8smaster01 ceph]# export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-delete-bucket -o yaml | grep AWS_ACCESS_KEY_ID | awk '{print $2}' | base64 --decode)
[root@k8smaster01 ceph]# export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-delete-bucket -o yaml | grep AWS_SECRET_ACCESS_KEY | awk '{print $2}' | base64 --decode)
[root@k8smaster01 ceph]# export AWS_ENDPOINT='10.102.165.187'
[root@k8smaster01 ceph]# echo '10.102.165.187 rook-ceph-rgw-my-store.rook-ceph' >> /etc/hosts

4.5 Test access

[root@k8smaster01 ceph]# radosgw-admin bucket list #list buckets
[root@k8smaster01 ceph]# yum --assumeyes install s3cmd #install the S3 client
[root@k8smaster01 ceph]# echo "Hello Rook" > /tmp/rookObj #create a test file
[root@k8smaster01 ceph]# s3cmd put /tmp/rookObj --no-ssl --host=${AWS_HOST} --host-bucket= s3://ceph-bkt-377bf96f-aea8-4838-82bc-2cb2c16cccfb/test.txt #upload it to the bucket
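The same connection flags can be reused for a round-trip check, listing the bucket and reading the object back (bucket name as generated above; this assumes s3cmd picks up the AWS_* variables exported in 4.4, as the upload above does):
[root@k8smaster01 ceph]# s3cmd ls --no-ssl --host=${AWS_HOST} --host-bucket= s3://ceph-bkt-377bf96f-aea8-4838-82bc-2cb2c16cccfb
[root@k8smaster01 ceph]# s3cmd get s3://ceph-bkt-377bf96f-aea8-4838-82bc-2cb2c16cccfb/test.txt /tmp/rookObj.get --no-ssl --host=${AWS_HOST} --host-bucket=
[root@k8smaster01 ceph]# cat /tmp/rookObj.get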
Tip: for more on Rook object storage, such as creating users, see: https://rook.io/docs/rook/v1.1/ceph-object.html.

5 Ceph File Storage

5.1 Create a CephFilesystem

CephFS support is not deployed by default; the official default YAML below creates the filesystem.
[root@k8smaster01 ceph]# kubectl create -f filesystem.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
  preservePoolsOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
       podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - rook-ceph-mds
            topologyKey: kubernetes.io/hostname
    annotations:
    resources:
[root@k8smaster01 ceph]# kubectl get cephfilesystems.ceph.rook.io -n rook-ceph
NAME ACTIVEMDS AGE
myfs 1 27s

5.2 Create a StorageClass

[root@k8smaster01 ceph]# kubectl create -f csi/cephfs/storageclass.yaml
The official default YAML below creates the StorageClass for file storage.
[root@k8smaster01 ceph]# vi csi/cephfs/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
mountOptions:
[root@k8smaster01 ceph]# kubectl get sc
NAME PROVISIONER AGE
csi-cephfs rook-ceph.cephfs.csi.ceph.com 10m

5.3 Create a PVC

[root@k8smaster01 ceph]# vi rookpvc03.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  storageClassName: csi-cephfs
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
[root@k8smaster01 ceph]# kubectl create -f rookpvc03.yaml
[root@k8smaster01 ceph]# kubectl get pv
[root@k8smaster01 ceph]# kubectl get pvc

5.4 Consume the PVC

[root@k8smaster01 ceph]# vi rookpod03.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csicephfs-demo-pod
spec:
  containers:
   - name: web-server
     image: nginx
     volumeMounts:
       - name: mypvc
         mountPath: /var/lib/www/html
  volumes:
   - name: mypvc
     persistentVolumeClaim:
       claimName: cephfs-pvc
       readOnly: false
[root@k8smaster01 ceph]# kubectl create -f rookpod03.yaml
[root@k8smaster01 ceph]# kubectl get pods
NAME READY STATUS RESTARTS AGE
csicephfs-demo-pod 1/1 Running 0 24s
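A quick sanity check that the CephFS volume is writable from inside the pod (using the pod and mount path defined above):
[root@k8smaster01 ceph]# kubectl exec csicephfs-demo-pod -- sh -c 'echo "Hello CephFS" > /var/lib/www/html/index.html'
[root@k8smaster01 ceph]# kubectl exec csicephfs-demo-pod -- cat /var/lib/www/html/index.html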

6 Configure the Dashboard

6.1 Deploy a NodePort Service

The dashboard was already enabled in step 2.4, but it is only exposed via a ClusterIP; the official default YAML below exposes it externally through a NodePort service.
[root@k8smaster01 ceph]# kubectl create -f dashboard-external-https.yaml
[root@k8smaster01 ceph]# vi dashboard-external-https.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
[root@k8smaster01 ceph]# kubectl get svc -n rook-ceph
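The NodePort is assigned from the cluster's NodePort range, so look it up before opening the browser (31097 below is simply the port observed in this environment):
[root@k8smaster01 ceph]# kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https -o jsonpath='{.spec.ports[0].nodePort}'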

6.2 Verify

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath='{.data.password}' | base64 --decode #retrieve the initial password
Browse to: https://172.24.8.71:31097
Account: admin; password: as retrieved above.

7 Cluster Management

7.1 Modify the configuration

The Ceph configuration parameters are generated when the Cluster is created. To change them after deployment, proceed as follows:
[root@k8smaster01 ceph]# kubectl -n rook-ceph get configmap rook-config-override -o yaml #view the current override
[root@k8snode02 ~]# cat /var/lib/rook/rook-ceph/rook-ceph.config #the rendered config can also be inspected on any node
[root@k8smaster01 ceph]# kubectl -n rook-ceph edit configmap rook-config-override -o yaml #edit the override
……
apiVersion: v1
data:
  config: |
    [global]
    osd pool default size = 2
……
Then restart the Ceph components one by one:
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mgr-a-5699bb7984-kpxgp
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mon-a-85698dfff9-w5l8c
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mgr-a-d58847d5-dj62p
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mon-b-76559bf966-652nl
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mon-c-74dd86589d-s84cz
Note: delete the ceph-mon and ceph-osd pods strictly one by one, waiting for the cluster to return to HEALTH_OK before deleting the next.
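The one-by-one restart can be scripted; the sketch below assumes the toolbox pod from 2.7 is running and that the mon pods carry the default app=rook-ceph-mon label:
TOOLS=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}')
for pod in $(kubectl -n rook-ceph get pod -l app=rook-ceph-mon -o jsonpath='{.items[*].metadata.name}'); do
  kubectl -n rook-ceph delete pod ${pod}
  # wait for the cluster to return to HEALTH_OK before touching the next mon
  until kubectl -n rook-ceph exec ${TOOLS} -- ceph health | grep -q HEALTH_OK; do sleep 10; done
done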
Tip: for other Rook configuration parameters see: https://rook.io/docs/rook/v1.1/.

7.2 Create a Pool

Pools in a Rook Ceph cluster should be created the Kubernetes way, rather than with ceph commands inside the toolbox.
The official default YAML below creates a pool.
[root@k8smaster01 ceph]# kubectl create -f pool.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool2
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  annotations:
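To verify the pool, query the CRD; a read-only check from the toolbox (see 2.7) also works:
[root@k8smaster01 ceph]# kubectl -n rook-ceph get cephblockpools.ceph.rook.io
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph osd pool ls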

7.3 Delete a Pool

[root@k8smaster01 ceph]# kubectl delete -f pool.yaml
Tip: for more on pool management, such as erasure-coded pools, see: https://rook.io/docs/rook/v1.1/ceph-pool-crd.html.

7.4 Add an OSD node

This step adds k8smaster01's sdb disk as an additional OSD.
[root@k8smaster01 ceph]# kubectl taint node k8smaster01 node-role.kubernetes.io/master- #allow pods to be scheduled on the master again
[root@k8smaster01 ceph]# kubectl label nodes k8smaster01 ceph-osd=enabled #set the label
[root@k8smaster01 ceph]# vi cluster.yaml #append an entry for k8smaster01
……
    - name: "k8smaster01"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
……
[root@k8smaster01 ceph]# kubectl apply -f cluster.yaml
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -o wide -w
Then run ceph osd tree in the toolbox (see 2.8) to confirm that the new OSD has joined the cluster.

7.5 Remove an OSD node

[root@k8smaster01 ceph]# kubectl label nodes k8smaster01 ceph-osd- #remove the label

[root@k8smaster01 ceph]# vi cluster.yaml #remove the following k8smaster01 entry
……
    - name: "k8smaster01"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
……
[root@k8smaster01 ceph]# kubectl apply -f cluster.yaml
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -o wide -w
[root@k8smaster01 ceph]# rm -rf /var/lib/rook #clean up the Rook data directory on the removed node (k8smaster01)

7.6 Delete the Cluster

For the full, graceful teardown procedure see: https://github.com/rook/rook/blob/master/Documentation/ceph-teardown.md

7.7 Upgrade Rook

Reference: http://www.yangguanjun.com/2018/12/28/rook-ceph-practice-part2/
More official documentation: https://rook.github.io/docs/rook/v1.1/
Recommended posts: http://www.yangguanjun.com/archives/
https://sealyun.com/post/rook/

