1、Introduction to RBD
Ceph can simultaneously provide object storage (RADOSGW), block storage (RBD), and file system storage (CephFS). RBD is short for RADOS Block Device and is one of the most commonly used storage types. An RBD block device can be mounted like a disk and supports snapshots, multiple replicas, cloning, consistency, and other features. Its data is stored in striped form across multiple OSDs in the Ceph cluster.
Striping is a technique that automatically balances I/O load across multiple physical disks: a contiguous block of data is split into many small pieces that are stored on different disks. This allows multiple processes to access different parts of the data at the same time without disk contention, and sequential access to the data gains the maximum possible degree of I/O parallelism, which yields very good performance.
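To see how striping surfaces at the RBD level, rbd create accepts --stripe-unit and --stripe-count options (striping v2). This is only an illustrative sketch: the image name stripe-demo is made up, and the pool name anticipates the myrbd1 pool created in section 2.1.
# Illustrative only: stripe data in 64K units across 4 objects at a time
rbd create stripe-demo --size 5G --pool myrbd1 --stripe-unit 64K --stripe-count 4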
2、Configuring RBD on the Ceph side
2.1、Create a storage pool
# Syntax: ceph osd pool create <pool_name> <pg_num> <pgp_num>
# pgp determines how the data in the PGs is combined for placement; pgp_num is usually equal to pg_num
ceph osd pool create myrbd1 64 64
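If you want to confirm the values after the pool has been created, the following read-only checks can be used (optional):
ceph osd pool get myrbd1 pg_num
ceph osd pool get myrbd1 pgp_num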
2.2、Enable the rbd application on the pool
ceph osd pool application enable myrbd1 rbd
2.3、Initialize the pool for RBD
rbd pool init -p myrbd1
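Before creating images it can be worth verifying that the pool exists and is tagged for the rbd application; an optional check:
ceph osd lspools
ceph osd pool application get myrbd1    # expected to report the rbd application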
2.4、Create an image (img)
An RBD pool cannot be used as a block device directly: you first create images in it as needed and use those images as block devices. The rbd command is used to create, list, and delete block-device images, as well as to clone images, create snapshots, roll an image back to a snapshot, view snapshots, and perform other management operations. For example, the commands below create the images used in the following steps.
# Create an RBD image named mying1 in the myrbd1 pool, 5G in size
rbd create mying1 --size 5G --pool myrbd1
# The later steps will use myimg2. Because the CentOS client kernel is relatively old and cannot map images with newer features, only some features are enabled.
# Features other than layering require a newer kernel.
rbd create myimg2 --size 3G --pool myrbd1 --image-format 2 --image-feature layering
2.5、List all images in the pool
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls --pool myrbd1
mying1
mying2
2.6、Show information for a specific image
cephadmin@ceph-deploy:~/ceph-cluster$ rbd --image mying1 --pool myrbd1 info
rbd image 'mying1':
        size 5 GiB in 1280 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 14be30954ea4
        block_name_prefix: rbd_data.14be30954ea4
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Sun Apr  3 22:54:06 2022
        access_timestamp: Sun Apr  3 22:54:06 2022
        modify_timestamp: Sun Apr  3 22:54:06 2022
cephadmin@ceph-deploy:~/ceph-cluster$ rbd --image myimg2 --pool myrbd1 info
rbd image 'myimg2':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 5f0f6fb9ea1d
        block_name_prefix: rbd_data.5f0f6fb9ea1d
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Sun Apr  3 22:54:11 2022
        access_timestamp: Sun Apr  3 22:54:11 2022
        modify_timestamp: Sun Apr  3 22:54:11 2022
3、Using RBD from a client
3.1、Current state of the Ceph cluster
cephadmin@ceph-deploy:~/ceph-cluster$ ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    120 GiB  120 GiB  294 MiB  294 MiB   0.24
TOTAL  120 GiB  120 GiB  294 MiB  294 MiB   0.24

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   1    0 B      0        0 B      0      38 GiB
.rgw.root              2   32   1.3 KiB  4        48 KiB   0      38 GiB
default.rgw.log        3   32   3.6 KiB  177      408 KiB  0      38 GiB
default.rgw.control    4   32   0 B      8        0 B      0      38 GiB
default.rgw.meta       5   8    0 B      0        0 B      0      38 GiB
cephfs-metadata        6   32   4.8 KiB  60       240 KiB  0      38 GiB
cephfs-data            7   64   0 B      0        0 B      0      38 GiB
myrbd1                 8   64   405 B    7        48 KiB   0      38 GiB
3.2、Configure the Ceph APT repository on the client
wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main" >> /etc/apt/sources.list
apt update
3.3、Install ceph-common
apt install -y ceph-common
3.4、Copy the cluster config and admin keyring to the client
cephadmin@ceph-deploy:~/ceph-cluster$ sudo scp ceph.conf ceph.client.admin.keyring root@192.168.1.180:/etc/ceph/
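A quick sanity check on the client before mapping anything; this simply assumes the ceph.conf and admin keyring copied above are in place:
# Run on the client; uses /etc/ceph/ceph.conf and ceph.client.admin.keyring
ceph -s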
3.5、Map the image on the client
root@ceph-client:~# rbd --pool myrbd1 map myimg2
/dev/rbd0
root@ceph-client:~# fdisk -l | grep rbd0
Disk /dev/rbd0: 3 GiB, 3221225472 bytes, 6291456 sectors
3.6、Format and mount the RBD device
root@ceph-client:~# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=9, agsize=97280 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=786432, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@ceph-client:~# mount /dev/rbd0 /mnt
root@ceph-client:~# df -TH | grep rbd0
/dev/rbd0      xfs       3.3G   38M  3.2G   2% /mnt
3.7、Create a 300M test file
root@ceph-client:~# dd if=/dev/zero of=/mnt/rbd_test bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB, 300 MiB) copied, 4.83789 s, 65.0 MB/s
root@ceph-client:~# ll -h /mnt/rbd_test
-rw-r--r-- 1 root root 300M Apr  3 23:04 /mnt/rbd_test
3.8、Ceph usage after writing the file
cephadmin@ceph-deploy:~/ceph-cluster$ ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    120 GiB  119 GiB  1.2 GiB  1.2 GiB   0.98
TOTAL  120 GiB  119 GiB  1.2 GiB  1.2 GiB   0.98

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   1    0 B      0        0 B      0      37 GiB
.rgw.root              2   32   1.3 KiB  4        48 KiB   0      37 GiB
default.rgw.log        3   32   3.6 KiB  177      408 KiB  0      37 GiB
default.rgw.control    4   32   0 B      8        0 B      0      37 GiB
default.rgw.meta       5   8    0 B      0        0 B      0      37 GiB
cephfs-metadata        6   32   4.8 KiB  60       240 KiB  0      37 GiB
cephfs-data            7   64   0 B      0        0 B      0      37 GiB
myrbd1                 8   64   310 MiB  95       931 MiB  0.80   37 GiB
4、Mounting RBD with a non-admin user
4.1、Create a regular account and its permissions on ceph-deploy
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get-or-create client.jack mon 'allow r' osd 'allow rwx pool=myrbd1'
[client.jack]
        key = AQDdt0liUbpCHRAA6EaxRQ0b+1KKEMbxm7T7PA==
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.jack
[client.jack]
        key = AQDdt0liUbpCHRAA6EaxRQ0b+1KKEMbxm7T7PA==
        caps mon = "allow r"
        caps osd = "allow rwx pool=myrbd1"
exported keyring for client.jack
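If the permissions of client.jack ever need to be adjusted after the account exists, ceph auth caps replaces the capability list in place. The caps below simply repeat the ones granted above and are shown only as a sketch:
ceph auth caps client.jack mon 'allow r' osd 'allow rwx pool=myrbd1'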
4.2、Export the keyring for client.jack
# Create an empty keyring file
cephadmin@ceph-deploy:~/ceph-cluster$ sudo ceph-authtool --create-keyring ceph.client.jack.keyring
creating ceph.client.jack.keyring
# Export the client.jack credentials into the keyring file
cephadmin@ceph-deploy:~/ceph-cluster$ sudo ceph auth get client.jack -o ceph.client.jack.keyring
exported keyring for client.jack
cephadmin@ceph-deploy:~/ceph-cluster$ sudo ceph-authtool -l ceph.client.jack.keyring
[client.jack]
        key = AQDdt0liUbpCHRAA6EaxRQ0b+1KKEMbxm7T7PA==
        caps mon = "allow r"
        caps osd = "allow rwx pool=myrbd1"
4.3、Install ceph-common on the client
# 1. Add the Ceph APT repository
root@client:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
root@client:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main" >> /etc/apt/sources.list
root@client:~# apt update
# 2. Install ceph-common
root@client:~# apt install ceph-common
4.4、Copy ceph.conf and the client.jack keyring to the client
# Copy ceph.conf and ceph.client.jack.keyring to the client
cephadmin@ceph-deploy:/etc/ceph$ sudo scp ceph.conf ceph.client.jack.keyring 192.168.1.180:/etc/ceph/
# Verify on the client
root@client:/etc/ceph# ls
ceph.client.admin.keyring  ceph.client.jack.keyring  ceph.conf  rbdmap
# Verify the permissions
root@client:/etc/ceph# ceph --user jack -s
  cluster:
    id:     f0e7c394-989b-4803-86c3-5557ae25e814
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03 (age 10h)
    mgr: ceph-mgr01(active, since 10h), standbys: ceph-mgr02
    osd: 16 osds: 11 up (since 10h), 11 in (since 10h)

  data:
    pools:   2 pools, 65 pgs
    objects: 94 objects, 310 MiB
    usage:   2.0 GiB used, 218 GiB / 220 GiB avail
    pgs:     65 active+clean
4.5、Map the image on the client as the jack user
root@client:/etc/ceph# rbd --user jack --pool myrbd1 map mying1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable myrbd1/mying1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
rbd: --user is deprecated, use --id
# The mapping failed because the client kernel is too old and does not support the object-map, fast-diff, and deep-flatten features.
# Ubuntu 20.04 kernels do support these features.
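The error message itself suggests an alternative to creating a new image: disable the unsupported features on mying1 directly, as in the sketch below. The walkthrough that follows instead creates a fresh image with only the layering feature.
# Disable the features the old client kernel cannot handle (taken from the error message above)
rbd feature disable myrbd1/mying1 object-map fast-diff deep-flatten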
On ceph-deploy, create an image mying3 without the object-map, fast-diff, and deep-flatten features.
# Create image mying3 with image-format 2 and only the layering feature enabled
cephadmin@ceph-deploy:/etc/ceph$ rbd create mying3 --size 3G --pool myrbd1 --image-format 2 --image-feature layering
cephadmin@ceph-deploy:/etc/ceph$ rbd ls --pool myrbd1
mying1
mying2
mying3
Map mying3 on the client.
root@client:/etc/ceph# rbd --user jack --pool myrbd1 map mying3
/dev/rbd1
# Verify: rbd1 was mapped successfully
root@client:/etc/ceph# fdisk -l /dev/rbd1
Disk /dev/rbd1: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
# It can now be formatted and used
4.6、Format and use the RBD device
root@client:/etc/ceph# fdisk -l
Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x870c4380

Device     Boot Start       End   Sectors Size Id Type
/dev/sda1  *     2048 104855551 104853504  50G 83 Linux

Disk /dev/rbd1: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

# Format the rbd1 device
root@client:/etc/ceph# mkfs.ext4 /dev/rbd1
mke2fs 1.44.1 (24-Mar-2018)
Discarding device blocks: done
Creating filesystem with 786432 4k blocks and 196608 inodes
Filesystem UUID: dae3f414-ceae-4535-97d4-c369820f3116
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

# Mount /dev/rbd1 on /mnt
root@client:/etc/ceph# mount /dev/rbd1 /mnt
root@client:/etc/ceph# mount | grep /dev/rbd1
/dev/rbd1 on /mnt type ext4 (rw,relatime,stripe=1024,data=ordered)

# Create a 200M file in /mnt
root@client:/etc/ceph# cd /mnt
root@client:/mnt# dd if=/dev/zero of=/mnt/rbd-test bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.205969 s, 1.0 GB/s
root@client:/mnt# ll -h
total 201M
drwxr-xr-x  3 root root 4.0K Aug 23 13:06 ./
drwxr-xr-x 22 root root  326 Aug 17 09:56 ../
drwx------  2 root root  16K Aug 23 13:04 lost+found/
-rw-r--r--  1 root root 200M Aug 23 13:06 rbd-test
4.7、Kernel modules loaded on the client
root@ceph-client:/etc/ceph# lsmod | grep ceph
libceph               315392  1 rbd
libcrc32c              16384  3 xfs,raid456,libceph
4.8、Resizing an RBD image
An image can be grown; shrinking it is not recommended.
# Current size of mying3
cephadmin@ceph-deploy:/etc/ceph$ rbd ls --pool myrbd1 -l
NAME    SIZE   PARENT  FMT  PROT  LOCK
mying1  5 GiB          2
mying2  3 GiB          2
mying3  3 GiB          2
# Grow the mying3 image to 8G
cephadmin@ceph-deploy:/etc/ceph$ rbd resize --pool myrbd1 --image mying3 --size 8G
Resizing image: 100% complete...done.
cephadmin@ceph-deploy:/etc/ceph$ rbd ls --pool myrbd1 -l
NAME    SIZE   PARENT  FMT  PROT  LOCK
mying1  5 GiB          2
mying2  3 GiB          2
mying3  8 GiB          2
# On the client, /dev/rbd1 is now 8G, but the filesystem is still 3G
root@client:/mnt# df -Th /mnt
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd1      ext4  2.9G  209M  2.6G   8% /mnt
root@client:/mnt# df -Th /dev/rbd1
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd1      ext4  2.9G  209M  2.6G   8% /mnt
# Grow the filesystem
# 1. Unmount
root@client:~# umount /mnt
# 2. Grow the filesystem on /dev/rbd1
root@client:~# resize2fs /dev/rbd1
# 3. Remount
root@client:~# mount /dev/rbd1 /mnt
root@client:~# df -Th /dev/rbd1
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd1      ext4  7.9G  214M  7.3G   3% /mnt
root@client:~# df -Th /mnt
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd1      ext4  7.9G  214M  7.3G   3% /mnt
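Two related points, offered as a hedged sketch rather than as steps from the walkthrough above: resize2fs only applies to ext2/3/4, so the xfs-formatted /dev/rbd0 from section 3.6 would be grown with xfs_growfs while mounted; and rbd refuses to shrink an image unless --allow-shrink is passed explicitly, which is part of why shrinking is discouraged.
# XFS: grow the filesystem while it is mounted (ext4 uses resize2fs as above)
xfs_growfs /mnt
# Shrinking requires an explicit flag and risks data loss if the filesystem is not shrunk first
rbd resize --pool myrbd1 --image mying3 --size 5G --allow-shrink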
4.9、Map and mount automatically at boot
root@client:~# cat /etc/rc.d/rc.local
rbd --user jack -p myrbd1 map mying3
mount /dev/rbd1 /mnt
[root@ceph-client2 ~]# chmod a+x /etc/rc.d/rc.local
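ceph-common also ships an rbdmap service driven by the /etc/ceph/rbdmap map file (visible in the ls output in section 4.4), which is arguably a cleaner way to achieve the same effect. The following is only a sketch, reusing the client.jack keyring path from earlier; check the rbdmap man page on your version for the exact map-file and fstab conventions.
# /etc/ceph/rbdmap: one image per line in the form  pool/image  id=<user>,keyring=<keyring file>
#   myrbd1/mying3 id=jack,keyring=/etc/ceph/ceph.client.jack.keyring
# Add a matching /etc/fstab entry using the _netdev option, e.g.:
#   /dev/rbd/myrbd1/mying3  /mnt  ext4  defaults,noatime,_netdev  0 0
# Then enable the service so images are mapped at boot
systemctl enable rbdmap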
5.1、View detailed image information
cephadmin@ceph-deploy:/etc/ceph$ rbd --pool myrbd1 --image mying1 info
rbd image 'mying1':
        size 5 GiB in 1280 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 144f88aecd24
        block_name_prefix: rbd_data.144f88aecd24
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten   # image features
        op_features:
        flags:
        create_timestamp: Fri Aug 20 22:08:32 2021
        access_timestamp: Fri Aug 20 22:08:32 2021
        modify_timestamp: Fri Aug 20 22:08:32 2021
cephadmin@ceph-deploy:/etc/ceph$ rbd --pool myrbd1 --image mying2 info
rbd image 'mying2':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 1458dabfc2f1
        block_name_prefix: rbd_data.1458dabfc2f1
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Fri Aug 20 22:11:30 2021
        access_timestamp: Fri Aug 20 22:11:30 2021
        modify_timestamp: Fri Aug 20 22:11:30 2021
cephadmin@ceph-deploy:/etc/ceph$ rbd ls --pool myrbd1 -l --format json --pretty-format
[
    {
        "image": "mying1",
        "id": "144f88aecd24",
        "size": 5368709120,
        "format": 2
    },
    {
        "image": "mying2",
        "id": "1458dabfc2f1",
        "size": 3221225472,
        "format": 2
    },
    {
        "image": "mying3",
        "id": "1893f853e249",
        "size": 3221225472,
        "format": 2
    }
]
cephadmin@ceph-deploy:/etc/ceph$ rbd help feature enable
usage: rbd feature enable [--pool <pool>] [--namespace <namespace>]
                          [--image <image>]
                          [--journal-splay-width <journal-splay-width>]
                          [--journal-object-size <journal-object-size>]
                          [--journal-pool <journal-pool>]
                          <image-spec> <features> [<features> ...]

Enable the specified image feature.

Positional arguments
  <image-spec>               image specification
                             (example: [<pool-name>/[<namespace>/]]<image-name>)
  <features>                 image features
                             [exclusive-lock, object-map, journaling]

Optional arguments
  -p [ --pool ] arg          pool name
  --namespace arg            namespace name
  --image arg                image name
  --journal-splay-width arg  number of active journal objects
  --journal-object-size arg  size of journal objects [4K <= size <= 64M]
  --journal-pool arg         pool for journal objects
Feature overview
(1) layering: supports layered snapshots, used for snapshots and copy-on-write. You can snapshot an image, protect the snapshot, and then clone new images from it; parent and child images share object data via COW.
(2) striping: supports striping v2, similar to RAID 0 except that in Ceph the data is spread across different objects; it can improve performance for workloads with a lot of sequential reads and writes.
(3) exclusive-lock: supports an exclusive lock, restricting an image to a single client at a time.
(4) object-map: supports an object map (depends on exclusive-lock), which speeds up data import/export and used-space accounting. When enabled, a bitmap of all of the image's objects is kept to record whether each object actually exists, which can speed up I/O in some scenarios.
(5) fast-diff: fast calculation of differences between an image and its snapshots (depends on object-map).
(6) deep-flatten: supports flattening of snapshots, used to break snapshot dependencies during snapshot management.
(7) journaling: controls whether data modifications are journaled; data can be recovered by replaying the journal (depends on exclusive-lock). Enabling it increases disk I/O.
(8) Features enabled by default since jewel: layering, exclusive-lock, object-map, fast-diff, deep-flatten
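Features can also be chosen at creation time rather than toggled afterwards. A sketch follows; the image name feature-demo is made up for illustration, and fast-diff implies its object-map and exclusive-lock dependencies, which are listed explicitly:
# Create an image with several features enabled from the start
rbd create feature-demo --size 3G --pool myrbd1 --image-format 2 \
    --image-feature layering,exclusive-lock,object-map,fast-diff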
cephadmin@ceph-deploy:/etc/ceph$ rbd --pool myrbd1 --image mying2 info
rbd image 'mying2':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 1458dabfc2f1
        block_name_prefix: rbd_data.1458dabfc2f1
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Fri Aug 20 22:11:30 2021
        access_timestamp: Fri Aug 20 22:11:30 2021
        modify_timestamp: Fri Aug 20 22:11:30 2021
# Enable the specified features on an image in a given pool:
cephadmin@ceph-deploy:/etc/ceph$ rbd feature enable exclusive-lock --pool myrbd1 --image mying2
cephadmin@ceph-deploy:/etc/ceph$ rbd feature enable object-map --pool myrbd1 --image mying2
cephadmin@ceph-deploy:/etc/ceph$ rbd feature enable fast-diff --pool myrbd1 --image mying2
# Verify
cephadmin@ceph-deploy:/etc/ceph$ rbd --pool myrbd1 --image mying2 info
rbd image 'mying2':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 1458dabfc2f1
        block_name_prefix: rbd_data.1458dabfc2f1
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff   # features now enabled
        op_features:
        flags: object map invalid, fast diff invalid
        create_timestamp: Fri Aug 20 22:11:30 2021
        access_timestamp: Fri Aug 20 22:11:30 2021
        modify_timestamp: Fri Aug 20 22:11:30 2021
# Disable the specified features on an image in a given pool
cephadmin@ceph-deploy:/etc/ceph$ rbd feature disable fast-diff --pool myrbd1 --image mying2
cephadmin@ceph-deploy:/etc/ceph$ rbd --pool myrbd1 --image mying2 info
rbd image 'mying2':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 1458dabfc2f1
        block_name_prefix: rbd_data.1458dabfc2f1
        format: 2
        features: layering, exclusive-lock   # the fast-diff feature is now disabled
        op_features:
        flags:
        create_timestamp: Fri Aug 20 22:11:30 2021
        access_timestamp: Fri Aug 20 22:11:30 2021
        modify_timestamp: Fri Aug 20 22:11:30 2021
# On the client: unmount the filesystem and unmap the image
root@ceph-client:/etc/ceph# umount /mnt
root@ceph-client:/etc/ceph# rbd --pool myrbd1 unmap mying3
Once an image is deleted, its data is deleted as well and cannot be recovered, so be careful when performing a delete.
cephadmin@ceph-deploy:/etc/ceph$ rbd help rm
usage: rbd rm [--pool <pool>] [--namespace <namespace>] [--image <image>]
              [--no-progress]
              <image-spec>

Delete an image.

Positional arguments
  <image-spec>         image specification
                       (example: [<pool-name>/[<namespace>/]]<image-name>)

Optional arguments
  -p [ --pool ] arg    pool name
  --namespace arg      namespace name
  --image arg          image name
  --no-progress        disable progress output

# Delete the mying1 image from the myrbd1 pool
cephadmin@ceph-deploy:/etc/ceph$ rbd rm --pool myrbd1 --image mying1
Removing image: 100% complete...done.
cephadmin@ceph-deploy:/etc/ceph$ rbd ls --pool myrbd1
mying2
mying3
Deleted image data cannot be recovered, but there is an alternative: move the image to the trash first, and only remove it from the trash later, once you are sure it can be deleted.
cephadmin@ceph-deploy:/etc/ceph$ rbd trash --help
status                      Show the status of this image.
trash list (trash ls)       List trash images.
trash move (trash mv)       Move an image to the trash.
trash purge                 Remove all expired images from trash.
trash remove (trash rm)     Remove an image from trash.
trash restore               Restore an image from trash.

# Check the status of the images
cephadmin@ceph-deploy:/etc/ceph$ rbd status --pool=myrbd1 --image=mying3
Watchers:
        watcher=172.168.32.111:0/80535927 client.14665 cookie=18446462598732840962
cephadmin@ceph-deploy:/etc/ceph$ rbd status --pool=myrbd1 --image=mying2
Watchers:
        watcher=172.168.32.111:0/1284154910 client.24764 cookie=18446462598732840961

# Move mying2 to the trash
cephadmin@ceph-deploy:/etc/ceph$ rbd trash move --pool myrbd1 --image mying2
cephadmin@ceph-deploy:/etc/ceph$ rbd ls --pool myrbd1
mying3

# List the images in the trash
cephadmin@ceph-deploy:/etc/ceph$ rbd trash list --pool myrbd1
1458dabfc2f1 mying2       # 1458dabfc2f1 is the image ID, needed to restore the image

# Restore the image from the trash
cephadmin@ceph-deploy:/etc/ceph$ rbd trash restore --pool myrbd1 --image mying2 --image-id 1458dabfc2f1
cephadmin@ceph-deploy:/etc/ceph$ rbd ls --pool myrbd1 -l
NAME    SIZE   PARENT  FMT  PROT  LOCK
mying2  3 GiB          2
mying3  8 GiB          2

# Permanently delete an image from the trash
# If the image is no longer needed, trash remove deletes it from the trash for good
cephadmin@ceph-deploy:/etc/ceph$ rbd trash remove --pool myrbd1 --image-id 1458dabfc2f1
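trash purge only removes images whose deferment period has expired, and that expiry can be set when the image is moved to the trash. A sketch follows; the date is purely illustrative, and the exact flag name should be confirmed with rbd trash move --help on your version:
# Defer permanent deletion until the given time, then let trash purge clean it up
rbd trash move --pool myrbd1 --image mying2 --expires-at "2021-09-01"
rbd trash purge --pool myrbd1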
9.1、Image snapshot commands
cephadmin@ceph-deploy:/etc/ceph$ rbd help snap
snap create (snap add)       # create a snapshot
snap limit clear             # remove the snapshot count limit for an image
snap limit set               # set the maximum number of snapshots for an image
snap list (snap ls)          # list snapshots
snap protect                 # protect a snapshot from deletion
snap purge                   # delete all unprotected snapshots
snap remove (snap rm)        # delete a snapshot
snap rename                  # rename a snapshot
snap rollback (snap revert)  # roll back to a snapshot
snap unprotect               # allow a snapshot to be deleted (remove protection)
# Check the current data on the client
root@client:~# ll -h /mnt
total 201M
drwxr-xr-x  3 root root 4.0K Aug 23 13:06 ./
drwxr-xr-x 22 root root  326 Aug 17 09:56 ../
drwx------  2 root root  16K Aug 23 13:04 lost+found/
-rw-r--r--  1 root root 200M Aug 23 13:06 rbd-test

# Create a snapshot on ceph-deploy
cephadmin@ceph-deploy:/etc/ceph$ rbd help snap create
usage: rbd snap create [--pool <pool>] [--namespace <namespace>]
                       [--image <image>] [--snap <snap>] [--skip-quiesce]
                       [--ignore-quiesce-error] [--no-progress]
                       <snap-spec>
cephadmin@ceph-deploy:/etc/ceph$ rbd snap create --pool myrbd1 --image mying3 --snap mying3-snap-20210823
Creating snap: 100% complete...done.

# Verify the snapshot
cephadmin@ceph-deploy:/etc/ceph$ rbd snap list --pool myrbd1 --image mying3
SNAPID  NAME                  SIZE   PROTECTED  TIMESTAMP
     4  mying3-snap-20210823  8 GiB             Mon Aug 23 14:01:30 2021
# On the client: delete the data and unmap the rbd device
root@client:~# rm -rf /mnt/rbd-test
root@client:~# ll /mnt
total 20
drwxr-xr-x  3 root root  4096 Aug 23 14:03 ./
drwxr-xr-x 22 root root   326 Aug 17 09:56 ../
drwx------  2 root root 16384 Aug 23 13:04 lost+found/
root@client:~# umount /mnt
root@client:~# rbd --pool myrbd1 unmap --image mying3

# Restore the data from the snapshot
root@client:~# rbd help snap rollback
usage: rbd snap rollback [--pool <pool>] [--namespace <namespace>]
                         [--image <image>] [--snap <snap>] [--no-progress]
                         <snap-spec>

# Roll back to the snapshot
cephadmin@ceph-deploy:/etc/ceph$ sudo rbd snap rollback --pool myrbd1 --image mying3 --snap mying3-snap-20210823
Rolling back to snapshot: 100% complete...done.

# Verify on the client
root@client:~# rbd --pool myrbd1 map mying3
/dev/rbd1
root@client:~# mount /dev/rbd1 /mnt
root@client:~# ll -h /mnt
total 201M
drwxr-xr-x  3 root root 4.0K Aug 23 13:06 ./
drwxr-xr-x 22 root root  326 Aug 17 09:56 ../
drwx------  2 root root  16K Aug 23 13:04 lost+found/
-rw-r--r--  1 root root 200M Aug 23 13:06 rbd-test
# The data has been restored
cephadmin@ceph-deploy:/etc/ceph$ rbd snap list --pool myrbd1 --image mying3
SNAPID  NAME                  SIZE   PROTECTED  TIMESTAMP
     4  mying3-snap-20210823  8 GiB             Mon Aug 23 14:01:30 2021
cephadmin@ceph-deploy:/etc/ceph$ rbd snap rm --pool myrbd1 --image mying3 --snap mying3-snap-20210823
Removing snap: 100% complete...done.
cephadmin@ceph-deploy:/etc/ceph$ rbd snap list --pool myrbd1 --image mying3
cephadmin@ceph-deploy:/etc/ceph$
# Set and change the snapshot count limit
cephadmin@ceph-deploy:/etc/ceph$ rbd snap limit set --pool myrbd1 --image mying3 --limit 30
cephadmin@ceph-deploy:/etc/ceph$ rbd snap limit set --pool myrbd1 --image mying3 --limit 20
cephadmin@ceph-deploy:/etc/ceph$ rbd snap limit set --pool myrbd1 --image mying3 --limit 15
# Clear the snapshot count limit
cephadmin@ceph-deploy:/etc/ceph$ rbd snap limit clear --pool myrbd1 --image mying3
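The snap protect and snap unprotect commands listed in 9.1 are not demonstrated above. A minimal sketch, reusing the mying3 snapshot name from the earlier example: a protected snapshot cannot be removed until it is unprotected, and protection is required before cloning from a snapshot.
# Protect a snapshot so it cannot be deleted (required before cloning from it)
rbd snap protect myrbd1/mying3@mying3-snap-20210823
# Remove the protection again so the snapshot can be deleted
rbd snap unprotect myrbd1/mying3@mying3-snap-20210823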