006 Managing Ceph RBD Block Devices


1. Ceph RBD Features

Full and incremental snapshots

Thin provisioning

Copy-on-write clones

Dynamic resizing (see the sketch below)
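Dynamic resizing is the only feature in this list that is not demonstrated later in the article, so here is a minimal sketch of how it might look against the rbd/testimg image and client.rbd user created below. It was not run as part of this lab; the mount point /mnt/ceph is assumed as in section 2.5, and the filesystem on top of the image has to be grown separately:

rbd resize --size 2G rbd/testimg --id rbd                   # grow the image; thin provisioning means no space is consumed until written
xfs_growfs /mnt/ceph                                        # grow the XFS filesystem on the mapped, mounted device
rbd resize --size 1G --allow-shrink rbd/testimg --id rbd    # shrinking is destructive and must be explicitly allowed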

2. Basic RBD Usage

2.1 Create an RBD pool

[root@ceph2 ceph]# ceph osd pool create rbd 64
pool 'rbd' created
[root@ceph2 ceph]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'

2.2 Client authentication

[root@ceph2 ceph]# ceph auth get-or-create client.rbd -o ./ceph.client.rbd.keyring     # export the keyring
[root@ceph2 ceph]# cat !$
cat ./ceph.client.rbd.keyring
[client.rbd]
    key = AQD+Qo5cdS0mOBAA6bPb/KKzSkSvwCRfT0nLXA==
[root@ceph2 ceph]# scp ./ceph.client.rbd.keyring ceph1:/etc/ceph/   # copy the keyring to the client
[root@ceph1 ceph-ansible]# ceph -s --id rbd     # authentication from the client fails: no caps have been granted yet
2019-03-17 20:54:53.528212 7fb6082aa700  0 librados: client.rbd authentication error (13) Permission denied
[errno 13] error connecting to the cluster
[root@ceph2 ceph]# ceph auth get client.rbd
exported keyring for client.rbd   # check the current caps
[client.rbd]
    key = AQD+Qo5cdS0mOBAA6bPb/KKzSkSvwCRfT0nLXA==
[root@ceph2 ceph]# ceph auth caps  client.rbd  mon 'allow r' osd 'allow rwx pool=rbd'     # update the caps: read on the monitors, rwx on the rbd pool
updated caps for client.rbd
[root@ceph2 ceph]# ceph auth get client.rbd     # verify: the client now has caps on the rbd pool
exported keyring for client.rbd
[client.rbd]
    key = AQD+Qo5cdS0mOBAA6bPb/KKzSkSvwCRfT0nLXA==
    caps mon = "allow r"
    caps osd = "allow rwx pool=rbd"
[root@ceph1 ceph-ansible]# ceph -s --id rbd    # authentication from the client now succeeds
  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: ceph4(active), standbys: ceph2, ceph3
    osd: 9 osds: 9 up, 9 in 
  data:
    pools:   2 pools, 192 pgs
    objects: 5 objects, 22678 bytes
    usage:   975 MB used, 133 GB / 134 GB avail
    pgs:     192 active+clean
[root@ceph1 ceph-ansible]# ceph osd pool ls --id rbd
testpool
rbd
[root@ceph1 ceph-ansible]# rbd ls rbd --id rbd

2.3 Create a block device

[root@ceph1 ceph-ansible]# rbd create --size 1G rbd/testimg  --id rbd
[root@ceph1 ceph-ansible]# rbd ls rbd --id rbd
testimg

2.4 Map the block device

[root@ceph1 ceph-ansible]# rbd map  rbd/testimg  --id rbd    # mapping fails
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
[root@ceph1 ceph-ansible]# rbd info rbd/testimg --id rbd    
rbd image 'testimg':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.fb1574b0dc51
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten    # supporting all of these features requires kernel 4.4 or later; the current kernel does not support them, so they must be disabled or the image cannot be mapped
    flags: 
    create_timestamp: Sun Mar 17 21:08:01 2019
[root@ceph1 ceph-ansible]# rbd feature disable rbd/testimg exclusive-lock, object-map, fast-diff, deep-flatten --id rbd  # disable the unsupported features
[root@ceph1 ceph-ansible]# rbd info rbd/testimg --id rbd
rbd image 'testimg':
    size 1024 MB in 256 objects                    # 1024 MB, split into 256 objects
    order 22 (4096 kB objects)                     # each object is 4 MB (2^22 bytes)
    block_name_prefix: rbd_data.fb1574b0dc51       # naming prefix of the underlying objects
    format: 2                                      # RBD image format 2, which supports features such as layering and cloning
    features: layering                             # only the layering feature remains enabled
    flags: 
    create_timestamp: Sun Mar 17 21:08:01 2019
[root@ceph1 ceph-ansible]# rbd map rbd/testimg  --id rbd   # mapping now succeeds
/dev/rbd0
[root@ceph1 ceph-ansible]# fdisk -l     # a new /dev/rbd0 block device appears
……略
Disk /dev/rbd0: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
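As an alternative to disabling features after the fact, the image can be created with only the layering feature in the first place, or the default feature set can be changed on the client. A minimal sketch, assuming the same rbd pool and client.rbd user (the image name testimg2 is only an illustration and is not used elsewhere in this article):

rbd create --size 1G --image-feature layering rbd/testimg2 --id rbd

# or set the default for all new images in ceph.conf on the client:
[client]
rbd default features = 1        # feature bitmask; 1 enables only layering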

2.5 Client operations

Notes on RBD clients

Ceph clients can mount RBD images using the native Linux kernel module, krbd.

Cloud and virtualization solutions such as OpenStack and libvirt use librbd to present RBD images as devices to virtual machine instances.

librbd cannot use the Linux page cache, so it implements its own in-memory cache, called the RBD cache.

The RBD cache uses memory on the client.

The RBD cache has two modes:

Write-back: data is written to the local cache first and flushed to disk periodically.

Write-through: data is written straight through to disk.

RBD cache parameters and tuning

RBD cache parameters must be added to the [client] section of the configuration file on the machine that issues the I/O requests (a sketch follows).
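A sketch of what such a [client] section in /etc/ceph/ceph.conf might look like. The option names are the standard librbd cache settings; the values shown are roughly the defaults and are only illustrative, not tuned recommendations. Note that these options only affect librbd consumers (QEMU/libvirt and similar); the krbd kernel mappings used in the rest of this article do not use the RBD cache:

[client]
rbd cache = true                            # enable the librbd cache
rbd cache writethrough until flush = true   # stay in write-through mode until the guest issues its first flush
rbd cache size = 33554432                   # 32 MB of cache per image
rbd cache max dirty = 25165824              # max dirty bytes before writeback is forced; 0 means pure write-through
rbd cache target dirty = 16777216           # start flushing once this many bytes are dirty
rbd cache max dirty age = 1                 # seconds dirty data may sit in the cache before being flushed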

Procedure

[root@ceph1 ceph-ansible]# mkfs.xfs /dev/rbd0

[root@ceph1 ceph-ansible]# mkdir /mnt/ceph

[root@ceph1 ceph-ansible]# mount /dev/rbd0 /mnt/ceph

[root@ceph1 ceph-ansible]# df -hT

Write some test data

[root@ceph1 ceph-ansible]# cd /mnt/ceph/

[root@ceph1 ceph]# touch 111

[root@ceph1 ceph]# echo "A boy and a girl" > 111

[root@ceph1 ceph]# ll

The client can mount and use the block device normally.

2.6 Create another block device for testing

[root@ceph1 ceph]# rbd create --size 1G rbd/cephrbd1 --id rbd

[root@ceph1 ceph]# rbd feature disable rbd/cephrbd1 exclusive-lock, object-map, fast-diff, deep-flatten  --id rbd

[root@ceph1 ceph]# rbd info rbd/cephrbd1 --id rbd

[root@ceph1 ceph]# rbd ls --id rbd

[root@ceph1 ceph]# mkdir /mnt/ceph2

[root@ceph1 ceph]# rbd map rbd/cephrbd1 --id rbd

[root@ceph1 ceph]# mkfs.xfs /dev/rbd1

[root@ceph1 ceph]# mount /dev/rbd1 /mnt/ceph2

[root@ceph1 ceph]# df -hT

2.7 Other block device operations

Under the hood a block device is still object storage: a 1024 MB image is split into 256 objects, each 4 MB in size.

These objects follow a fixed naming format, and that prefix can be used to grant a user access based on the object prefix (see the sketch below).
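For example, a sketch of how a separate client could be restricted to just the testimg image by keying on these prefixes (the user name client.testimg-only is hypothetical and this command was not run in the lab; the prefixes come from the rados listing below, and a real client may also need access to bookkeeping objects such as rbd_directory):

ceph auth get-or-create client.testimg-only mon 'allow r' osd 'allow rwx pool=rbd object_prefix rbd_data.fb1574b0dc51, allow rwx pool=rbd object_prefix rbd_header.fb1574b0dc51, allow rx pool=rbd object_prefix rbd_id.testimg'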

The underlying objects can be listed with rados:

[root@ceph1 ceph]# rados  -p rbd ls --id rbd

rbd_header.fb1574b0dc51
rbd_data.fb2074b0dc51.000000000000001f
rbd_data.fb1574b0dc51.00000000000000ba
rbd_data.fb2074b0dc51.00000000000000ff
rbd_data.fb1574b0dc51.0000000000000000
rbd_data.fb1574b0dc51.000000000000007e
rbd_data.fb1574b0dc51.000000000000007d
rbd_directory
rbd_header.fb2074b0dc51
rbd_data.fb2074b0dc51.000000000000005d
rbd_data.fb2074b0dc51.00000000000000d9
rbd_data.fb2074b0dc51.000000000000009b
rbd_data.fb1574b0dc51.000000000000001f
rbd_info
rbd_data.fb2074b0dc51.000000000000003e
rbd_data.fb1574b0dc51.00000000000000ff
rbd_data.fb2074b0dc51.000000000000007d
rbd_id.testimg
rbd_data.fb2074b0dc51.000000000000007c
rbd_id.cephrbd1
rbd_data.fb1574b0dc51.00000000000000d9
rbd_data.fb2074b0dc51.000000000000007e
rbd_data.fb2074b0dc51.00000000000000ba
rbd_data.fb1574b0dc51.000000000000003e
rbd_data.fb1574b0dc51.000000000000007c
rbd_data.fb1574b0dc51.00000000000000f8
rbd_data.fb1574b0dc51.000000000000005d
rbd_data.fb2074b0dc51.00000000000000f8
rbd_data.fb1574b0dc51.0000000000000001
rbd_data.fb1574b0dc51.000000000000009b
rbd_data.fb2074b0dc51.0000000000000000
rbd_data.fb2074b0dc51.0000000000000001

Check the status

[root@ceph1 ceph]# rbd status rbd/testimg --id rbd        # check the status (watchers)
Watchers:
watcher=172.25.250.10:0/310641078 client.64285 cookie=1
[root@ceph1 ceph]# rbd status rbd/cephrbd1 --id rbd
Watchers:
watcher=172.25.250.10:0/310641078 client.64285 cookie=2
[root@ceph1 ceph]# rbd rm testimg --id rbd        # attempt to delete; it fails because the image is still in use
2019-03-17 21:48:32.890752 7fac95ffb700 -1 librbd::image::RemoveRequest: 0x55785f1ee540 check_image_watchers: image has watchers - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
[root@ceph1 ceph]# rbd du testimg --id rbd     # check the provisioned and used sizes
warning: fast-diff map is not enabled for testimg. operation may be slow.
NAME    PROVISIONED   USED 
testimg       1024M 53248k 

3. Copying and Modifying RBD Images

3.1 Copy an RBD image

[root@ceph1 ceph]# rbd cp testimg testimg-copy --id rbd
Image copy: 100% complete...done.
[root@ceph1 ceph]# rbd ls --id rbd
cephrbd1
testimg
testimg-copy

3.2 Mount and use the copy on the client

[root@ceph1 ceph]# rbd map rbd/testimg-copy --id rbd         # map the copied block device
/dev/rbd2
[root@ceph1 ceph]# mkdir /mnt/ceph3
[root@ceph1 ceph]# mount /dev/rbd2 /mnt/ceph3                # mount fails: the copy carries the same XFS UUID as the original image, which is still mounted
mount: wrong fs type, bad option, bad superblock on /dev/rbd2,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@ceph1 ceph]# cd
[root@ceph1 ~]# umount /mnt/ceph                             # unmount the original device
[root@ceph1 ~]# mount /dev/rbd2 /mnt/ceph3                   # mount again; this time it succeeds
[root@ceph1 ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      xfs        40G  1.7G   39G   5% /
devtmpfs       devtmpfs  893M     0  893M   0% /dev
tmpfs          tmpfs     920M     0  920M   0% /dev/shm
tmpfs          tmpfs     920M   17M  904M   2% /run
tmpfs          tmpfs     920M     0  920M   0% /sys/fs/cgroup
tmpfs          tmpfs     184M     0  184M   0% /run/user/0
/dev/rbd1      xfs       2.0G   33M  2.0G   2% /mnt/ceph2
/dev/rbd2      xfs       2.0G   33M  2.0G   2% /mnt/ceph3

3.3 Check the data

[root@ceph1 ~]# cd /mnt/ceph3
[root@ceph1 ceph3]# ll               # the copy succeeded
total 4
-rw-r--r-- 1 root root 17 Mar 17 21:21 111

3.4 Delete and restore an RBD image

[root@ceph1 ceph3]# rbd trash mv testimg --id rbd   # to delete an image, first move it to the trash
[root@ceph1 ceph3]# rbd ls --id rbd                 # it is no longer listed
cephrbd1
testimg-copy
[root@ceph1 ceph3]# rbd trash ls --id rbd           # the trash now contains one block device
fb1574b0dc51 testimg
[root@ceph1 ceph3]# rbd trash restore fb1574b0dc51 --id rbd  # restore it from the trash
[root@ceph1 ceph3]# rbd ls --id rbd                 # the image is back
cephrbd1
testimg
testimg-copy
To permanently delete an RBD image from the trash, pass the image ID shown by rbd trash ls:
rbd trash rm [pool-name/]image-id
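For example, based on the trash listing above (hypothetical here, since the image was restored rather than purged in this lab):

rbd trash rm rbd/fb1574b0dc51 --id rbd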

4. RBD Snapshot Operations

4.1 Create a snapshot

[root@ceph1 ceph]# rbd snap create testimg-copy@snap1 --id rbd
[root@ceph1 ceph]# rbd snap ls testimg-copy --id rbd
SNAPID NAME     SIZE TIMESTAMP                
     4 snap1 2048 MB Sun Mar 17 22:08:12 2019 
[root@ceph1 ceph]# rbd showmapped --id rbd
id pool image        snap device    
0  rbd  testimg      -    /dev/rbd0 
1  rbd  cephrbd1     -    /dev/rbd1 
2  rbd  testimg-copy -    /dev/rbd2 
[root@ceph1 ceph3]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      xfs        40G  1.7G   39G   5% /
devtmpfs       devtmpfs  893M     0  893M   0% /dev
tmpfs          tmpfs     920M     0  920M   0% /dev/shm
tmpfs          tmpfs     920M   17M  904M   2% /run
tmpfs          tmpfs     920M     0  920M   0% /sys/fs/cgroup
tmpfs          tmpfs     184M     0  184M   0% /run/user/0
/dev/rbd1      xfs       2.0G   33M  2.0G   2% /mnt/ceph2
/dev/rbd2      xfs       2.0G   33M  2.0G   2% /mnt/ceph3
[root@ceph1 ceph]# cd /mnt/ceph3
[root@ceph1 ceph3]# ll
total 4
-rw-r--r-- 1 root root 17 Mar 17 21:21 111

4.2 Add new data

[root@ceph1 ceph3]# touch 222
[root@ceph1 ceph3]# echo aaa>test
[root@ceph1 ceph3]# echo bbb>test1
[root@ceph1 ceph3]# ls
111  222  test  test1

4.3 Create another snapshot

[root@ceph1 ceph3]# rbd snap ls testimg-copy --id rbd
SNAPID NAME     SIZE TIMESTAMP                
     4 snap1 2048 MB Sun Mar 17 22:08:12 2019 
     5 snap2 2048 MB Sun Mar 17 22:15:14 2019 

4.4 Delete data and recover it

[root@ceph1 ceph3]# rm -rf test      # delete some data
[root@ceph1 ceph3]# rm -rf 111
[root@ceph1 ceph3]# ls
222  test1
[root@ceph1 ceph3]# rbd snap revert testimg-copy@snap2 --id rbd   # roll back to the snapshot
Rolling back to snapshot: 100% complete...done.
[root@ceph1 ceph3]# ls  # check: the data has not come back, because the filesystem was still mounted during the rollback
222  test1
[root@ceph1 ceph3]# cd
[root@ceph1 ~]# umount /mnt/ceph3    # unmount the filesystem
[root@ceph1 ~]# mount  /dev/rbd2 /mnt/ceph3   # mount fails, likely because the unmount flushed cached filesystem state over the rolled-back image
mount: wrong fs type, bad option, bad superblock on /dev/rbd2,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@ceph1 ~]#  rbd snap revert testimg-copy@snap2 --id rbd    # roll back to the snapshot again, now that the filesystem is unmounted
Rolling back to snapshot: 100% complete...done.
[root@ceph1 ~]# mount  /dev/rbd2 /mnt/ceph3    # mounts normally
[root@ceph1 ~]# cd /mnt/ceph3
[root@ceph1 ceph3]# ll       # the data is back
total 12
-rw-r--r-- 1 root root 17 Mar 17 21:21 111
-rw-r--r-- 1 root root  0 Mar 17 22:13 222
-rw-r--r-- 1 root root  4 Mar 17 22:13 test
-rw-r--r-- 1 root root  4 Mar 17 22:14 test1

 

 

4.5 Delete snapshots

[root@ceph1 ceph3]# rbd snap rm testimg-copy@snap1 --id rbd    # delete a single snapshot
Removing snap: 100% complete...done.
[root@ceph1 ceph3]# rbd snap ls testimg-copy --id rbd          
SNAPID NAME     SIZE TIMESTAMP                
     5 snap2 2048 MB Sun Mar 17 22:15:14 2019
[root@ceph1 ceph3]# rbd snap purge testimg-copy --id rbd         # purge all remaining snapshots
Removing all snapshots: 100% complete...done.
[root@ceph1 ceph3]# rbd snap ls testimg-copy --id rbd 

5. RBD Clones

An RBD clone is a copy of an RBD image built on top of an RBD snapshot; it can be flattened into an image completely independent of its original source.

5.1 Create an image clone

[root@ceph1 ceph3]# rbd snap create testimg-copy@for-clone --id rbd
[root@ceph1 ceph3]# rbd snap ls testimg-copy --id rbd
SNAPID NAME         SIZE TIMESTAMP                
     8 for-clone 2048 MB Sun Mar 17 22:32:56 2019 
[root@ceph1 ceph3]# rbd snap  protect testimg-copy@for-clone --id rbd
[root@ceph1 ceph3]# rbd clone testimg-copy@for-clone  rbd/test-clone --id rbd
[root@ceph1 ceph3]# rbd ls --id rbd
cephrbd1
test-clone
testimg
testimg-copy

5.2 Mount the clone on the client

[root@ceph1 ceph3]# rbd info test-clone --id rbd
rbd image 'test-clone':
    size 2048 MB in 512 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.fb773d1b58ba
    format: 2
    features: layering
    flags: 
    create_timestamp: Sun Mar 17 22:35:12 2019
    parent: rbd/testimg-copy@for-clone
    overlap: 2048 MB
[root@ceph1 ceph3]# rbd flatten test-clone --id rbd      # flatten the clone so it no longer depends on the parent snapshot
Image flatten: 100% complete...done.
[root@ceph1 ceph3]# rbd info test-clone --id rbd
rbd image 'test-clone':
    size 2048 MB in 512 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.fb773d1b58ba
    format: 2
    features: layering
    flags: 
    create_timestamp: Sun Mar 17 22:35:12 2019
[root@ceph1 ceph3]# rbd map rbd/test-clone  --id rbd   # map the block device
/dev/rbd3
[root@ceph1 ceph3]# cd 
[root@ceph1 ~]# mount /dev/rbd3  /mnt/ceph              # mount fails because the original device is still mounted (same XFS UUID)
mount: wrong fs type, bad option, bad superblock on /dev/rbd3,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@ceph1 ~]# umount /mnt/ceph3                      # unmount the original block device
[root@ceph1 ~]# mount /dev/rbd3  /mnt/ceph             # now the mount succeeds
[root@ceph1 ~]# cd /mnt/ceph
[root@ceph1 ceph]# ll
total 12
-rw-r--r-- 1 root root 17 Mar 17 21:21 111
-rw-r--r-- 1 root root  0 Mar 17 22:13 222
-rw-r--r-- 1 root root  4 Mar 17 22:13 test
-rw-r--r-- 1 root root  4 Mar 17 22:14 test1
[root@ceph1 ceph]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      xfs        40G  1.7G   39G   5% /
devtmpfs       devtmpfs  893M     0  893M   0% /dev
tmpfs          tmpfs     920M     0  920M   0% /dev/shm
tmpfs          tmpfs     920M   17M  904M   2% /run
tmpfs          tmpfs     920M     0  920M   0% /sys/fs/cgroup
tmpfs          tmpfs     184M     0  184M   0% /run/user/0
/dev/rbd1      xfs       2.0G   33M  2.0G   2% /mnt/ceph2
/dev/rbd3      xfs       2.0G   33M  2.0G   2% /mnt/ceph

5.3 Delete the snapshot

[root@ceph2 ceph]# rbd snap rm testimg-copy@for-clone
Removing snap: 0% complete...failed.
rbd: snapshot 'for-clone' 2019-03-17 22:46:23.648170 7f39da26ad40 -1 librbd::Operations: snapshot is protected
is protected from removal.
[root@ceph2 ceph]# rbd snap unprotect  testimg-copy@for-clone
[root@ceph2 ceph]# rbd snap rm testimg-copy@for-clone
Removing snap: 100% complete...done.

5.4 Clone again

[root@ceph1 ~]# rbd snap create testimg-copy@for-clone2 --id rbd
[root@ceph1 ~]# rbd snap protect testimg-copy@for-clone2 --id rbd
[root@ceph1 ~]# rbd clone  testimg-copy@for-clone2 test-clone2 --id rbd
2019-03-17 22:56:38.296404 7f9a7bfff700 -1 librbd::image::CreateRequest: 0x7f9a680204f0 handle_create_id_object: error creating RBD id object: (17) File exists
2019-03-17 22:56:38.296440 7f9a7bfff700 -1 librbd::image::CloneRequest: error creating child: (17) File exists
rbd: clone error: (17) File exists
[root@ceph2 ceph]# rbd rm test-clone2    # an image named test-clone2 already existed, which is why the clone above failed with 'File exists'; remove it first
Removing image: 100% complete...done.
[root@ceph2 ceph]# rbd clone testimg-copy@for-clone2  rbd/test-clone2 --id rbd
[root@ceph2 ceph]# rbd flatten test-clone2 --id rbd
Image flatten: 100% complete...done.
[root@ceph1 ~]# rbd map rbd/test-clone2  --id rbd
/dev/rbd4
[root@ceph1 ~]# mkdir /mnt/ceph4
[root@ceph1 ~]# umount /mnt/ceph
[root@ceph1 ~]# mount /dev/rbd4  /mnt/ceph4     # if the mount fails here, check whether the original block device (or another device sharing its data) is still mounted; if so, unmount it first and then mount this device again
[root@ceph1 ~]# cd /mnt/ceph4
[root@ceph1 ceph4]# ll
total 12
-rw-r--r-- 1 root root 17 Mar 17 21:21 111
-rw-r--r-- 1 root root  0 Mar 17 22:13 222
-rw-r--r-- 1 root root  4 Mar 17 22:13 test
-rw-r--r-- 1 root root  4 Mar 17 22:14 test1

5.5 View a snapshot's child images

[root@ceph2 ceph]# rbd clone testimg-copy@for-clone2  test-clone3
[root@ceph2 ceph]# rbd info test-clone3
rbd image 'test-clone3':
    size 2048 MB in 512 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.fbba3d1b58ba
    format: 2
    features: layering
    flags: 
    create_timestamp: Sun Mar 17 23:07:34 2019
    parent: rbd/testimg-copy@for-clone2
    overlap: 2048 MB
[root@ceph2 ceph]# rbd children testimg-copy@for-clone2   # list the child images of this snapshot
rbd/test-clone3

6. Mount Automatically at Boot

After a reboot the mounts are gone. Write entries in /etc/fstab so the filesystems are mounted at boot, but this only works after the RBD images have been mapped, which is what the rbdmap service below takes care of.

6.1 Configure the /etc/fstab file

[root@ceph1 ceph]# vim /etc/fstab
/dev/rbd/rbd/testimg-copy  /mnt/ceph   xfs  defaults,_netdev   0 0 
/dev/rbd/rbd/cephrbd1      /mnt/ceph2  xfs  defaults,_netdev   0 0 

6.2 Configure rbdmap

[root@ceph1 ceph4]# cd /etc/ceph/
[root@ceph1 ceph]# ls
ceph.client.ning.keyring  ceph.client.rbd.keyring  ceph.conf  ceph.d  rbdmap
[root@ceph1 ceph]# vim rbdmap 
# RbdDevice             Parameters
#poolname/imagename     id=client,keyring=/etc/ceph/ceph.client.keyring
rbd/testimg-copy  id=rbd,keyring=/etc/ceph/ceph.client.rbd.keyring
rbd/cephrbd1      id=rbd,keyring=/etc/ceph/ceph.client.rbd.keyring

6.3 Start the rbdmap service and verify across a reboot

[root@ceph1 ceph]# systemctl start rbdmap     # start the service
[root@ceph1 ~]# systemctl enable rbdmap       # enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/rbdmap.service to /usr/lib/systemd/system/rbdmap.service.
[root@ceph1 ceph]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      xfs        40G  1.7G   39G   5% /
devtmpfs       devtmpfs  893M     0  893M   0% /dev
tmpfs          tmpfs     920M     0  920M   0% /dev/shm
tmpfs          tmpfs     920M   17M  904M   2% /run
tmpfs          tmpfs     920M     0  920M   0% /sys/fs/cgroup
tmpfs          tmpfs     184M     0  184M   0% /run/user/0
/dev/rbd0      xfs       2.0G   33M  2.0G   2% /mnt/ceph
/dev/rbd1      xfs       2.0G   33M  2.0G   2% /mnt/ceph2
[root@ceph1 ~]# reboot                      # reboot

[root@ceph1 ~]# df -hT                      # after the reboot the mounts come up automatically
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      xfs        40G  1.7G   39G   5% /
devtmpfs       devtmpfs  893M     0  893M   0% /dev
tmpfs          tmpfs     920M     0  920M   0% /dev/shm
tmpfs          tmpfs     920M   17M  904M   2% /run
tmpfs          tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/rbd0      xfs       2.0G   33M  2.0G   2% /mnt/ceph
/dev/rbd1      xfs       2.0G   33M  2.0G   2% /mnt/ceph2
tmpfs          tmpfs     184M     0  184M   0% /run/user/0

Verification successful!


Author's note: the content of this article comes mainly from teacher Yan Wei of Yutian Education and was verified hands-on by me. Readers who wish to repost it should first obtain permission from Yutian Education (http://www.yutianedu.com/) or from Mr. Yan himself (https://www.cnblogs.com/breezey/). Thank you!