Ceph Block Storage Client


Ceph Block Storage

A Ceph block device, formerly known as a RADOS block device (RBD), provides clients with reliable, distributed, high-performance block storage disks. RADOS block devices make use of the librbd library and store blocks of data in sequential form, striped across multiple OSDs in the Ceph cluster. RBD is backed by Ceph's RADOS layer, so every block device is distributed across multiple Ceph nodes, giving it high performance and excellent reliability. RBD has native Linux kernel support, and the RBD driver has been well integrated with the Linux kernel for years. Besides reliability and performance, RBD also provides enterprise features such as full and incremental snapshots, thin provisioning, copy-on-write cloning, and dynamic resizing; RBD also supports in-memory caching, which greatly improves its performance.
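
For illustration, the enterprise features above each map to a one-line rbd command. A minimal sketch, assuming an image named rbd1 already exists in the default pool (the image and snapshot names here are hypothetical):

rbd snap create rbd1@snap1        # take a full snapshot of the image
rbd snap protect rbd1@snap1       # protect the snapshot so it can be cloned
rbd clone rbd1@snap1 rbd1-clone   # create a copy-on-write clone from the snapshot
rbd resize rbd1 --size 20480      # dynamically grow the thin-provisioned image to 20GiB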

Installing the Ceph Block Storage Client

Create a Ceph block client user and authentication key

[ceph-admin@ceph-node1 my-cluster]$ ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' | tee ./ceph.client.rbd.keyring
[client.rbd]
    key = AQChG2Vcu552KRAAMf4/SdfSVa4sFDZPfsY8bg==
    
[ceph-admin@ceph-node1 my-cluster]$ ceph auth get client.rbd
exported keyring for client.rbd
[client.rbd]
    key = AQChG2Vcu552KRAAMf4/SdfSVa4sFDZPfsY8bg==
    caps mon = "allow r"
    caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=rbd"

Copy the keyring and the configuration file to the client

[ceph-admin@ceph-node1 my-cluster]$ scp ceph.client.rbd.keyring /etc/ceph/ceph.conf root@192.168.0.123:/etc/ceph

Check that the client meets the block device requirements

[root@localhost ~]# uname -r
3.10.0-862.el7.x86_64
[root@localhost ~]# modprobe rbd
[root@localhost ~]# echo $?
0
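
To double-check that the rbd module actually loaded, you can also grep the kernel module list (the exact output depends on the kernel build):

[root@localhost ~]# lsmod | grep rbd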

Install the Ceph client packages

[root@localhost ~]# wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
[root@localhost ~]# yum install -y ceph

Test connecting to the cluster with the key

[root@localhost ~]# ceph -s --name client.rbd
  cluster:
    id:     cde2c9f7-009e-4bb4-a206-95afa4c43495
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active), standbys: ceph-node2, ceph-node3
    osd: 9 osds: 9 up, 9 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   9.06GiB used, 171GiB / 180GiB avail
    pgs:

Creating and Mapping a Block Device on the Client

Create the rbd pool

[ceph-admin@ceph-node1 my-cluster]$ ceph osd lspools
[ceph-admin@ceph-node1 my-cluster]$ ceph osd pool create rbd 128
pool 'rbd' created
[ceph-admin@ceph-node1 my-cluster]$ ceph osd lspools
1 rbd,
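
Note: this cluster runs Luminous or later (it has active/standby mgr daemons), and on those releases a newly created pool should also be tagged for the RBD application, otherwise the cluster reports a pool-application health warning. A minimal sketch:

[ceph-admin@ceph-node1 my-cluster]$ ceph osd pool application enable rbd rbd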

Create a block device from the client

[root@localhost ceph]# rbd create rbd1 --size 10240 --name client.rbd

Verify the image

[root@localhost ceph]# rbd ls -p rbd --name client.rbd
rbd1
[root@localhost ceph]# rbd list --name client.rbd
rbd1
[root@localhost ceph]# rbd --image rbd1 info --name client.rbd
rbd image 'rbd1':
    size 10GiB in 2560 objects
    order 22 (4MiB objects)
    block_name_prefix: rbd_data.faa76b8b4567
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags: 
    create_timestamp: Thu Feb 14 17:53:54 2019

Map the image on the client

[root@localhost ceph]# rbd map --image rbd1 --name client.rbd
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address

The map fails because the image has features the client kernel does not support:

layering: layering support
exclusive-lock: exclusive locking support
object-map: object map support (requires exclusive-lock)
deep-flatten: snapshot flatten support
fast-diff: fast diff calculation (requires object-map)

With the krbd (kernel RBD) client on this CentOS 3.10 kernel, the image cannot be mapped because the kernel does not support object-map, deep-flatten, or fast-diff (support for these was introduced in kernel 4.9). To work around this, disable the unsupported features. There are several ways to do that:

1) Disable the features dynamically

[root@localhost ceph]# rbd feature disable rbd1 exclusive-lock object-map fast-diff deep-flatten --name client.rbd

2) Enable only the layering feature when creating the image

[root@localhost ceph]# rbd create rbd2 --size 10240 --image-feature layering --name client.rbd

3) Disable them in the Ceph configuration file

rbd default features = 1
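
The value 1 is the feature bitmask for layering alone (layering=1, striping=2, exclusive-lock=4, object-map=8, fast-diff=16, deep-flatten=32), so images created after this change get only the layering feature. A minimal sketch of where the option can go in the client's /etc/ceph/ceph.conf:

[client]
rbd default features = 1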

Map the image again

[root@localhost ceph]# rbd map --image rbd1 --name client.rbd
/dev/rbd0
[root@localhost ceph]# rbd showmapped --name client.rbd
id pool image snap device    
0  rbd  rbd1  -    /dev/rbd0
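
In addition to /dev/rbd0, the udev rules shipped with Ceph create a stable name of the form /dev/rbd/<pool>/<image>, which does not depend on the order in which images are mapped; the auto-mount script later in this post relies on it:

[root@localhost ceph]# ls -l /dev/rbd/rbd/rbd1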

Create a filesystem and mount it

[root@localhost ceph]# fdisk -l /dev/rbd0

Disk /dev/rbd0: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

[root@localhost ceph]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=163840 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ceph]# mkdir /mnt/ceph-disk1
[root@localhost ceph]# mount /dev/rbd0 /mnt/ceph-disk1
[root@localhost ceph]# df -h /mnt/ceph-disk1
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0        10G   33M   10G   1% /mnt/ceph-disk1

Write a test file

[root@localhost ceph]# ll /mnt/ceph-disk1/
total 0
[root@localhost ceph]# dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.127818 s, 820 MB/s
[root@localhost ceph]# ll /mnt/ceph-disk1/
total 102400
-rw-r--r-- 1 root root 104857600 Feb 15 10:47 file1
[root@localhost ceph]# df -h /mnt/ceph-disk1/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0        10G  133M  9.9G   2% /mnt/ceph-disk1

Mount Automatically at Boot

Download the mount script

[root@localhost ceph]# wget -O /usr/local/bin/rbd-mount https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount
[root@localhost ceph]# chmod +x /usr/local/bin/rbd-mount
[root@localhost ceph]# cat /usr/local/bin/rbd-mount
#!/bin/bash

# Pool name where block device image is stored
export poolname=rbd
 
# Disk image name
export rbdimage=rbd1
 
# Mounted Directory
export mountpoint=/mnt/ceph-disk1
 
# The action (m = mount, u = unmount) is passed in from the systemd service
# as the first argument; the pool, image, and mount point are set above.
if [ "$1" == "m" ]; then
   modprobe rbd
   rbd feature disable $rbdimage object-map fast-diff deep-flatten
   rbd map $rbdimage --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
   mkdir -p $mountpoint
   mount /dev/rbd/$poolname/$rbdimage $mountpoint
fi
if [ "$1" == "u" ]; then
   umount $mountpoint
   rbd unmap /dev/rbd/$poolname/$rbdimage
fi

Set it up as a systemd service

[root@localhost ceph]# wget -O /etc/systemd/system/rbd-mount.service https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount.service
[root@localhost ceph]# systemctl daemon-reload
[root@localhost ceph]# systemctl enable rbd-mount.service
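
The contents of the downloaded unit file are not shown above. A hypothetical sketch of what such a oneshot unit might look like, wired to the m/u arguments the script expects:

[Unit]
Description=RADOS block device mapping for rbd1
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount m
ExecStop=/usr/local/bin/rbd-mount u

[Install]
WantedBy=multi-user.target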

Reboot and verify the automatic mount

[root@localhost ceph]# reboot -f
[root@localhost ceph]# df -h /mnt/ceph-disk1/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd1        10G  133M  9.9G   2% /mnt/ceph-disk1
[root@localhost ceph]# ll -h /mnt/ceph-disk1/
total 100M
-rw-r--r-- 1 root root 100M Feb 15 10:47 file1
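
Note that after the reboot the image happens to come up as /dev/rbd1 rather than /dev/rbd0: kernel rbd device numbers are not stable across mappings, which is exactly why the mount script uses the stable /dev/rbd/$poolname/$rbdimage path instead of a fixed /dev/rbdX name.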

 

