Ceph Block Storage


1. Official Documentation

https://docs.ceph.com/en/latest/
http://docs.ceph.org.cn/rbd/rbd/

2. Block Storage

Block storage in Ceph is provided by RBD (RADOS Block Device), an ordered sequence of bytes and the most commonly used of Ceph's three storage types. Because RBD is built on top of RADOS, it inherits RADOS features such as snapshots, replication, and consistency, and therefore offers snapshot, clone, and backup operations. Ceph block devices are thin-provisioned, can be resized, and store their data striped across multiple OSDs in the Ceph cluster.
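
Because RBD images are thin-provisioned, creating an image does not immediately consume its full size on the cluster; space is used only as data is written. A minimal sketch of how to observe this, assuming the ceph-demo pool and rbd-demo.img image created in the sections below already exist:

# rbd du reports provisioned vs. actually used space; USED stays well below PROVISIONED until data is written
[root@node1 ~]# rbd du ceph-demo/rbd-demo.img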

2.1 Creating a Pool

Official documentation: http://docs.ceph.org.cn/rados/operations/pools/
# 1. List existing pools
[root@node1 ceph-deploy]# ceph osd lspools

# 2. First create a pool (name: ceph-demo, pg_num 64, pgp_num 64; the replica count [size] of a replicated pool defaults to 3)
[root@node1 ceph-deploy]# ceph osd pool create ceph-demo 64 64 
pool 'ceph-demo' created

# 3. List pools again
[root@node1 ceph-deploy]# ceph osd lspools
1 ceph-demo

# 4. Check pg_num, pgp_num, and the replica count
[root@node1 ceph-deploy]# ceph osd pool get ceph-demo pg_num
pg_num: 64
[root@node1 ceph-deploy]# ceph osd pool get ceph-demo pgp_num
pgp_num: 64
[root@node1 ceph-deploy]# ceph osd pool get ceph-demo size
size: 3

# 5. Check the CRUSH rule
[root@node1 ceph-deploy]# ceph osd pool get ceph-demo crush_rule
crush_rule: replicated_rule

# 6. Use set to adjust values
[root@node1 ceph-deploy]# ceph osd pool set ceph-demo pg_num  128 
set pool 1 pg_num to 128
[root@node1 ceph-deploy]# ceph osd pool set ceph-demo pgp_num  128
set pool 1 pgp_num to 128

# 7. Check pg_num and pgp_num after the adjustment
[root@node1 ceph-deploy]# ceph osd pool get ceph-demo pg_num  
pg_num: 128
[root@node1 ceph-deploy]# ceph osd pool get ceph-demo pgp_num  
pgp_num: 128
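
All of these settings can also be inspected in one place, and on Nautilus and later the PG autoscaler can manage pg_num automatically. A minimal sketch using standard commands (the autoscaler line is optional and assumes a Nautilus or newer cluster):

# Show pool id, replica size, pg_num, crush rule, application tags, etc. in a single listing
[root@node1 ceph-deploy]# ceph osd pool ls detail

# Optional: let the autoscaler manage pg_num for this pool instead of tuning it by hand
[root@node1 ceph-deploy]# ceph osd pool set ceph-demo pg_autoscale_mode on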

2.2 Creating a Block Storage Image

Official documentation: http://docs.ceph.org.cn/rbd/rados-rbd-cmds/
# 1. Two ways to create an image (both are equivalent): -p specifies the pool name, --image specifies the image (block device) name
[root@node1 ~]# rbd create -p ceph-demo --image rbd-demo.img --size 1G
[root@node1 ~]# rbd create ceph-demo/rbd-demo1.img --size 1G 

# 2. List the images in the pool
[root@node1 ~]# rbd -p ceph-demo ls
rbd-demo.img
rbd-demo1.img

# 3. Show an image's details (note the image is split into 256 objects)
[root@node1 ~]# rbd info ceph-demo/rbd-demo.img
rbd image 'rbd-demo.img':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 14a48cb7303ce
        block_name_prefix: rbd_data.14a48cb7303ce
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten  # these features will be disabled later
        op_features: 
        flags: 
        create_timestamp: Sun Jan 24 14:13:37 2021
        access_timestamp: Sun Jan 24 14:13:37 2021
        modify_timestamp: Sun Jan 24 14:13:37 2021

# 4. Delete an image
[root@node1 ~]#  rbd rm ceph-demo/rbd-demo1.img 
Removing image: 100% complete...done.
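
The feature mismatch hit in section 2.3 can also be avoided at creation time by enabling only the layering feature, which the kernel RBD client supports. A minimal sketch (the image name rbd-demo2.img is just an example):

# Create an image with only the layering feature enabled
[root@node1 ~]# rbd create ceph-demo/rbd-demo2.img --size 1G --image-feature layering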

2.3 Using the Block Storage Image

# 1. Check how many images the pool currently has
[root@node1 ~]# rbd list  ceph-demo
rbd-demo.img

# 2. Mapping the image directly fails, because the kernel does not support some of the image features
[root@node1 ~]# rbd map ceph-demo/rbd-demo.img
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable ceph-demo/rbd-demo.img object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address

# 3. Disable the unsupported features
[root@node1 ~]# rbd feature disable ceph-demo/rbd-demo.img deep-flatten
[root@node1 ~]# rbd feature disable ceph-demo/rbd-demo.img fast-diff
[root@node1 ~]# rbd feature disable ceph-demo/rbd-demo.img object-map
[root@node1 ~]# rbd feature disable ceph-demo/rbd-demo.img exclusive-lock   # so that only layering remains
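
The same result can be achieved in a single call by listing several features at once, as the error message above already suggests; this sketch simply adds exclusive-lock to that list so that only layering remains:

[root@node1 ~]# rbd feature disable ceph-demo/rbd-demo.img deep-flatten fast-diff object-map exclusive-lock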

# 4. Check that the features were disabled
[root@node1 ~]# rbd info ceph-demo/rbd-demo.img
rbd image 'rbd-demo.img':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 14a48cb7303ce
        block_name_prefix: rbd_data.14a48cb7303ce
        format: 2
        features: layering   # once only layering is left, the image can be mapped
        op_features: 
        flags: 
        create_timestamp: Sun Jan 24 14:13:37 2021
        access_timestamp: Sun Jan 24 14:13:37 2021
        modify_timestamp: Sun Jan 24 14:13:37 2021
        
# 5. Map the image again
[root@node1 ~]# rbd map ceph-demo/rbd-demo.img
/dev/rbd0

# 6. List mapped devices
[root@node1 ~]# rbd device  list
id pool      namespace image        snap device    
0  ceph-demo           rbd-demo.img -    /dev/rbd0 (this behaves just like a local disk and can be partitioned and formatted)

# 7. fdisk shows the corresponding details
[root@node1 ~]# fdisk -l | grep rbd0
Disk /dev/rbd0: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

# 8. For example, format it
[root@node1 ~]# mkfs.ext4 /dev/rbd0

# 9. Then mount it
[root@node1 ~]# mkdir /mnt/rbd-demo
[root@node1 ~]# mount /dev/rbd0 /mnt/rbd-demo

# 10. Check with df
[root@node1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/rbd0                976M  2.6M  907M   1% /mnt/rbd-demo
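
When the device is no longer needed, unmount the filesystem and unmap the RBD device so the kernel releases the image. A minimal sketch:

# Unmount and unmap (rbd device unmap /dev/rbd0 is equivalent to rbd unmap)
[root@node1 ~]# umount /mnt/rbd-demo
[root@node1 ~]# rbd unmap /dev/rbd0

# The device list should now be empty
[root@node1 ~]# rbd device list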

2.4 Expanding Block Storage

# 1. Work with the image created earlier
[root@node1 ~]# rbd -p ceph-demo ls
rbd-demo.img

# 2. Check its details
[root@node1 ~]# rbd -p ceph-demo info --image rbd-demo.img
rbd image 'rbd-demo.img':
        size 1 GiB in 256 objects   # currently 1 GiB
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 14a48cb7303ce
        block_name_prefix: rbd_data.14a48cb7303ce
        format: 2
        features: layering
        op_features: 
        flags: 
        create_timestamp: Sun Jan 24 14:13:37 2021
        access_timestamp: Sun Jan 24 14:13:37 2021
        modify_timestamp: Sun Jan 24 14:13:37 2021

# 3. Expand the image (shrinking is also possible, but not recommended)
[root@node1 ~]# rbd resize ceph-demo/rbd-demo.img --size 2G
Resizing image: 100% complete...done.

# 4. Check again after the resize
[root@node1 ~]# rbd -p ceph-demo info --image rbd-demo.img
rbd image 'rbd-demo.img':
        size 2 GiB in 512 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 14a48cb7303ce
        block_name_prefix: rbd_data.14a48cb7303ce
        format: 2
        features: layering
        op_features: 
        flags: 
        create_timestamp: Sun Jan 24 14:13:37 2021
        access_timestamp: Sun Jan 24 14:13:37 2021
        modify_timestamp: Sun Jan 24 14:13:37 2021

# 5. The block device itself has grown, but the mounted filesystem does not grow automatically
[root@node1 ~]# fdisk -l | grep rbd0
Disk /dev/rbd0: 2147 MB, 2147483648 bytes, 4194304 sectors
[root@node1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/rbd0                976M  2.6M  907M   1% /mnt/rbd-demo  # still 1 GiB here

# 6. Grow the filesystem (note: partitioning this kind of disk is not recommended, same as with cloud disks; use additional images instead)
[root@node1 ~]# blkid 
/dev/sr0: UUID="2018-11-25-23-54-16-00" LABEL="CentOS 7 x86_64" TYPE="iso9660" PTTYPE="dos" 
/dev/sdb: UUID="k4g1pw-rOvV-NG7w-ajnZ-qipH-kXwq-h0jY0o" TYPE="LVM2_member" 
/dev/sda1: UUID="ccb430ea-66c9-4c91-a4b4-ba870ca15943" TYPE="xfs" 
/dev/sda2: UUID="NegVJw-3XZn-BJeZ-NfKW-VROQ-roSa-LcKgoy" TYPE="LVM2_member" 
/dev/mapper/centos-root: UUID="59c5d6b6-e34d-4149-b28a-8a3b9c32536d" TYPE="xfs" 
/dev/sdc: UUID="h65OWm-ELDd-R5pO-HaQ0-Ejjs-cUWn-8Ejcpq" TYPE="LVM2_member" 
/dev/mapper/centos-swap: UUID="25f54a98-e472-4438-9641-eac952a46e3e" TYPE="swap" 
/dev/rbd0: UUID="911aadb8-bbf4-48d2-a62d-d86886af79dc" TYPE="ext4" 
# grow it
[root@node1 ~]# resize2fs /dev/rbd0
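
resize2fs only applies to ext2/3/4; if the image had been formatted with XFS, the equivalent step would be xfs_growfs against the mount point. Either way, df should now report roughly 2G. A minimal sketch of the verification:

# For XFS the command would instead be: xfs_growfs /mnt/rbd-demo
# Verify the filesystem now sees the new size
[root@node1 ~]# df -h /mnt/rbd-demo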

2.5 RBD Data Write Flow
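
An RBD image is striped into 4 MiB objects named after the image's block_name_prefix (rbd_data.14a48cb7303ce above). On write, each object is hashed into a placement group (PG), and CRUSH maps that PG onto a set of OSDs according to the pool's replication rule, which is how the image's data ends up spread across the cluster. A minimal sketch for observing this mapping with standard tools (the object name in the second command is an example of the rbd_data.<id>.<index> naming):

# List the backing objects of the image (only objects that have actually been written exist)
[root@node1 ~]# rados -p ceph-demo ls | grep rbd_data.14a48cb7303ce

# Show which PG and which OSDs one of those objects maps to
[root@node1 ~]# ceph osd map ceph-demo rbd_data.14a48cb7303ce.0000000000000000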

2.6 Resolving the Health Warning

# 1. Spot the problem
[root@node1 ~]# ceph -s
  cluster:
    id:     081dc49f-2525-4aaa-a56d-89d641cef302
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
 
  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 3h)
    mgr: node2(active, since 3h), standbys: node3, node1
    osd: 3 osds: 3 up (since 3h), 3 in (since 13h)
 
  data:
    pools:   1 pools, 128 pgs
    objects: 22 objects, 38 MiB
    usage:   3.1 GiB used, 57 GiB / 60 GiB avail
    pgs:     128 active+clean

# 2. Inspect it with the command Ceph provides
[root@node1 ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'ceph-demo'   # note this line
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.       # this line tells you how to fix it (just tag the pool with its application type)
    
# 3. The fix
[root@node1 ~]# ceph osd pool application enable ceph-demo rbd
enabled application 'rbd' on pool 'ceph-demo'

# 4. Check again
[root@node1 ~]# ceph -s
  cluster:
    id:     081dc49f-2525-4aaa-a56d-89d641cef302
    health: HEALTH_OK
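
The tag can be confirmed directly, and for RBD pools rbd pool init is the usual way to both tag and initialize a freshly created pool so the warning never appears. A minimal sketch:

# Confirm the application tag on the pool
[root@node1 ~]# ceph osd pool application get ceph-demo

# For a freshly created RBD pool, this would tag it as 'rbd' and initialize it in one step
# [root@node1 ~]# rbd pool init ceph-demo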

