ceph osd tree            # show the OSD/CRUSH hierarchy
ceph osd pool ls         # list pools
ceph osd pool ls detail  # list pools with details
ceph fs ls               # list CephFS filesystems
ceph pg dump             # dump all placement group info
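The full `ceph pg dump` output is very large; as a hedged aside, these narrower queries are usually available (output columns vary by release, and cephfs_data is an illustrative pool name):
ceph pg dump pgs_brief | head          # brief per-PG state and up/acting sets
ceph pg ls-by-pool cephfs_data | head  # PGs belonging to a single pool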
View and set replica count
ceph osd lspools                      # print the pool list
ceph osd pool get cephfs_data size    # view the pool's replica count
ceph osd pool set cephfs_data size 3  # set the pool's replica count
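A minimal sketch that combines the commands above to print the replica count of every pool (pool names are taken from ceph osd pool ls, so nothing beyond a working admin keyring is assumed):
for pool in $(ceph osd pool ls); do
    echo -n "$pool: "
    ceph osd pool get "$pool" size    # prints "size: N"
done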
object id
(1) Find the object ID of a file
cephfs /mnt/cephfs/ssd/file.txt map   # legacy cephfs tool; prints the file's object/OSD mapping
(2) Find the OSDs that hold an object ID (see the combined sketch after step (4))
ceph osd map cephfs_data_ssd 100001ae68f.00000000   # show the PG and OSDs for the object
(3) List all object IDs in a pool
rados ls -p cephfs_data_ssd | grep "100001ae690" | sort
(4) Check the object size
rados -p cephfs_data get 10000006977.00000943 /tmp/ss.log   # export the object to a file
ls -lh /tmp/ss.log                                          # inspect the exported file
-rw-r--r-- 1 root root 4.0M Jun 21 10:37 /tmp/ss.log
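A minimal sketch combining steps (1) and (2): it derives the name of the file's first data object from its inode (CephFS names data objects <inode-in-hex>.<8-hex-digit stripe index>) and then asks for its PG/OSDs. The file path and pool name are illustrative, and a non-default file layout may spread data across several such objects:
FILE=/mnt/cephfs/ssd/file.txt
POOL=cephfs_data_ssd
INODE_HEX=$(printf '%x' "$(stat -c %i "$FILE")")   # inode number in hex
OBJ="${INODE_HEX}.00000000"                        # first object of the file
ceph osd map "$POOL" "$OBJ"                        # show its PG and acting OSDs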
crush rule
(1) View
ceph osd crush rule ls     # list crush rules
ceph osd crush rule dump   # show crush rule details
ceph osd crush tree        # show the crush bucket/device tree
ceph osd crush dump        # crush rules + device map, full detail
(2) Export the crushmap
ceph osd getcrushmap -o /tmp/crushmap_1029              # dump the binary crushmap
crushtool -d /tmp/crushmap_1029 -o /tmp/dcrushmap_1029  # decompile the crushmap to text
(3) Import a crushmap
crushtool -c /tmp/dcrushmap_1029.new -o /tmp/crushmap_1029.new   # compile the edited text crushmap (-c, not -d)
ceph osd setcrushmap -i /tmp/crushmap_1029.new                   # load the new crushmap
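Before importing a recompiled crushmap, it can be dry-run with crushtool's test mode; a hedged sketch (rule id 0, 3 replicas, and the 0-9 input range are illustrative values):
crushtool -i /tmp/crushmap_1029.new --test --show-mappings --rule 0 --num-rep 3 --min-x 0 --max-x 9
This prints where ten sample inputs would be mapped, so obvious rule mistakes show up before ceph osd setcrushmap is run.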
bucket
ceph osd crush add-bucket ssd_host2 host        # create a new bucket named ssd_host2 with type host
ceph osd crush move osd.8 host=ssd_host2        # move an OSD into the specified bucket
ceph osd crush add osd.8 0.00980 host=host_ssd  # add an OSD to the specified bucket with the given weight
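A minimal end-to-end sketch of the bucket workflow (bucket name, OSD id, and weight are illustrative); note that a freshly created bucket also has to be attached under a root before CRUSH will place data through it:
ceph osd crush add-bucket ssd_host2 host         # create an empty host bucket
ceph osd crush move ssd_host2 root=default       # attach it under the default root
ceph osd crush add osd.8 0.00980 host=ssd_host2  # put osd.8 into it with weight 0.00980
ceph osd tree                                    # verify the new hierarchy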