ceph osd tree             # show the osd/host crush hierarchy
ceph osd pool ls          # list pools
ceph osd pool ls detail   # list pools with their settings
ceph fs ls                # list cephfs filesystems and their pools
ceph pg dump              # dump placement group stats
View and set the replica count
ceph osd lspools                       # list pools
ceph osd pool get cephfs_data size     # show the pool's replica count
ceph osd pool set cephfs_data size 3   # set the pool's replica count to 3
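A minimal sketch to print the replica count of every pool in one pass (assumes the ceph CLI has admin access; pool names come from the live cluster):
for pool in $(ceph osd pool ls); do
    echo -n "${pool}: "
    ceph osd pool get "${pool}" size   # prints "size: N"
done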
object id
(1) Find the object id for a file
cephfs /mnt/cephfs/ssd/file.txt map
(2) Find the OSD that holds an object id
ceph osd map cephfs_data_ssd 100001ae68f.00000000
(3) List all object ids in a pool
rados ls -p cephfs_data_ssd | grep "100001ae690" | sort
(4) Check the object size:
rados -p cephfs_data get 10000006977.00000943 /tmp/ss.log   # export the object
ll -lh /tmp/ss.log                                          # check the exported content
-rw-r--r-- 1 root root 4.0M Jun 21 10:37 /tmp/ss.log
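A minimal sketch of the whole lookup for one file, assuming the default CephFS layout (data objects are named <hex inode>.<8-hex-digit stripe index>) and the example pool cephfs_data_ssd from above:
file=/mnt/cephfs/ssd/file.txt
prefix=$(printf '%x' "$(stat -c %i "$file")")               # hex inode == object name prefix
rados ls -p cephfs_data_ssd | grep "^${prefix}\." | sort    # all objects backing the file
ceph osd map cephfs_data_ssd "${prefix}.00000000"           # PG and OSDs of the first object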
crush rule
(1) Inspect
ceph osd crush rule ls     # list crush rules
ceph osd crush rule dump   # show crush rule details
ceph osd crush tree        # show the crush device tree
ceph osd crush dump        # crush rules + device map, full detail
(2) Export the crushmap
ceph osd getcrushmap -o /tmp/crushmap_1029               # dump the binary crushmap
crushtool -d /tmp/crushmap_1029 -o /tmp/dcrushmap_1029   # decompile the crushmap into text
(3) Import the crushmap
crushtool -c /tmp/dcrushmap_1029.new -o /tmp/crushmap_1029.new   # compile the edited crushmap
ceph osd setcrushmap -i /tmp/crushmap_1029.new                   # inject the crushmap
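Optional sanity check before injecting the new map, a sketch assuming the rule you edited has id 0 and the pool keeps 3 replicas (adjust --rule / --num-rep to match):
crushtool -i /tmp/crushmap_1029.new --test --show-statistics --rule 0 --num-rep 3     # mapping summary
crushtool -i /tmp/crushmap_1029.new --test --show-bad-mappings --rule 0 --num-rep 3   # inputs that fail to map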
bucket
ceph osd crush add-bucket ssd_host2 host         # add a bucket named ssd_host2 of type host
ceph osd crush move osd.8 host=ssd_host2         # move the osd into the specified bucket
ceph osd crush add osd.8 0.00980 host=host_ssd   # add the osd to the specified bucket with the given weight
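A minimal sketch of the full flow for a brand-new host bucket, assuming the cluster's crush root is named default:
ceph osd crush add-bucket ssd_host2 host     # create the host bucket
ceph osd crush move ssd_host2 root=default   # attach it under the crush root
ceph osd crush move osd.8 host=ssd_host2     # place the osd under the new host
ceph osd tree                                # verify the resulting hierarchy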