ceph: 100.000% pgs not active after creating a pool


Original article: https://www.cnblogs.com/zyxnhr/p/10553717.html

1. Before creating the pool

[root@cluster9 ceph-cluster]# ceph -s
  cluster:
    id:     d81b3ce4-bcbc-4b43-870e-430950652315
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum cluster9
    mgr: cluster9(active)
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   3.06GiB used, 10.9TiB / 10.9TiB avail
    pgs:     
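The original post does not show the command that created the pool. Given the 128 PGs reported in the next step, it was presumably something along these lines (the pool name rbd is an assumption, not taken from the original):

# Assumed command -- the actual pool name and options are not shown in the original post
ceph osd pool create rbd 128 128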

2. After creating the pool

[root@cluster9 ceph-cluster]# ceph -s
  cluster:
    id:     d81b3ce4-bcbc-4b43-870e-430950652315
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum cluster9
    mgr: cluster9(active)
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0B
    usage:   3.06GiB used, 10.9TiB / 10.9TiB avail
    pgs:     100.000% pgs not active
             128 undersized+peered
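A quick way to see why every PG is stuck undersized+peered is to inspect the CRUSH topology and the pool's replication settings. On this cluster the three OSDs presumably all sit under the single host cluster9, while the default replicated rule wants each replica on a different host. <pool> below is a placeholder for the pool created above:

ceph osd tree                  # expect all three OSDs under one host bucket
ceph osd pool get <pool> size  # replicated pools default to size 3
ceph health detail             # lists the inactive/undersized PGs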

3. Change the CRUSH failure domain from host to osd

The PGs stay undersized+peered because the default CRUSH rule (step chooseleaf firstn 0 type host) requires each replica to be placed on a different host, and here all three OSDs sit on the single host cluster9. Switching the rule's failure domain from host to osd lets the replicas be spread across the three OSDs instead:

[root@cluster9 ceph-cluster]# cd /etc/ceph/
[root@cluster9 ceph]# ceph osd getcrushmap -o /etc/ceph/crushmap
18
[root@cluster9 ceph]# crushtool -d /etc/ceph/crushmap -o /etc/ceph/crushmap.txt
[root@cluster9 ceph]# sed -i 's/step chooseleaf firstn 0 type host/step chooseleaf firstn 0 type osd/' /etc/ceph/crushmap.txt
[root@cluster9 ceph]# grep 'step chooseleaf' /etc/ceph/crushmap.txt
    step chooseleaf firstn 0 type osd
[root@cluster9 ceph]# crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap-new
[root@cluster9 ceph]# ceph osd setcrushmap -i /etc/ceph/crushmap-new
19
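On Luminous and later releases (this cluster runs a mgr daemon, so that appears to be the case), an alternative to decompiling and recompiling the whole CRUSH map is to create a new replicated rule with an osd failure domain and assign it to the pool. The rule name replicated_osd and the <pool> placeholder are illustrative only:

# Create a replicated rule that chooses leaves of type osd under the default root
ceph osd crush rule create-replicated replicated_osd default osd
# Point the pool at the new rule
ceph osd pool set <pool> crush_rule replicated_osd

Either way, an osd failure domain allows replicas to land on the same host, so this setup is only appropriate for a single-host test cluster like the one shown here.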

4. Check the ceph status again

[root@cluster9 ceph]# ceph -s
  cluster:
    id:     d81b3ce4-bcbc-4b43-870e-430950652315
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum cluster9
    mgr: cluster9(active)
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0B
    usage:   3.06GiB used, 10.9TiB / 10.9TiB avail
    pgs:     128 active+clean

 

