Repairing a Ceph OSD that is down


Approach 1: Reactivate all OSDs directly

1. Check the OSD tree

root@ceph01:~# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.29279 root default                                      
-2 0.14639     host ceph01                                   
 0 0.14639         osd.0        up  1.00000          1.00000 
-3 0.14639     host ceph02                                   
 1 0.14639         osd.1      down        0          1.00000 

osd.1 is shown as down.
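
Before reactivating, it is worth checking why the OSD went down in the first place. A minimal check, assuming systemd and the default Jewel log paths, run on the node that hosts osd.1 (ceph02 here):

# Check the daemon state and the tail of its log (unit name and paths are the defaults; adjust to your deployment)
systemctl status ceph-osd@1
tail -n 50 /var/log/ceph/ceph-osd.1.log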

2. Activate all OSDs again (note: all of them, not just the one that is down)

In the command below, /dev/sdb1 is the actual storage disk or partition used by each OSD node.

ceph-deploy osd activate ceph01:/dev/sdb1 ceph02:/dev/sdb1
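
If ceph-deploy is not at hand, restarting the OSD daemon directly on its node may bring it back up as well. This is a sketch assuming a systemd-based Jewel installation:

# On ceph02: restart the daemon for osd.1 (assumes the standard ceph-osd@<id> unit template)
systemctl restart ceph-osd@1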

3. Check the OSD tree and health status

root@ceph01:~/my-cluster# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.29279 root default                                      
-2 0.14639     host ceph01                                   
 0 0.14639         osd.0        up  1.00000          1.00000 
-3 0.14639     host ceph02                                   
 1 0.14639         osd.1        up  1.00000          1.00000 
root@ceph01:~/my-cluster# 
root@ceph01:~/my-cluster# ceph -s
    cluster ecacda71-af9f-46f9-a2a3-a35c9e51db9e
     health HEALTH_OK
     monmap e1: 1 mons at {ceph01=10.111.131.125:6789/0}
            election epoch 14, quorum 0 ceph01
     osdmap e150: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v9284: 64 pgs, 1 pools, 17 bytes data, 3 objects
            10310 MB used, 289 GB / 299 GB avail
                  64 active+clean

Only a status of HEALTH_OK means the cluster is healthy.
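
If the status is anything other than HEALTH_OK, the following command lists the specific problems (down OSDs, stuck PGs, and so on) and usually points at what to fix next:

ceph health detail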

Approach 2: Rebuild the OSD that is down

This method is mainly for the case where an OSD's disk has physically failed, so the OSD can no longer be activated.
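
Before rebuilding, it can help to confirm that the disk has actually failed. A hedged example, assuming the OSD lives on /dev/sdb and smartmontools is installed on ceph02:

# Look for kernel I/O errors and query the drive's SMART health status
dmesg | grep -i sdb
smartctl -H /dev/sdb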

1. Check the OSD tree

root@ceph01:~# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.29279 root default                                      
-2 0.14639     host ceph01                                   
 0 0.14639         osd.0        up  1.00000          1.00000 
-3 0.14639     host ceph02                                   
 1 0.14639         osd.1      down        0          1.00000 

osd.1 is shown as down.

2. Set osd.1's status to out

root@ceph02:~# ceph osd out osd.1
osd.1 is already out. 
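
The "already out" reply is expected here: once an OSD has been down for longer than mon_osd_down_out_interval (600 seconds by default in Jewel), the monitor marks it out automatically, which is also why REWEIGHT shows 0 in the tree above. To check the interval on your cluster (assuming you are on the monitor host ceph01):

ceph daemon mon.ceph01 config get mon_osd_down_out_interval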

3. Remove it from the cluster

root@ceph02:~# ceph osd rm osd.1  
removed osd.1

4. Remove it from the CRUSH map

root@ceph02:~# ceph osd crush rm osd.1 
removed item id 1 name 'osd.1' from crush map

5. Delete osd.1's authentication credentials

root@ceph02:~# ceph auth del osd.1
updated
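
As an optional sanity check, osd.1 should now be gone from both the auth database and the OSD map:

# Both commands should return nothing for osd.1 at this point
ceph auth list | grep osd.1
ceph osd dump | grep osd.1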

6. Unmount the old data partition

umount /dev/sdb1
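
The device is normally mounted at the OSD's data directory, so unmounting by mount point works just as well. A sketch assuming the default Jewel layout on ceph02:

# Find the mount, then unmount via the default OSD data directory
mount | grep ceph-1
umount /var/lib/ceph/osd/ceph-1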

7. Check the cluster's OSD status again

root@ceph02:~# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.14639 root default                                      
-2 0.14639     host ceph01                                   
 0 0.14639         osd.0        up  1.00000          1.00000 
-3       0     host ceph02    

8. Log in to the ceph-deploy node

root@ceph01:~# cd /root/my-cluster/
root@ceph01:~/my-cluster# 

9. Initialize the disk

ceph-deploy --overwrite-conf osd prepare ceph02:/dev/sdb1
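
If the disk was physically replaced or still carries leftover partitions, prepare may fail until the disk is wiped. ceph-deploy can do that, but note that zap destroys all data on the device:

# Wipe the partition table on ceph02's disk before preparing it (irreversible)
ceph-deploy disk zap ceph02:/dev/sdb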

10. Activate all OSDs again (again: all of them, not just the one that was down)

ceph-deploy osd activate ceph01:/dev/sdb1 ceph02:/dev/sdb1

11. Check the OSD tree and health status

root@ceph01:~/my-cluster# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.29279 root default                                      
-2 0.14639     host ceph01                                   
 0 0.14639         osd.0        up  1.00000          1.00000 
-3 0.14639     host ceph02                                   
 1 0.14639         osd.1        up  1.00000          1.00000 
root@ceph01:~/my-cluster# 
root@ceph01:~/my-cluster# ceph -s
    cluster ecacda71-af9f-46f9-a2a3-a35c9e51db9e
     health HEALTH_OK
     monmap e1: 1 mons at {ceph01=10.111.131.125:6789/0}
            election epoch 14, quorum 0 ceph01
     osdmap e150: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v9284: 64 pgs, 1 pools, 17 bytes data, 3 objects
            10310 MB used, 289 GB / 299 GB avail
                  64 active+clean

Only a status of HEALTH_OK means the cluster is healthy.
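
Right after the rebuilt OSD comes up, the cluster may spend a while backfilling PGs onto it before health returns to HEALTH_OK. Recovery progress can be followed live with:

ceph -w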

