Ceph Cluster Shutdown and Restart


0 Change Log

1 Summary

2 Environment Information

3 Implementation

(1) Implementation

3.1.1 Pre-implementation checks

[root@cephtest001 ~]# su - cephadmin
Last login: Fri Feb 19 15:01:46 CST 2021 on pts/0
[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest004 (age 12d)
    mgr: cephtest001(active, since 9w), standbys: cephtest002, cephtest004
    osd: 13 osds: 13 up (since 13d), 13 in (since 3w); 1 remapped pgs
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.34k objects, 167 GiB
    usage:   518 GiB used, 26 TiB / 27 TiB avail
    pgs:     30/73014 objects misplaced (0.041%)
             399 active+clean
             1   active+clean+remapped

  io:
    client:   89 KiB/s rd, 99 op/s rd, 0 op/s wr
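Before setting any flags it can help to script the pre-check instead of eyeballing `ceph -s`. A minimal sketch follows; the helper name `osds_all_up` and the exact awk field positions are assumptions based on the output format shown above, not part of the original procedure:

```shell
# osds_all_up: read `ceph -s` output on stdin and succeed only when the osd
# line reports the same count for total, up, and in OSDs.
# NOTE: the field positions ($2/$4/$8) assume the "(since ...)" age annotations
# present in the output above; adjust them for other Ceph versions.
osds_all_up() {
    awk '/osd:/ { found = 1; if ($2 == $4 && $2 == $8) ok = 1 }
         END    { exit (found && ok) ? 0 : 1 }'
}

# Usage on the deploy node:
#   ceph -s | osds_all_up || { echo "not all OSDs are up/in, aborting"; exit 1; }
```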

3.1.2 Stop Ceph OSD cluster traffic (on the deploy node)

[cephadmin@cephtest001 ~]$ ceph osd set noout
noout is set
[cephadmin@cephtest001 ~]$ ceph osd set norecover
norecover is set
[cephadmin@cephtest001 ~]$ ceph osd set norebalance
norebalance is set
[cephadmin@cephtest001 ~]$ ceph osd set nobackfill
nobackfill is set
[cephadmin@cephtest001 ~]$ ceph osd set nodown
nodown is set
[cephadmin@cephtest001 ~]$ ceph osd set pause
pauserd,pausewr is set
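The six `ceph osd set` calls above can be collapsed into one loop. In this sketch the commands are only echoed so the order can be inspected first; drop the `echo` to actually run them on the deploy node:

```shell
# Maintenance flags, in the same order the runbook sets them
# (pause last, since it stops client reads and writes).
flags="noout norecover norebalance nobackfill nodown pause"

for f in $flags; do
    echo ceph osd set "$f"   # remove 'echo' to execute for real
done
```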

Check that the flags are set

[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            pauserd,pausewr,nodown,noout,nobackfill,norebalance,norecover flag(s) set
            1 pools have many more objects per pg than average

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest004 (age 12d)
    mgr: cephtest001(active, since 9w), standbys: cephtest002, cephtest004
    osd: 13 osds: 13 up (since 13d), 13 in (since 3w); 1 remapped pgs
         flags pauserd,pausewr,nodown,noout,nobackfill,norebalance,norecover
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.34k objects, 167 GiB
    usage:   518 GiB used, 26 TiB / 27 TiB avail
    pgs:     30/73014 objects misplaced (0.041%)
             399 active+clean
             1   active+clean+remapped

[cephadmin@cephtest001 ~]$
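The flag check can also be scripted rather than read by eye. A sketch, assuming a made-up helper name `check_flags` that simply greps `ceph -s` output for each expected flag:

```shell
# check_flags: read `ceph -s` output on stdin; print any expected maintenance
# flag that is missing and return non-zero if at least one is absent.
check_flags() {
    local status f missing=0
    status=$(cat)
    for f in pauserd pausewr nodown noout nobackfill norebalance norecover; do
        echo "$status" | grep -q "$f" || { echo "missing flag: $f"; missing=1; }
    done
    return "$missing"
}

# Usage: ceph -s | check_flags || echo "not all flags are set"
```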

Restore (unset the flags once the nodes are back up)

[cephadmin@cephtest001 ~]$ ceph osd unset noout
noout is unset
[cephadmin@cephtest001 ~]$ ceph osd unset norecover
norecover is unset
[cephadmin@cephtest001 ~]$ ceph osd unset norebalance
norebalance is unset
[cephadmin@cephtest001 ~]$ ceph osd unset nobackfill
nobackfill is unset
[cephadmin@cephtest001 ~]$ ceph osd unset nodown
nodown is unset
[cephadmin@cephtest001 ~]$ ceph osd unset pause
pauserd,pausewr is unset
[cephadmin@cephtest001 ~]$
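After unsetting the flags it can take a while for recovery traffic to settle. Instead of re-running `ceph -s` by hand, the health token can be extracted and polled; `health_of` is a hypothetical helper name, not a Ceph command:

```shell
# health_of: print the health status token (HEALTH_OK / HEALTH_WARN /
# HEALTH_ERR) from `ceph -s` output supplied on stdin.
health_of() {
    awk '/health:/ { print $2; exit }'
}

# Polling sketch for the deploy node -- note the cluster in this runbook stays
# HEALTH_WARN for an unrelated pg-count warning, so this loop would not
# terminate there; adapt the condition to your cluster:
#   until ceph -s | health_of | grep -qx HEALTH_OK; do sleep 10; done
```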

Check that the flags are cleared

[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average
            3 monitors have not enabled msgr2

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest004 (age 2h)
    mgr: cephtest001(active, since 2h), standbys: cephtest002, cephtest004
    osd: 13 osds: 13 up (since 2h), 13 in (since 3w); 1 remapped pgs
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.34k objects, 167 GiB
    usage:   518 GiB used, 26 TiB / 27 TiB avail
    pgs:     30/73014 objects misplaced (0.041%)
             399 active+clean
             1   active+clean+remapped

[cephadmin@cephtest001 ~]$
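Note the new warning after the restart: `3 monitors have not enabled msgr2`. On Nautilus and later releases this is usually cleared by enabling the v2 messenger on the monitors; the command below is only echoed here for illustration, and you should confirm it applies to your Ceph version before running it on the deploy node:

```shell
# Enable the msgr2 protocol on all monitors (Nautilus+); echoed here rather
# than executed, since it must be run against a live cluster.
cmd="ceph mon enable-msgr2"
echo "$cmd"
```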

