016 Ceph Cluster Management, Part 2


1. Ceph Cluster Running Status

Cluster health states: HEALTH_OK, HEALTH_WARN, HEALTH_ERR

1.1 Common status query commands

[root@ceph2 ~]# ceph health detail

HEALTH_OK

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: ceph4(active), standbys: ceph2, ceph3
    mds: cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd: 9 osds: 9 up, 9 in; 32 remapped pgs
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1764 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28 active+clean+remapped

ceph -w shows the same information, but it stays in watch mode and keeps printing cluster updates interactively.
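Besides ceph health and ceph -s, a few other read-only commands are commonly used to check cluster status. The short sketch below is illustrative only and assumes it is run on a node with an admin keyring:

# overall capacity and per-pool usage
ceph df

# one-line summaries of OSD, MON and PG state
ceph osd stat
ceph mon stat
ceph pg stat

# OSD topology as CRUSH sees it
ceph osd tree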

1.2 Cluster flags

        noup: when an OSD starts, it normally marks itself up on the MONs. With this flag set, OSDs will not be automatically marked up.

        nodown: when an OSD stops, the MONs mark it down. With this flag set, the MONs will not mark stopped OSDs down. Setting noup and nodown together can guard against network flapping.

        noout: with this flag set, the MONs will not remove any OSD from the CRUSH map (mark it out). Set it before OSD maintenance so that CRUSH does not automatically rebalance data while the OSD is stopped; clear the flag once the OSD is back up (a typical maintenance sequence is sketched after this list).

        noin: with this flag set, data will not be automatically allocated onto OSDs (newly started OSDs are not marked in).

        norecover: setting this flag disables all cluster recovery operations. It can be set during maintenance or downtime.

        nobackfill: disables data backfilling.

        noscrub: disables scrubbing. Scrubbing a PG briefly affects OSD performance, and on a low-bandwidth cluster an OSD that responds too slowly during a scrub may be marked down; this flag can be used to prevent that.

        nodeep-scrub: disables deep scrubbing.

        norebalance: disables data rebalancing. It can be set during cluster maintenance or downtime.

        pause: with this flag set, the cluster stops serving reads and writes, but OSD self-checks are unaffected.

        full: marks the cluster as full; all writes are rejected, but reads still work.
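As mentioned for noout above, a common maintenance pattern is to freeze data movement before stopping an OSD and to restore normal behaviour afterwards. The sequence below is only a sketch of that pattern, not a mandatory procedure; pick the flags that match your maintenance window:

# before maintenance: keep MONs from marking OSDs out and stop data movement
ceph osd set noout
ceph osd set norebalance
ceph osd set norecover

# ... stop the OSD, perform the maintenance, start the OSD again ...

# after maintenance: clear the flags so normal recovery resumes
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset noout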

1.3 Cluster flag operations

Flags can only be set for the cluster as a whole, not for an individual OSD.

Set the noout flag:

[root@ceph2 ~]# ceph osd set noout

noout is set

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_WARN
            noout flag(s) set

  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: ceph4(active), standbys: ceph2, ceph3
    mds: cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd: 9 osds: 9 up, 9 in; 32 remapped pgs
         flags noout
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1764 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28 active+clean+remapped

  io:
    client: 409 B/s rd, 0 op/s rd, 0 op/s wr

[root@ceph2 ~]# ceph osd unset noout

noout is unset

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: ceph4(active), standbys: ceph2, ceph3
    mds: cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd: 9 osds: 9 up, 9 in; 32 remapped pgs
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1764 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28 active+clean+remapped

  io:
    client: 2558 B/s rd, 0 B/s wr, 2 op/s rd, 0 op/s wr

[root@ceph2 ~]# ceph osd set full

full is set

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_WARN
            full flag(s) set

  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: ceph4(active), standbys: ceph2, ceph3
    mds: cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd: 9 osds: 9 up, 9 in; 32 remapped pgs
         flags full
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1768 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28 active+clean+remapped

  io:
    client: 2558 B/s rd, 0 B/s wr, 2 op/s rd, 0 op/s wr

[root@ceph2 ~]# rados -p ssdpool put testfull /etc/ceph/ceph.conf

2019-03-27 21:59:14.250208 7f6500913e40 0 client.65175.objecter FULL, paused modify 0x55d690a412b0 tid 0

[root@ceph2 ~]# ceph osd unset full

full is unset

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: ceph4(active), standbys: ceph2, ceph3
    mds: cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd: 9 osds: 9 up, 9 in; 32 remapped pgs
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1765 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28 active+clean+remapped

  io:
    client: 409 B/s rd, 0 op/s rd, 0 op/s wr

[root@ceph2 ~]# rados -p ssdpool put testfull /etc/ceph/ceph.conf

[root@ceph2 ~]# rados -p ssdpool ls

testfull
test

2. Restricting Pool Configuration Changes

2.1 Relevant options

Prevent pools from being deleted:

osd_pool_default_flag_nodelete

Prevent a pool's pg_num and pgp_num from being changed:

osd_pool_default_flag_nopgchange

Prevent a pool's size and min_size from being changed:

osd_pool_default_flag_nosizechange
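A single option can also be read back from a running OSD through its admin socket with config get; the example below is a sketch that assumes osd.0 runs on the local node:

# read one option from the local osd.0 admin socket
ceph daemon osd.0 config get osd_pool_default_flag_nodelete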

2.2 Hands-on

[root@ceph2 ~]# ceph daemon osd.0  config show|grep osd_pool_default_flag

  "osd_pool_default_flag_hashpspool": "true",
  "osd_pool_default_flag_nodelete": "false",
  "osd_pool_default_flag_nopgchange": "false",
  "osd_pool_default_flag_nosizechange": "false",
  "osd_pool_default_flags": "0",

 

[root@ceph2 ~]# ceph tell osd.* injectargs --osd_pool_default_flag_nodelete true

[root@ceph2 ~]# ceph daemon osd.0 config show|grep osd_pool_default_flag

  "osd_pool_default_flag_hashpspool": "true",
  "osd_pool_default_flag_nodelete": "true",
  "osd_pool_default_flag_nopgchange": "false",
  "osd_pool_default_flag_nosizechange": "false",
  "osd_pool_default_flags": "0",

[root@ceph2 ~]# ceph osd pool delete ssdpool  ssdpool yes-i-really-really-mean-it

Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool ssdpool.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.   # the deletion is refused

Set it back to false:

[root@ceph2 ~]# ceph tell osd.* injectargs --osd_pool_default_flag_nodelete false

[root@ceph2 ~]# ceph daemon osd.0 config show|grep osd_pool_default_flag

"osd_pool_default_flag_hashpspool": "true",
"osd_pool_default_flag_nodelete": "true",                   #依然顯示為ture
"osd_pool_default_flag_nopgchange": "false",
"osd_pool_default_flag_nosizechange": "false",
"osd_pool_default_flags": "0"

2.3 Changing it via the configuration file

On ceph1, edit /etc/ceph/ceph.conf and set:

osd_pool_default_flag_nodelete = false

[root@ceph1 ~]# ansible all -m copy -a 'src=/etc/ceph/ceph.conf dest=/etc/ceph/ceph.conf owner=ceph group=ceph mode=0644'

[root@ceph1 ~]# ansible mons -m shell -a ' systemctl restart ceph-mon.target'

[root@ceph1 ~]# ansible mons -m shell -a ' systemctl restart ceph-osd.target'

[root@ceph2 ~]# ceph daemon osd.0 config show|grep osd_pool_default_flag

"osd_pool_default_flag_hashpspool": "true",
"osd_pool_default_flag_nodelete": "false",
"osd_pool_default_flag_nopgchange": "false",
"osd_pool_default_flag_nosizechange": "false",
 "osd_pool_default_flags": "0",

Delete ssdpool:

[root@ceph2 ~]# ceph osd pool delete ssdpool ssdpool --yes-i-really-really-mean-it

The pool was deleted successfully.

3. Understanding PGs

3.1 PG states

        Creating: the PG is being created. This usually appears when a pool is created or when a pool's PG count is changed.

        Active: the PG is active and can serve reads and writes normally.

        Clean: all objects in the PG have been replicated the required number of times.

        Down: the PG is offline.

        Replay: after an OSD failure, the PG is waiting for clients to replay their operations.

        Splitting: the PG is being split, usually after a pool's PG count has been increased; existing PGs are split and some of their objects are moved to the new PGs.

        Scrubbing: the PG is being checked for inconsistencies.

        Degraded: some objects in the PG do not yet have the required number of replicas.

        Inconsistent: the PG's replicas are inconsistent with each other. When this happens, ceph pg repair can be used to repair the inconsistency.

        Peering: peering is the process, driven by the primary OSD, in which all OSDs holding replicas of the PG reach agreement on the state of every object and all metadata in the PG. Only after peering completes will the primary OSD accept client writes.

        Repair: the PG is being checked, and any inconsistencies found will be repaired where possible.

        Recovering: the PG is migrating or synchronizing objects and their replicas, typically as part of rebalancing after an OSD has gone down.

        Backfill: when a new OSD joins the cluster, CRUSH reassigns some of the existing PGs to it; this data movement is called backfilling.

        Backfill-wait: the PG is waiting for backfilling to start.

        Incomplete: the PG log is missing data for a critical interval. This happens when an OSD holding information the PG needs is unavailable.

        Stale: the PG is in an unknown state: the monitors have not received an update for it since the PG map changed. This state appears right after cluster startup, before peering finishes.

        Remapped: when a PG's acting set changes, data is migrated from the old acting set to the new one. The new primary OSD needs some time before it can serve requests, so the old primary keeps serving until the PG migration completes; during that period the PG is shown as remapped. (The sketch after this list shows how to display PG states on a live cluster.)
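To see these states on a live cluster, the following read-only commands can be used; this is a minimal sketch, and the exact output columns vary slightly between Ceph releases:

# one-line summary of how many PGs are in which states
ceph pg stat

# per-PG state, up/acting sets and primaries (brief form)
ceph pg dump pgs_brief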

3.2 Mapping objects to PGs

[root@ceph2 ~]# ceph osd map test test

osdmap e288 pool 'test' (16) object 'test' -> pg 16.40e8aab5 (16.15) -> up ([5,6], p5) acting ([5,6,0], p5)
The object test maps to PG 16.15 and is stored on three OSDs (osd.5, osd.6 and osd.0), with osd.5 as the primary OSD.
OSDs that are up stay in the PG's up set and acting set. Once the primary OSD goes down, it is first removed from the up set and then from the acting set, and a secondary OSD is promoted to primary. Ceph recovers the PGs that were on the failed OSD onto a new OSD and then adds that OSD to the up and acting sets to maintain high availability.
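To look at that PG directly, its id can be passed to ceph pg query. The id 16.15 below is the one reported by the mapping above; the command prints a long JSON document with the up/acting sets, peering history and recovery state:

# detailed state of PG 16.15 (up/acting sets, peering and recovery info)
ceph pg 16.15 query

# list all PGs of pool 'test' together with their states and acting sets
ceph pg ls-by-pool test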

3.3 Managing stuck PGs

        If a PG stays in one of the following states for too long (mon_pg_stuck_threshold, 300 s by default), the MONs mark the PG as stuck:

        inactive: the PG has a peering problem

        unclean: the PG ran into a problem during failure recovery

        stale: no OSD is reporting the PG; probably all of its OSDs are down and out

        undersized: the PG does not have enough OSDs to store the number of replicas it should have

        By default Ceph recovers automatically, but if automatic recovery does not succeed, the cluster stays in HEALTH_WARN or HEALTH_ERR.

        If all OSDs of a particular PG are down and out, the PG is marked stale. To resolve this, one of those OSDs must come back with a usable copy of the PG; otherwise the PG remains unavailable.

        Ceph can declare an OSD or a PG lost, which also means the corresponding data is lost.

        Note that an OSD cannot run without its journal; if the journal is lost, the OSD stops.
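The stuck threshold can be read back at runtime, and dump_stuck accepts an optional state filter; the sketch below assumes mon.ceph2's admin socket is available on that node:

# show the current stuck threshold (default 300 seconds)
ceph daemon mon.ceph2 config get mon_pg_stuck_threshold

# list only PGs stuck in a specific state
ceph pg dump_stuck stale
ceph pg dump_stuck inactive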

3.4 Operations on stuck PGs

Check PGs that are in a stuck state:

[root@ceph2 ceph]# ceph pg dump_stuck

ok
PG_STAT STATE         UP    UP_PRIMARY ACTING ACTING_PRIMARY 
17.5    stale+peering [0,2]          0  [0,2]              0 
17.4    stale+peering [2,0]          2  [2,0]              2 
17.3    stale+peering [2,0]          2  [2,0]              2 
17.2    stale+peering [2,0]          2  [2,0]              2 
17.1    stale+peering [0,2]          0  [0,2]              0 
17.0    stale+peering [2,0]          2  [2,0]              2 
17.1f   stale+peering [2,0]          2  [2,0]              2 
17.1e   stale+peering [0,2]          0  [0,2]              0 
17.1d   stale+peering [2,0]          2  [2,0]              2 
17.1c   stale+peering [0,2]          0  [0,2]              0 
17.6    stale+peering [2,0]          2  [2,0]              2 
17.11   stale+peering [0,2]          0  [0,2]              0 
17.7    stale+peering [2,0]          2  [2,0]              2 
17.8    stale+peering [2,0]          2  [2,0]              2 
17.13   stale+peering [2,0]          2  [2,0]              2 
17.9    stale+peering [0,2]          0  [0,2]              0 
17.10   stale+peering [2,0]          2  [2,0]              2 
17.a    stale+peering [0,2]          0  [0,2]              0 
17.15   stale+peering [2,0]          2  [2,0]              2 
17.b    stale+peering [2,0]          2  [2,0]              2 
17.12   stale+peering [0,2]          0  [0,2]              0 
17.c    stale+peering [2,0]          2  [2,0]              2 
17.17   stale+peering [0,2]          0  [0,2]              0 
17.d    stale+peering [2,0]          2  [2,0]              2 
17.14   stale+peering [2,0]          2  [2,0]              2 
17.e    stale+peering [0,2]          0  [0,2]              0 
17.19   stale+peering [0,2]          0  [0,2]              0 
17.f    stale+peering [2,0]          2  [2,0]              2 
17.16   stale+peering [0,2]          0  [0,2]              0 
17.18   stale+peering [0,2]          0  [0,2]              0 
17.1a   stale+peering [2,0]          2  [2,0]              2 
17.1b   stale+peering [2,0]          2  [2,0]              2
[root@ceph2 ceph]# ceph osd blocked-by
osd num_blocked 
  0          19 
  2          13 

Check which OSDs are blocking PGs stuck in the peering state:

ceph osd blocked-by

Check the state of a specific PG:

ceph pg dump | grep <pgid>

Declare a PG lost:

ceph pg <pgid> mark_unfound_lost revert|delete

Declare an OSD lost (the OSD must be down and out):

ceph osd lost <osdid> --yes-i-really-mean-it
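As a concrete illustration, the commands below use PG 17.5 from the dump_stuck output above and osd.2 as examples; substitute your own IDs, and treat both declarations as destructive last resorts that acknowledge data loss:

# inspect the problem PG before declaring anything lost
ceph pg 17.5 query

# give up on unfound objects in that PG, rolling them back to an older version
ceph pg 17.5 mark_unfound_lost revert

# declare a down+out OSD permanently lost
ceph osd lost 2 --yes-i-really-mean-it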


Author's note: the content of this article is largely based on material from teacher Yan Wei of Yutian Education, and all operations were verified by the author in a lab. Readers who wish to repost it should first obtain permission from Yutian Education (http://www.yutianedu.com/) or from Mr. Yan himself (https://www.cnblogs.com/breezey/). Thank you!

 

