1. PGs in the abnormal state active+undersized+degraded
Deployment environment: a self-built 3-node cluster with 5 OSDs in total. When the Ceph RadosGW service was deployed, the pool replica count defaulted to 3, and the cluster holds very little data. The cluster was stuck in the following state and could not recover on its own:
[root@k8s-01 ceph]# ceph -s
cluster:
id: dd1a1ab2-0f34-4936-bc09-87bd40ef5ca0
health: HEALTH_WARN
Degraded data redundancy: 183/4019 objects degraded (4.553%), 15 pgs degraded, 16 pgs undersized
services:
mon: 3 daemons, quorum k8s-01,k8s-02,k8s-03
mgr: k8s-01(active)
mds: cephfs-2/2/2 up {0=k8s-01=up:active,1=k8s-03=up:active}, 1 up:standby
osd: 5 osds: 5 up, 5 in
rgw: 3 daemons active
data:
pools: 6 pools, 288 pgs
objects: 1.92 k objects, 1020 MiB
usage: 7.5 GiB used, 342 GiB / 350 GiB avail
pgs: 183/4019 objects degraded (4.553%)
272 active+clean
15 active+undersized+degraded
1 active+undersized
What the two states above actually mean:
- undersized: the PG's current acting set contains fewer OSDs than the pool's replica count.
- degraded: after peering completes, the PG has detected that at least one PG replica holds objects that are inconsistent (and need to be synchronised/repaired), or the current acting set is smaller than the pool's replica count.
Taking our cluster as an example, a cluster in this degraded state can still read and write data normally. undersized means the affected PGs currently have only 2 live replicas, fewer than the configured replica count of 3, so they are flagged to show that the data is under-replicated.
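Besides ceph health detail (used below), one way to list only the problem PGs is, for example:
# List PGs that are stuck in the undersized state
ceph pg dump_stuck undersized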
Let's look at the detailed health information for our cluster:
[root@k8s-01 ceph]# ceph health detail
HEALTH_WARN Degraded data redundancy: 183/4017 objects degraded (4.556%), 15 pgs degraded, 16 pgs undersized
PG_DEGRADED Degraded data redundancy: 183/4017 objects degraded (4.556%), 15 pgs degraded, 16 pgs undersized
pg 4.0 is stuck undersized for 782.927699, current state active+undersized+degraded, last acting [0,3]
pg 4.1 is stuck undersized for 782.930776, current state active+undersized+degraded, last acting [1,4]
pg 4.2 is stuck undersized for 782.930141, current state active+undersized, last acting [1,3]
pg 4.3 is stuck undersized for 782.929428, current state active+undersized+degraded, last acting [0,4]
pg 4.4 is stuck undersized for 782.931202, current state active+undersized+degraded, last acting [1,3]
pg 4.5 is stuck undersized for 782.931627, current state active+undersized+degraded, last acting [3,0]
pg 4.6 is stuck undersized for 782.931584, current state active+undersized+degraded, last acting [0,4]
pg 4.7 is stuck undersized for 782.928895, current state active+undersized+degraded, last acting [1,3]
pg 6.0 is stuck undersized for 766.814237, current state active+undersized+degraded, last acting [0,4]
pg 6.1 is stuck undersized for 766.818367, current state active+undersized+degraded, last acting [1,3]
pg 6.2 is stuck undersized for 766.819767, current state active+undersized+degraded, last acting [1,3]
pg 6.3 is stuck undersized for 766.822230, current state active+undersized+degraded, last acting [4,2]
pg 6.4 is stuck undersized for 766.821061, current state active+undersized+degraded, last acting [1,4]
pg 6.5 is stuck undersized for 766.826778, current state active+undersized+degraded, last acting [3,0]
pg 6.6 is stuck undersized for 766.818134, current state active+undersized+degraded, last acting [1,3]
pg 6.7 is stuck undersized for 766.826165, current state active+undersized+degraded, last acting [3,2]
[root@k8s-01 ceph]#
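To dig into a single PG from this list, for example pg 4.0, its up set, acting set and recovery state can be inspected directly:
# Inspect pg 4.0's up/acting sets and recovery state
ceph pg 4.0 query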
Analysis: these two states usually appear together and indicate that some PGs cannot satisfy the configured number of replicas. From the output above, PG 4.0 has only two copies, on osd.0 and osd.3, so the likely cause is that the cluster cannot place the required third replica. Our cluster uses host as the CRUSH failure domain, so a replica count of 3 requires every node to have at least one OSD; checking the OSD tree shows that only two nodes actually carry OSDs and the third has none. As a workaround we set the pool replica count to 2 and check whether the cluster becomes healthy.
# Check the size of each Ceph OSD
[root@k8s-01 ceph]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 hdd 0.09769 1.00000 100 GiB 1.5 GiB 98 GiB 1.50 0.70 118
1 hdd 0.09769 1.00000 100 GiB 1.4 GiB 99 GiB 1.42 0.66 123
2 hdd 0.04880 1.00000 50 GiB 1.5 GiB 48 GiB 3.00 1.40 47
3 hdd 0.04880 1.00000 50 GiB 1.6 GiB 48 GiB 3.14 1.46 151
4 hdd 0.04880 1.00000 50 GiB 1.5 GiB 48 GiB 3.04 1.42 137
TOTAL 350 GiB 7.5 GiB 342 GiB 2.14
MIN/MAX VAR: 0.66/1.46 STDDEV: 0.83
# Check the Ceph OSD tree
[root@k8s-01 ceph]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.34177 root default
-3 0.24417 host k8s-01
0 hdd 0.09769 osd.0 up 1.00000 1.00000
1 hdd 0.09769 osd.1 up 1.00000 1.00000
2 hdd 0.04880 osd.2 up 1.00000 1.00000
-5 0.09760 host k8s-02
3 hdd 0.04880 osd.3 up 1.00000 1.00000
4 hdd 0.04880 osd.4 up 1.00000 1.00000
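Before touching the replica count, it is worth confirming that the CRUSH rule really uses host as the failure domain. A sketch, assuming the pools use the default rule named replicated_rule (ceph osd crush rule ls shows the actual names):
# Dump the CRUSH rule; the chooseleaf step's type should be "host"
ceph osd crush rule dump replicated_rule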
- Set the pool replica count to 2
[root@k8s-01 ceph]# ceph osd pool set default.rgw.log size 2
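Only default.rgw.log is shown here, but the osd dump below shows every pool at size 2, so the same setting is applied to the other pools as well. A sketch for doing this in one go, assuming every pool in this cluster should be 2-way replicated:
# Apply size 2 to every pool (assumption: all pools should use 2 replicas)
for p in $(ceph osd pool ls); do ceph osd pool set "$p" size 2; done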
[root@k8s-01 ceph]# ceph osd dump
epoch 130
fsid dd1a1ab2-0f34-4936-bc09-87bd40ef5ca0
created 2019-06-03 19:22:38.483687
modified 2019-07-08 18:14:50.342952
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 11
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release mimic
pool 1 'datapool' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 48 flags hashpspool stripe_width 0 application cephfs
pool 2 'metapool' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 52 flags hashpspool stripe_width 0 application cephfs
pool 3 '.rgw.root' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 121 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.control' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 127 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.meta' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 126 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.log' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 129 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
max_osd 5
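Instead of scanning the whole osd dump, the replica count of a single pool can also be read directly, for example:
# Check the size of one pool
ceph osd pool get default.rgw.log size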
- Check the cluster status
[root@k8s-01 ceph]# ceph -s
cluster:
id: dd1a1ab2-0f34-4936-bc09-87bd40ef5ca0
health: HEALTH_OK
services:
mon: 3 daemons, quorum k8s-01,k8s-02,k8s-03
mgr: k8s-01(active)
mds: cephfs-2/2/2 up {0=k8s-01=up:active,1=k8s-03=up:active}, 1 up:standby
osd: 5 osds: 5 up, 5 in
rgw: 3 daemons active
data:
pools: 6 pools, 288 pgs
objects: 1.92 k objects, 1019 MiB
usage: 7.5 GiB used, 342 GiB / 350 GiB avail
pgs: 288 active+clean
io:
client: 339 B/s rd, 11 KiB/s wr, 0 op/s rd, 1 op/s wr
At this point the cluster state has recovered.
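Note that lowering the replica count is only a workaround; the longer-term fix is to add at least one OSD on the third node so that a replica count of 3 can actually be satisfied with host as the failure domain. A hypothetical sketch, assuming the cluster was deployed with ceph-deploy and that /dev/sdb is a spare disk on k8s-03:
# Hypothetical: add an OSD on the empty node, then restore the replica count
ceph-deploy osd create --data /dev/sdb k8s-03
ceph osd pool set default.rgw.log size 3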
2. df -h hangs on a cephfs mount point
When a cephfs client session gets stuck, commands such as df -h against the mount point can hang indefinitely. The approach here is to find the stuck client's session on the MDS, evict it, then lazily unmount and remount the filesystem.
- Find the ID of the stuck client
[root@k8s-02 ~]# ceph daemon mds.k8s-02 session ls
{
"id": 44272,
"num_leases": 0,
"num_caps": 1,
"state": "open",
"request_load_avg": 0,
"uptime": 230.958432,
"replay_requests": 0,
"completed_requests": 0,
"reconnecting": false,
"inst": "client.44272 192.168.50.11:0/2167123153",
"client_metadata": {
"features": "00000000000001ff",
"ceph_sha1": "cbff874f9007f1869bfd3821b7e33b2a6ffd4988",
"ceph_version": "ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic (stable)",
"entity_id": "admin",
"hostname": "k8s-02",
"mount_point": "/mnt/ceph",
"pid": "1904636",
- Evict the session
# The argument after "evict" is the client ID found above
[root@k8s-02 ~]# ceph daemon mds.k8s-02 session evict 44272
- Lazily unmount the stuck mount point, then remount it
[root@k8s-02 ~]# umount -l /mnt/ceph
[root@k8s-02 ~]# ceph-fuse /mnt/ceph
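After remounting, the mount point should respond again; a quick check:
# Confirm that df no longer hangs on the remounted filesystem
df -h /mnt/ceph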