I. PG issues
1. xx objects unfound
- Problem:
dmesg shows disk read/write errors; some objects are damaged (stuck in the objects unfound state) and the cluster is in HEALTH_ERR:
root@node1101:~# ceph health detail
HEALTH_ERR noscrub,nodeep-scrub flag(s) set; 13/409798 objects unfound (0.003%); 17 stuck requests are blocked > 4096 sec. Implicated osds 38
OSDMAP_FLAGS noscrub,nodeep-scrub flag(s) set
OBJECT_UNFOUND 13/409798 objects unfound (0.003%)
pg 5.309 has 1 unfound objects
pg 5.2da has 1 unfound objects
pg 5.2c9 has 1 unfound objects
pg 5.1e2 has 1 unfound objects
pg 5.6a has 1 unfound objects
pg 5.120 has 1 unfound objects
pg 5.148 has 1 unfound objects
pg 5.14b has 1 unfound objects
pg 5.160 has 1 unfound objects
pg 5.35b has 1 unfound objects
pg 5.39c has 1 unfound objects
pg 5.3ad has 1 unfound objects
REQUEST_STUCK 17 stuck requests are blocked > 4096 sec. Implicated osds 38
17 ops are blocked > 67108.9 sec
osd.38 has stuck requests > 67108.9 sec
- Fix:
Mark the unfound objects in each affected PG as lost and delete them, e.g.: ceph pg {pgid} mark_unfound_lost delete
Note: to batch-process every affected PG, a loop like the following can be used:
for i in $(ceph health detail | grep pg | awk '{print $2}'); do ceph pg "$i" mark_unfound_lost delete; done
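Since mark_unfound_lost is destructive, the pgid extraction that drives the loop can be rehearsed offline first against a captured transcript; a minimal sketch using a few sample lines from the health output above:

```shell
# Sample lines captured from `ceph health detail` (taken from the
# transcript above); the same grep/awk pipeline feeds the batch loop.
sample='pg 5.309 has 1 unfound objects
pg 5.2da has 1 unfound objects
pg 5.2c9 has 1 unfound objects'

# Extract the pgid (second field) from each "pg ..." line.
pgids=$(printf '%s\n' "$sample" | grep '^pg ' | awk '{print $2}')
printf '%s\n' "$pgids"
```

On a live cluster, each extracted id would then be passed to ceph pg {pgid} mark_unfound_lost delete.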
2. Reduced data availability: xx pgs inactive
- Problem:
A disk developed read/write errors and its OSD would not start; after force-replacing the failed disk with a new one and adding it back to the cluster, PGs went inactive (unknown):
root@node1106:~# ceph -s
cluster:
id: 7f1aa879-afbb-4b19-9bc3-8f55c8ecbbb4
health: HEALTH_WARN
4 clients failing to respond to capability release
3 MDSs report slow metadata IOs
1 MDSs report slow requests
3 MDSs behind on trimming
noscrub,nodeep-scrub flag(s) set
Reduced data availability: 25 pgs inactive
6187 slow requests are blocked > 32 sec. Implicated osds 41
services:
mon: 3 daemons, quorum node1101,node1102,node1103
mgr: node1103(active), standbys: node1102, node1101
mds: ceph-3/3/3 up {0=node1103=up:active,1=node1102=up:active,2=node1104=up:active}, 2 up:standby
osd: 48 osds: 48 up, 48 in
flags noscrub,nodeep-scrub
data:
pools: 6 pools, 2888 pgs
objects: 474.95k objects, 94.5GiB
usage: 267GiB used, 202TiB / 202TiB avail
pgs: 0.866% pgs unknown
2863 active+clean
25 unknown
root@node1101:~# ceph pg dump_stuck inactive
ok
PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
1.166 unknown [] -1 [] -1
1.163 unknown [] -1 [] -1
1.26f unknown [] -1 [] -1
1.228 unknown [] -1 [] -1
1.213 unknown [] -1 [] -1
1.12f unknown [] -1 [] -1
1.276 unknown [] -1 [] -1
1.264 unknown [] -1 [] -1
1.32a unknown [] -1 [] -1
1.151 unknown [] -1 [] -1
1.20d unknown [] -1 [] -1
1.298 unknown [] -1 [] -1
1.306 unknown [] -1 [] -1
1.2f7 unknown [] -1 [] -1
1.2c8 unknown [] -1 [] -1
1.223 unknown [] -1 [] -1
1.204 unknown [] -1 [] -1
1.374 unknown [] -1 [] -1
1.b5 unknown [] -1 [] -1
1.b6 unknown [] -1 [] -1
1.2b unknown [] -1 [] -1
1.9f unknown [] -1 [] -1
1.2ac unknown [] -1 [] -1
1.78 unknown [] -1 [] -1
1.1c3 unknown [] -1 [] -1
1.1a unknown [] -1 [] -1
1.d9 unknown [] -1 [] -1
- Fix:
Force-recreate the unknown PGs (this recreates them empty, discarding any data they held): ceph osd force-create-pg {pgid}
Note: to batch-recreate every unknown PG:
for i in $(ceph pg dump_stuck inactive | awk '{if (NR>2) print $1}'); do ceph osd force-create-pg "$i"; done
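As with the unfound-object loop, the awk filter (NR>2 skips the "ok" line and the header row) can be checked against a captured transcript before running the destructive loop; a minimal sketch:

```shell
# Sample lines captured from `ceph pg dump_stuck inactive`
# (taken from the transcript above).
sample='ok
PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
1.166 unknown [] -1 [] -1
1.163 unknown [] -1 [] -1'

# NR>2 drops the "ok" line and the header; $1 is the pgid.
pgids=$(printf '%s\n' "$sample" | awk '{if (NR>2) print $1}')
printf '%s\n' "$pgids"
```

On a live cluster, each id would then be passed to ceph osd force-create-pg {pgid}.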
II. OSD issues
1. OSD ports conflict with another service's fixed bind port
- Problem:
The OSDs start first and occupy ports that another service binds to fixed numbers, so that service fails to bind and cannot start.
- Fix:
The other service involves too many components to re-port safely without missing something, so instead move the OSD port ranges out of its way.
- Change the port range OSDs listen on (server side)
Ceph limits the listening-port range of the osd and mds daemons via the ms_bind_port_min and ms_bind_port_max options; the default range is 6800:7300.
To use 9600:10000 instead, append both options to the [global] section of /etc/ceph/ceph.conf:
[root@node111 ~]# cat /etc/ceph/ceph.conf | grep ms_bind_port
ms_bind_port_min = 9600
ms_bind_port_max = 10000
[root@node111 ~]#
[root@node111 ~]# ceph --show-config | grep ms_bind_port
ms_bind_port_max = 10000
ms_bind_port_min = 9600
- Change the port range OSDs connect from (client side)
Outgoing OSD connections use randomly assigned source ports, bounded by the kernel's ephemeral-port range.
The default range is 1024:65000; change it to 7500:65000.
-- default range: 1024:65000
[root@node111 ~]# cat /proc/sys/net/ipv4/ip_local_port_range
1024 65000
-- change the range to 7500:65000
[root@node111 ~]# sed -i 's/net.ipv4.ip_local_port_range=1024 65000/net.ipv4.ip_local_port_range=7500 65000/g' /etc/sysctl.conf
[root@node111 ~]# sysctl -p
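Note that the sed substitution silently does nothing if /etc/sysctl.conf does not contain that exact line. A minimal sketch rehearsing the edit on a scratch file (the temp file is an illustration, not a real path):

```shell
# Rehearse the sysctl.conf edit on a scratch copy before touching the
# real file; if the pattern does not match, the file is left unchanged
# and the substitution is a silent no-op.
tmpconf=$(mktemp)
echo 'net.ipv4.ip_local_port_range=1024 65000' > "$tmpconf"
sed -i 's/net.ipv4.ip_local_port_range=1024 65000/net.ipv4.ip_local_port_range=7500 65000/g' "$tmpconf"
result=$(cat "$tmpconf")
rm -f "$tmpconf"
echo "$result"
```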
2. OSD fails to come back online after a disk hot-plug
- Problem:
On a cluster deployed with BlueStore, hot-unplugging an OSD's disk and plugging it back in does not restore the OSD's LVM mapping automatically, so the OSD cannot come back online.
- Fix:
- Find the failed OSD's uuid:
ceph osd dump | grep {osd-id} | awk '{print $NF}'
Example: find the uuid of osd.0
[root@node127 ~]# ceph osd dump | grep osd.0 | awk '{print $NF}'
57377809-fba4-4389-8703-f9603f16e60d
- Find the failed OSD's LVM path:
ls /dev/mapper/ | grep `ceph osd dump | grep {osd-id} | awk '{print $NF}' | sed 's/-/--/g'`
Example: find the LVM path from the uuid
Note: the LVM path escapes the uuid, so sed 's/-/--/g' is needed to replace each - with --
[root@node127 ~]# ls /dev/mapper/ | grep `ceph osd dump | grep osd.0 | awk '{print $NF}' | sed 's/-/--/g'`
ceph--3182c42e--f8d8--4c13--ad92--987463d626c8-osd--block--57377809--fba4--4389--8703--f9603f16e60d
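The uuid-to-mapper-name transform can be verified on its own: device-mapper escapes every '-' inside an LV or VG name as '--', which is what the sed expression reproduces. A minimal sketch using the uuid from the example above:

```shell
# uuid of osd.0 from the example above.
uuid='57377809-fba4-4389-8703-f9603f16e60d'

# device-mapper doubles each '-' inside LV/VG names, so this yields the
# fragment to grep for under /dev/mapper/.
mapper_fragment=$(echo "$uuid" | sed 's/-/--/g')
echo "$mapper_fragment"
```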
- Remove the failed OSD's stale LVM mapping
dmsetup remove /dev/mapper/{lvm-path}
Example: remove the stale LVM mapping
[root@node127 ~]# dmsetup remove /dev/mapper/ceph--3182c42e--f8d8--4c13--ad92--987463d626c8-osd--block--57377809--fba4--4389--8703--f9603f16e60d
- Re-activate all LVM volume groups
Note: after this, the failed OSD's LVM devices are visible again
vgchange -ay
- Restart the OSD to bring it back online
systemctl start ceph-volume@lvm-{osd-id}-{osd-uuid}
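Putting the steps above together, a sketch for one OSD (osd.0 and its uuid are taken from the earlier examples; the cluster-side commands are shown as comments since they need a live node):

```shell
# Values from the earlier examples (substitute your own osd id/uuid).
osd_id=0
osd_uuid='57377809-fba4-4389-8703-f9603f16e60d'

# The systemd unit follows the ceph-volume@lvm-{osd-id}-{osd-uuid} pattern.
unit="ceph-volume@lvm-${osd_id}-${osd_uuid}"
echo "$unit"

# On the live node, the sequence would then be:
#   dmsetup remove /dev/mapper/<lvm-path>   # drop the stale mapping
#   vgchange -ay                            # re-activate LVM volume groups
#   systemctl start "$unit"                 # bring the OSD back online
```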
III. Cluster issues
1. clock skew detected on mon.node2
- Problem:
The clock drift between mon nodes is too large, raising the warning clock skew detected on mon.node2.
- Fix:
1. Check the time offset between the mon nodes and synchronize cluster time with chronyd.
2. Raise the cluster thresholds: set mon_clock_drift_allowed to 2 and mon_clock_drift_warn_backoff to 30.
sed -i "2 amon_clock_drift_allowed = 2" /etc/ceph/ceph.conf
sed -i "3 amon_clock_drift_warn_backoff = 30" /etc/ceph/ceph.conf
ceph tell mon.* injectargs '--mon_clock_drift_allowed 2'
ceph tell mon.* injectargs '--mon_clock_drift_warn_backoff 30'
Note: the relevant defaults are as follows:
[root@node147 ~]# ceph --show-config | grep mon_clock_drift
mon_clock_drift_allowed = 0.050000
-- mons are considered skewed once their clocks drift more than 0.05 s apart
mon_clock_drift_warn_backoff = 5.000000
-- a warning is raised after the drift has been seen 5 times
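For reference, after the sed lines above the two options land in /etc/ceph/ceph.conf roughly as follows (a sketch, assuming the [global] header sits at the top of the file, which is what the line-number-based sed commands rely on):

```
[global]
mon_clock_drift_allowed = 2
mon_clock_drift_warn_backoff = 30
```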