health: HEALTH_WARN
too few PGs per OSD error
ceph -s
cluster:
id: da54ea6a-111a-434c-b78e-adad1ac66abb
health: HEALTH_WARN
too few PGs per OSD (10 < min 30)
services:
mon: 3 daemons, quorum master1,master2,master3
mgr: master1(active), standbys: master2
osd: 3 osds: 3 up, 3 in
data:
pools: 1 pools, 10 pgs
objects: 17 objects, 24 MiB
usage: 3.0 GiB used, 24 GiB / 27 GiB avail
pgs: 10 active+clean
From the output above, the warning says the number of PGs per OSD is below the minimum of 30. The pool has 10 PGs and, since this is a 2-replica setup with 3 OSDs, each OSD holds only about 10 × 2 / 3 ≈ 7 PGs, well below the minimum of 30, hence the warning.
If data is written to or read from the cluster in this state, the cluster can appear to hang and stop responding to I/O, and a large number of OSDs may go down.
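As a rough sizing sketch (this assumes the common rule of thumb of about 100 PGs per OSD, rounded to a power of two; below I simply raise pg_num to 50, which is already enough to clear the warning):
# target pg_num ≈ (OSD count × target PGs per OSD) / replica size, rounded to a power of two
# here: 3 OSDs × 100 / 2 replicas = 150, so 128 would be a typical choice
ceph osd pool get rbd size   # confirm the pool's replica count
ceph osd df                  # the PGS column shows how many PGs each OSD currently holds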
- Increase the number of PGs for the default pool rbd
ceph osd pool set rbd pg_num 50
At this point ceph -s will complain that pg_num is greater than pgp_num, so pgp_num needs to be raised as well:
ceph osd pool set rbd pgp_num 50
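To confirm that both values took effect (standard pool queries; the values in the comments assume the two commands above succeeded):
ceph osd pool get rbd pg_num    # pg_num: 50
ceph osd pool get rbd pgp_num   # pgp_num: 50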
Check again:
ceph -s
cluster:
id: da54ea6a-111a-434c-b78e-adad1ac66abb
health: HEALTH_WARN
application not enabled on 1 pool(s)
- Warning: application not enabled
[root@master1 ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool 'rbd'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
[root@master1 ~]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
Following the hint above, pick the application that matches how the pool is used: cephfs, rbd, or rgw. Here I am using rbd block storage.
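To double-check which application a pool has enabled (a standard pool query; the output shown is what I would expect for this pool):
ceph osd pool application get rbd
# {
#     "rbd": {}
# }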
Check:
[root@master1 ~]# ceph -s
cluster:
id: da54ea6a-111a-434c-b78e-adad1ac66abb
health: HEALTH_OK
services:
mon: 3 daemons, quorum master1,master2,master3
mgr: master1(active), standbys: master2
osd: 3 osds: 3 up, 3 in
data:
pools: 1 pools, 50 pgs
objects: 17 objects, 24 MiB
usage: 3.0 GiB used, 24 GiB / 27 GiB avail
pgs: 50 active+clean
- HEALTH_WARN application not enabled on pool '.rgw.root'
When creating the rgw object store, the following error is reported:
[root@master1 ceph-cluster]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool '.rgw.root'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
[root@master1 ceph-cluster]# ceph osd pool application enable .rgw.root rgw
enabled application 'rgw' on pool '.rgw.root'
ceph -s now shows the cluster has returned to HEALTH_OK.
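rgw normally creates several pools (.rgw.root, default.rgw.control, default.rgw.meta, default.rgw.log, ...), and each of them can raise the same warning. A small loop like the sketch below, which assumes all of those pool names contain "rgw", enables the application on every one of them at once:
for pool in $(rados lspools | grep rgw); do
    ceph osd pool application enable "$pool" rgw
done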
- Handling the "too many PGs per OSD" warning
Since the Luminous (L) release of Ceph, the default value of mon_max_pg_per_osd has also changed from 300 to 200.
Edit the ceph.conf file:
[root@master1 ceph-cluster]# cat ceph.conf
[global]
......
mon_max_pg_per_osd = 1000
Append this line under the [global] section.
Push the config to each node:
ceph-deploy --overwrite-conf config push master1 master2 master3
Restart the mgr and the mon on each node:
systemctl restart ceph-mgr@master1
systemctl restart ceph-mon@master1
systemctl restart ceph-mon@master2
systemctl restart ceph-mon@master3
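To verify that the running monitor has actually picked up the new limit (this queries the admin socket; the daemon name matches the master1 node used in this example):
ceph daemon mon.master1 config get mon_max_pg_per_osd
# {
#     "mon_max_pg_per_osd": "1000"
# }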
ceph -s now shows the cluster has returned to HEALTH_OK.
If ceph.conf is left unmodified, all of these parameters stay at their defaults; in that case creating an rgw user can also fail with the following error:
rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
[root@master1 ceph-cluster]# radosgw-admin user create --uid=radosgw --display-name='radosgw'
{
"user_id": "radosgw",
"display_name": "radosgw",
"email": "",
"suspended": 0,
"max_buckets": 1000, #最大值1000已生效
"auid": 0,
"subusers": [],
......
The fix is the same as above: change the setting in the config file and restart the daemons.
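A quick way to see whether the limit really is exceeded is to compare the per-pool pg_num values against the per-OSD PG counts (both are standard queries; the output depends on your pools):
ceph osd pool ls detail | grep -o 'pg_num [0-9]*'   # pg_num of every pool
ceph osd df                                         # PGS column: current number of PGs on each OSD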
Handling ceph ERROR problems
ERROR reported while installing ceph
- If an ERROR is reported while installing ceph, there are usually two causes:
1. Missing dependency packages required by the installation;
2. Problems with the repo source, so packages cannot be downloaded.
To fix either of these, refer to the ceph installation documentation.
- No data was received after 300 seconds, disconnecting...
This is a network timeout; work around it by installing ceph with yum directly on each node:
yum -y install ceph
- Error: over-write
This usually means ceph.conf was modified but the change never took effect. Solution:
ceph-deploy --overwrite-conf config push node1-4
or
ceph-deploy --overwrite-conf mon create node1-4
- Error: [Errno 2] No such file or directory
If this happens right at install time, ceph was probably uninstalled before but not cleaned up completely.
After uninstalling ceph, the following directories need to be removed:
rm -rf /etc/ceph/*
rm -rf /var/lib/ceph/*
rm -rf /var/log/ceph/*
rm -rf /var/run/ceph/*
If ceph has been in use for a while and is then uninstalled and reinstalled, the old data will still be there, so these data directories must be deleted first.
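If the cluster was deployed with ceph-deploy, its purge commands can do most of this cleanup before a reinstall (run from the admin node; the hostnames follow the earlier examples):
ceph-deploy purge master1 master2 master3       # remove the ceph packages
ceph-deploy purgedata master1 master2 master3   # remove the data under /var/lib/ceph and /etc/ceph
ceph-deploy forgetkeys                          # remove the locally generated keyrings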