My test Ceph cluster had been constantly showing this warning:
health HEALTH_WARN
    pool cephfs_metadata2 has many more objects per pg than average (too few pgs?)
    pool cephfs_data2 has many more objects per pg than average (too few pgs?)
Checking the PG counts:
[root@node1 ~]# ceph osd pool get cephfs_metadata2 pg_num
pg_num: 8
[root@node1 ~]# ceph osd pool get cephfs_metadata2 pgp_num
pgp_num: 8
Then I remembered that this had only been a test installation; since pg_num can be increased but never decreased, I had simply picked an arbitrarily small number at the time. All I needed to do now was raise it to a sensible value.
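(As a side note, not from the referenced post: the rule of thumb commonly cited for sizing pg_num is about 100 PGs per OSD, divided by the pool's replica count and rounded to a power of two. For a 3-OSD cluster with 3 replicas that works out to:

    total PGs ≈ (OSD count × 100) / replica count = (3 × 100) / 3 = 100  →  round up to 128

Below I simply went for 256.)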
[root@node1 ~]# ceph osd pool set cephfs_metadata2 pg_num 256
Error E2BIG: specified pg_num 256 is too large (creating 248 new PGs on ~3 OSDs exceeds per-OSD max of 32)
That attempt failed with the error above. According to http://www.selinuxplus.com/?p=782, the number of new PGs that can be created in a single step is limited (here, 32 new PGs per OSD). In the end I took the brute-force route and raised pg_num in increments:
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 40
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 72
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 104
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 136
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 168
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 200
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 232
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 256
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 40
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 72
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 104
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 136
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 168
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 200
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 232
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 256
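The same stepwise increase could be scripted. Below is a minimal sketch, assuming the pool name, target, and step size used above; the sleep between steps is my own addition (in practice I just ran the commands back to back), and the script is an illustration rather than what was actually run:

#!/bin/bash
# Raise pg_num and pgp_num in small steps to stay under the per-OSD split limit.
POOL=cephfs_metadata2
TARGET=256
STEP=32

for key in pg_num pgp_num; do
    # Current value, e.g. "pg_num: 8" -> "8"
    cur=$(ceph osd pool get "$POOL" "$key" | awk '{print $2}')
    while [ "$cur" -lt "$TARGET" ]; do
        next=$(( cur + STEP < TARGET ? cur + STEP : TARGET ))
        ceph osd pool set "$POOL" "$key" "$next"
        cur=$next
        sleep 30   # give the cluster a moment to create the new PGs before the next step
    done
done

Note that pg_num is raised all the way to the target before pgp_num, since pgp_num may never exceed pg_num.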
After about half an hour, the cluster was healthy again.
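For the record, the per-step limit in the E2BIG error corresponds to the monitor option mon_osd_max_split_count (32 by default, judging from the "per-OSD max of 32" in the message). An alternative I did not try here would presumably be to raise that limit temporarily and then set pg_num in one go, along the lines of:

[root@node1 ~]# ceph tell mon.* injectargs '--mon_osd_max_split_count 300'

where 300 is only an illustrative value, not something taken from the referenced post.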