Ceph problem summary


A test Ceph cluster I set up earlier kept showing this health warning:

     health HEALTH_WARN
            pool cephfs_metadata2 has many more objects per pg than average (too few pgs?)
            pool cephfs_data2 has many more objects per pg than average (too few pgs?)

Check the PG counts:

[root@node1 ~]# ceph osd pool get cephfs_metadata2 pg_num
pg_num: 8
[root@node1 ~]# ceph osd pool get cephfs_metadata2 pgp_num
pgp_num: 8

Then I remembered that this was only a test installation, and since pg_num can only ever be increased, never decreased, I had just picked an arbitrary small value at the time. The fix is simply to set it back to a sensible number.
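
For reference, the usual rule of thumb from the Ceph documentation is roughly (number of OSDs × 100) / replica count PGs in total, rounded up to the nearest power of two and then shared across the pools; the exact guidance varies between releases, so treat the numbers below as a sketch rather than a rule. The inputs can be checked with:

# Rule of thumb (varies by Ceph release): total PGs ≈ (OSDs * 100) / replicas,
# rounded up to the nearest power of two, shared across all pools.
# Example with 3 OSDs and size 3: (3 * 100) / 3 = 100 -> round up to 128 total.
ceph osd pool get cephfs_metadata2 size   # replica count (size) of the pool
ceph osd ls | wc -l                       # number of OSDs in the cluster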

[root@node1 ~]# ceph osd pool set cephfs_metadata2 pg_num 256
Error E2BIG: specified pg_num 256 is too large (creating 248 new PGs on ~3 OSDs exceeds per-OSD max of 32)

This error appeared. According to http://www.selinuxplus.com/?p=782, there is a limit on how many new PGs can be created in a single increase. In the end I took the brute-force route and raised the values step by step:
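
Judging from the error text, the per-step limit on this release probably comes from the monitor option mon_osd_max_split_count (default 32 new PGs per OSD per increase); that is an assumption I have not verified on this cluster. If it is correct, temporarily raising it would allow the jump in one step, roughly like this:

# Assumption: the "per-OSD max of 32" in the error is mon_osd_max_split_count.
# Injecting a larger value into the monitors should let pg_num go straight to 256.
ceph tell mon.* injectargs '--mon-osd-max-split-count 300'
ceph osd pool set cephfs_metadata2 pg_num 256
ceph osd pool set cephfs_metadata2 pgp_num 256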

[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 40
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 72
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 104
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 136
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 168
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 200
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 232
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 256

[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 40
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 72
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 104
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 136
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 168
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 200
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 232
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 256
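
The same stepping can be scripted instead of typed by hand. A minimal sketch, assuming the pool name and target used above and raising pg_num and pgp_num together in conservative steps of 32:

#!/usr/bin/env bash
# Minimal sketch: raise pg_num/pgp_num in small steps until the target is reached.
POOL=cephfs_metadata2
TARGET=256
STEP=32   # small enough to stay under the per-OSD split limit on this 3-OSD cluster

current=$(ceph osd pool get "$POOL" pg_num | awk '{print $2}')
while [ "$current" -lt "$TARGET" ]; do
    next=$((current + STEP))
    if [ "$next" -gt "$TARGET" ]; then next=$TARGET; fi
    ceph osd pool set "$POOL" pg_num "$next"
    ceph osd pool set "$POOL" pgp_num "$next"
    sleep 10   # give the new PGs a moment to be created before the next step
    current=$next
done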

About half an hour later the cluster was healthy again.
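
While the data is rebalancing, progress can be followed with the usual status commands; the warning disappears once all PGs are active+clean:

ceph -s              # overall health plus recovery/backfill progress
ceph health detail   # lists exactly which warnings, if any, remain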

 

