Ceph: creating a pool on specific OSDs


https://my.oschina.net/wangzilong/blog/1549690

 

A Ceph cluster may mix disk types, for example some SSDs alongside SATA drives. If certain workloads need fast SSD storage while others are fine on SATA, you can place a pool on specific OSDs when you create it. (The examples below spell SATA as `stat` in bucket and pool names; those identifiers are kept as-is.)

    The procedure has 8 basic steps:

        The cluster used here only has SATA disks and no SSDs, but that does not affect the result of the experiment.

1    Get the CRUSH map

[root@ceph-admin getcrushmap]# ceph osd getcrushmap -o /opt/getcrushmap/crushmap
got crush map from osdmap epoch 2482

2    Decompile the CRUSH map

[root@ceph-admin getcrushmap]# crushtool -d crushmap -o decrushmap

3    Modify the CRUSH map

    Add the following two buckets after `root default`:

root ssd {
	id -5
	alg straw
	hash 0
	item osd.0 weight 0.01
}
root stat {
	id -6
	alg straw
	hash 0
	item osd.1 weight 0.01
}
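Both buckets declare `alg straw`. The idea behind a straw bucket is that each item draws a pseudo-random "straw" length scaled by its weight, and the longest straw wins, so heavier items are picked more often while remapping stays minimal when items change. The toy sketch below illustrates only that selection shape; it is not Ceph's actual hash or draw function, and the function names are ours:

```python
import hashlib

def straw_draw(pg_id, item, weight):
    """Toy deterministic 'straw length' for one bucket item.

    Stand-in for CRUSH's hash: the draw is pseudo-random but
    reproducible for a given (pg, item), then scaled by weight.
    """
    h = hashlib.sha256(f"{pg_id}:{item}".encode()).digest()
    r = int.from_bytes(h[:8], "big") / 2**64  # uniform in [0, 1)
    return r * weight

def straw_select(pg_id, items):
    """Pick the item with the longest weighted straw (the 'alg straw' idea)."""
    return max(items, key=lambda osd: straw_draw(pg_id, osd, items[osd]))

# The buckets above each hold a single OSD, so selection is
# trivially deterministic: ssd always yields osd.0, stat osd.1.
ssd_bucket = {"osd.0": 0.01}
stat_bucket = {"osd.1": 0.01}

print(straw_select("28.d5066e42", ssd_bucket))   # osd.0
print(straw_select("29.c5cfe5e9", stat_bucket))  # osd.1
```

With a single item per bucket the weight is irrelevant; it starts to matter once a bucket holds several OSDs of different sizes.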

    Add the following rules in the rules section:

rule ssd {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take ssd
	step chooseleaf firstn 0 type osd
	step emit
}
rule stat {
	ruleset 2
	type replicated
	min_size 1
	max_size 10
	step take stat
	step chooseleaf firstn 0 type osd
	step emit
}

4    Compile the CRUSH map

[root@ceph-admin getcrushmap]# crushtool -c decrushmap -o newcrushmap 

5    Inject the CRUSH map

[root@ceph-admin getcrushmap]# ceph osd setcrushmap -i /opt/getcrushmap/newcrushmap
set crush map
[root@ceph-admin getcrushmap]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-6 0.00999 root stat
 1 0.00999     osd.1               up  1.00000          1.00000
-5 0.00999 root ssd
 0 0.00999     osd.0               up  1.00000          1.00000
-1 0.58498 root default
-2 0.19499     host ceph-admin
 2 0.19499         osd.2           up  1.00000          1.00000
-3 0.19499     host ceph-node1
 0 0.19499         osd.0           up  1.00000          1.00000
-4 0.19499     host ceph-node2
 1 0.19499         osd.1           up  1.00000          1.00000
# The osd tree has changed: it now contains the two new buckets named stat and ssd.

6    Create the pools

[root@ceph-admin getcrushmap]# ceph osd pool create ssd_pool 8 8
pool 'ssd_pool' created
[root@ceph-admin getcrushmap]# ceph osd pool create stat_pool 8 8
pool 'stat_pool' created
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2484 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2486 flags hashpspool stripe_width 0

Note: the `crush_ruleset` of both newly created pools, ssd_pool and stat_pool, is still 0; it is changed in the next step.

7    Change the pools' CRUSH rules

[root@ceph-admin getcrushmap]# ceph osd pool set ssd_pool crush_ruleset 1
set pool 28 crush_ruleset to 1
[root@ceph-admin getcrushmap]# ceph osd pool set stat_pool crush_ruleset 2
set pool 29 crush_ruleset to 2
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2488 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2491 flags hashpspool stripe_width 0
# On Luminous, the syntax for setting a pool's rule is:
[root@ceph-admin ceph]# ceph osd pool set ssd crush_rule ssd
set pool 2 crush_rule to ssd
[root@ceph-admin ceph]# ceph osd pool set stat crush_rule stat
set pool 1 crush_rule to stat

8    Verify

    Before verifying, check whether ssd_pool and stat_pool already contain any objects:

[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
# Neither pool contains any objects.

    Use the rados command to add an object to each pool:

[root@ceph-admin getcrushmap]# rados -p ssd_pool put test_object1 /etc/hosts
[root@ceph-admin getcrushmap]# rados -p stat_pool put test_object2 /etc/hosts
[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
test_object1
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
test_object2
# The objects were added successfully.
[root@ceph-admin getcrushmap]# ceph osd map ssd_pool test_object1
osdmap e2493 pool 'ssd_pool' (28) object 'test_object1' -> pg 28.d5066e42 (28.2) -> up ([0], p0) acting ([0,1,2], p0)
[root@ceph-admin getcrushmap]# ceph osd map stat_pool test_object2
osdmap e2493 pool 'stat_pool' (29) object 'test_object2' -> pg 29.c5cfe5e9 (29.1) -> up ([1], p1) acting ([1,0,2], p1)

The verification output above shows that test_object1 is stored on osd.0 and test_object2 on osd.1, which is exactly what we wanted.
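When checking many objects, the `ceph osd map` lines can be parsed programmatically instead of read by eye. A small sketch, with the regex written against the exact output format shown above (other Ceph versions may format this line differently, and the helper name is ours):

```python
import re

# One line of `ceph osd map` output copied from the session above.
line = ("osdmap e2493 pool 'ssd_pool' (28) object 'test_object1' "
        "-> pg 28.d5066e42 (28.2) -> up ([0], p0) acting ([0,1,2], p0)")

def parse_osd_map(line):
    """Extract pool, object, pg, and the up/acting OSD sets from
    one `ceph osd map` output line."""
    m = re.search(
        r"pool '(?P<pool>[^']+)' \(\d+\) object '(?P<obj>[^']+)' "
        r"-> pg (?P<pg>\S+) \((?P<pgid>[^)]+)\) "
        r"-> up \(\[(?P<up>[^\]]*)\], p(?P<up_primary>\d+)\) "
        r"acting \(\[(?P<acting>[^\]]*)\], p(?P<acting_primary>\d+)\)",
        line)
    d = m.groupdict()
    d["up"] = [int(x) for x in d["up"].split(",")]
    d["acting"] = [int(x) for x in d["acting"].split(",")]
    return d

info = parse_osd_map(line)
print(info["pool"], info["up"], info["acting"])  # ssd_pool [0] [0, 1, 2]
```

The `up` set confirms the rule placed the object on osd.0, while `acting` shows the additional OSDs actually serving the undersized PG.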

