
Configuration:
Two RGW hosts, each with a 10GbE NIC, with OSPF providing high availability and load balancing. The Ceph OSD cluster has 21 nodes (each with a 10GbE NIC and 12 × 4TB SATA HDDs).
Test VM configuration:
Eight 4-core/8GB VMs inside a VPC network built on VXLAN serve as the COSBench drivers. The test runs 128 COSBench workers in parallel, with object sizes of 4MB–10MB.
Test scenarios and results:
| op    | ratio / throughput | ratio / throughput | ratio / throughput | ratio / throughput |
| ----- | ------------------ | ------------------ | ------------------ | ------------------ |
| read  | 80% / 2.04 GB/s    | 20% / 268.98 MB/s  | 99% / 2.34 GB/s    | 1% / 8.8 MB/s      |
| write | 20% / 511.94 MB/s  | 80% / 919.69 MB/s  | 1% / 20.62 MB/s    | 99% / 967.05 MB/s  |
Conclusion: RGW reads can saturate the 10GbE NICs of both RGW hosts. Writes top out at about 1GB/s, which appears to be the sequential-write ceiling of this RGW cluster (adding more RGW hosts, or running multiple RGW processes per host, might help improve write throughput).
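As a sanity check on that conclusion: one 10GbE NIC carries at most 10 Gbit/s ≈ 1.25 GB/s, so two RGW hosts give roughly 2.5 GB/s of aggregate front-end bandwidth before protocol overhead. A quick calculation (values taken from the results table above) shows the 99%-read run is already at about 94% of that raw line rate:

```python
# Theoretical aggregate line rate of the RGW front-end NICs.
nic_gbit = 10                          # 10GbE per RGW host
hosts = 2
line_rate_gb_s = nic_gbit * hosts / 8  # Gbit/s -> GB/s: 2.5 GB/s

# Measured read throughput from the 99%-read / 1%-write run.
measured_read_gb_s = 2.34

utilization = measured_read_gb_s / line_rate_gb_s
print(f"aggregate line rate: {line_rate_gb_s:.2f} GB/s")
print(f"read utilization:    {utilization:.1%}")  # ~93.6%
```

Given TCP/HTTP framing overhead, ~94% of the raw 2.5 GB/s is effectively line rate, which supports the "reads are NIC-bound" reading of the results.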
s3workload.xml configuration file:
<?xml version="1.0" encoding="UTF-8" ?>
<workload name="s3-sample" description="sample benchmark for s3">
  <storage type="s3" config="accesskey=ak;secretkey=sk;proxyhost=;proxyport=;endpoint=http://rgw-host-ip" />
  <workflow>
    <workstage name="init">
      <work type="init" workers="1" config="cprefix=s3testqwer;containers=r(1,2)" />
    </workstage>
    <workstage name="prepare">
      <work type="prepare" workers="1" config="cprefix=s3testqwer;containers=r(1,2);objects=r(1,10);sizes=u(4,10)MB" />
    </workstage>
    <workstage name="main">
      <work name="main" workers="128" runtime="120">
        <operation type="read" ratio="99" config="cprefix=s3testqwer;containers=u(1,2);objects=u(1,10)" />
        <operation type="write" ratio="1" config="cprefix=s3testqwer;containers=u(1,2);objects=u(11,32);sizes=u(4,10)MB" />
      </work>
    </workstage>
    <workstage name="cleanup">
      <work type="cleanup" workers="1" config="cprefix=s3testqwer;containers=r(1,2);objects=r(1,32)" />
    </workstage>
    <workstage name="dispose">
      <work type="dispose" workers="1" config="cprefix=s3testqwer;containers=r(1,2)" />
    </workstage>
  </workflow>
</workload>
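A quick way to sanity-check a workload file before submitting it to COSBench is to parse the main workstage and confirm the operation ratios sum to 100, which COSBench expects within a single work element. This is a minimal sketch; the "main" stage from the file above is embedded as a string so the check is self-contained:

```python
import xml.etree.ElementTree as ET

# Copy of the "main" workstage from s3workload.xml above,
# embedded here so the check runs without reading the file.
MAIN_STAGE = """\
<workstage name="main">
  <work name="main" workers="128" runtime="120">
    <operation type="read" ratio="99" config="cprefix=s3testqwer;containers=u(1,2);objects=u(1,10)" />
    <operation type="write" ratio="1" config="cprefix=s3testqwer;containers=u(1,2);objects=u(11,32);sizes=u(4,10)MB" />
  </work>
</workstage>
"""

stage = ET.fromstring(MAIN_STAGE)
for work in stage.findall("work"):
    # Collect each operation's ratio, keyed by operation type.
    ratios = {op.get("type"): int(op.get("ratio")) for op in work.findall("operation")}
    total = sum(ratios.values())
    print(f"work={work.get('name')} workers={work.get('workers')} ratios={ratios}")
    assert total == 100, "operation ratios within a work must sum to 100"
```

To run the other scenarios in the table (80/20, 20/80, 1/99), only the two ratio attributes need to change; the same check applies.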
COSBench test screenshots: