1 Background
In our stock matching engine we use a zset to implement price-triggered execution, so here we run an in-place stress test on the ZRANGEBYSCORE command.
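A minimal sketch of that idea with Jedis: resting orders sit in a zset with the trigger price as the score, and ZRANGEBYSCORE returns every order whose trigger price falls inside the band the market just reached. The key sh111111 mirrors the test data used below; the order ids and prices are purely illustrative.

import java.util.Set;
import redis.clients.jedis.Jedis;

public class PriceTriggerSketch {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Resting orders keyed by order id, scored by trigger price (illustrative values).
            jedis.zadd("sh111111", 3.0, "order-100");
            jedis.zadd("sh111111", 5.0, "order-200");
            jedis.zadd("sh111111", 9.0, "order-300");

            // Price moves into the [3, 9] band: fetch every order that should now execute.
            Set<String> triggered = jedis.zrangeByScore("sh111111", 3, 9);
            System.out.println(triggered);   // [order-100, order-200, order-300]
        }
    }
}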
How the Redis connection-pool size was decided, i.e. why we run 10 Disruptor consumer threads (each holding a Redis connection):
1) IO-bound rule of thumb on a 4-core machine: 2(N+1) = 2 × (4 + 1) = 10 (see the sketch after this list);
2) the local benchmark results in section 2 below show that 10 threads already reach roughly 80% of the peak QPS;
3) running too many Disruptor consumer threads is undesirable in itself.
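The sketch referenced in 1): a tiny illustration of the IO-bound sizing heuristic. The 2(N+1) rule of thumb is a generic assumption of this note, nothing Redis-specific.

public class ConsumerThreadSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();   // 4 on the benchmark laptop
        int ioBoundThreads = 2 * (cores + 1);                      // 2(N+1) rule of thumb -> 10
        System.out.println("disruptor consumer threads = " + ioBoundThreads);
    }
}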
2 Local benchmark first
Test machine: 12-inch MacBook (2016)
2.1 docker
redis-benchmark -h 127.0.0.1 -p 63790 -c 100 -n 10000 script load "redis.call('zrangebyscore','sh111111','3','9')"
concurrency (threads / -c) | java benchmark (QPS) | java code (QPS) | redis-benchmark (QPS)
1   | 807  | 729  | 866
10  | 3115 | 3115 | 3187
50  | 4467 | 4235 | 4640
100 | 4375 | 4417 | 5238
500 | 5747 | –    | –
*Both the Java benchmark and the Java code borrow a connection from the pool for every operation.
2.2 native
redis-benchmark -h 127.0.0.1 -p 6379 -c 1 -n 10000 script load "redis.call('zrangebyscore','sh111111','3','9')"
concurrency (threads / -c) | java benchmark (QPS) | java code (QPS) | redis-benchmark (QPS)
1   | 11729 | 6050  | 10131
10  | 28597 | 18653 | 21000
50  | 31943 | 29056 | 23584
100 | 29476 | 28438 | 24875
500 | 24937 | –     | –
So for tests that go through the LAN or Docker, i.e. likely through the NIC, 10 threads only reach about 3k QPS, far below the officially claimed 100k. That prompted me to look at other people's published benchmark results.
3 Other references:
3.1 OpenResty vs Redis QPS across different network environments
http://blog.sina.cn/dpool/blog/s/blog_6145ed810102vefe.html?from=groupmessage&isappinstalled=0
network environment | redis (requests per second)
local                          | 52631.58
LAN                            | 3105.59   (about the same level as my Docker test)
public network (New York node) | 169.95
3.2 Memcache / Redis / Tair performance comparison report
http://blog.sina.cn/dpool/blog/s/blog_6145ed810102vefe.html?from=groupmessage&isappinstalled=0
A single thread issues get calls through each cache client to fetch data from the server; the time to complete 10,000 operations is compared.
Redis, 1 KB objects: 1260 QPS
1000 concurrent threads issue get calls through the cache client, each thread performing 10,000 operations.
Redis, 1 KB objects: 11430 QPS — roughly three times what I measured.
4 Alibaba Cloud Redis: with 10 threads, about 47k QPS
https://zhuanlan.zhihu.com/p/78034665?utm_source=wechat_session&utm_medium=social&utm_oi=1003056052560101376&from=singlemessage&isappinstalled=0&wechatShare=1&s_s_i=Nxnfuuur16PoKq5S8w%2Bv7CqmqZ5fwF2fxQZXH9O4%2FPM%3D&s_r=1
Alibaba Cloud Community Edition
Community Standard Edition, dual replica, 1 GB master-replica, Redis 5.0, advertised at 80k QPS (256-shard cluster: 25.6M QPS); Enterprise Edition 240k QPS (cluster: 61.44M QPS): https://help.aliyun.com/document_detail/26350.html
Load generator: ecs.c6.xlarge, 4 vCPU, 8 GiB (I/O optimized), 10 Mbps peak bandwidth
5 Postscript: why does a multi-threaded Redis client get higher QPS, and how much higher
An example to illustrate. Assume:
the average time of one command (borrowing/returning a pooled resource + Jedis executing the command, network included) is about 1 ms, so one connection delivers roughly 1000 QPS. If the business needs 50000 QPS, the theoretical pool size is 50000 / 1000 = 50 connections. In practice this is only a theoretical value; you should reserve some headroom on top of it, so maxTotal is usually set somewhat larger than the theoretical value.
But this value is not "the bigger the better": on one hand, too many connections consume resources on both the client and the server; on the other hand, for a high-QPS server like Redis, a single blocking slow command will make even the largest pool useless.
https://cloud.tencent.com/developer/article/1425158
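A minimal sketch of turning that sizing rule into a JedisPool configuration. The 1 ms average latency, the 50000 QPS target and the 20% headroom factor are the example's assumptions, not measured values.

import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PoolSizingSketch {
    public static void main(String[] args) {
        double avgCommandMillis = 1.0;                             // assumed: borrow/return + Jedis call, network included
        int targetQps = 50_000;                                    // assumed business target
        int perConnectionQps = (int) (1000 / avgCommandMillis);    // ~1000 QPS per connection
        int theoreticalSize = targetQps / perConnectionQps;        // 50000 / 1000 = 50
        int maxTotal = (int) Math.ceil(theoreticalSize * 1.2);     // illustrative ~20% headroom

        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(maxTotal);
        config.setMaxIdle(maxTotal);                               // keep idle == max to avoid connection churn
        JedisPool pool = new JedisPool(config, "localhost", 6379);
        System.out.println("maxTotal = " + maxTotal);              // 60 under these assumptions
        pool.close();
    }
}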
Note that with a multi-threaded client, Redis QPS does not follow the naive theory that multi-thread QPS = single-thread QPS × thread count (as if it were load balancing); context switching makes the threads constrain each other, so throughput scales non-linearly.
6 Performance monitoring:
Reference 1: https://www.cnblogs.com/cheyunhua/p/9068029.html — setting Redis's maximum memory (maxmemory), analogous to the JVM's -Xmx.
Reference 2: https://blog.csdn.net/z644041867/article/details/77965521 — performance monitoring metrics.
redis-cli info | grep -w "connected_clients" |awk -F':' '{print $2}'
redis-cli info | grep -w "used_memory_rss_human" |awk -F':' '{print $2}' 類似於java內存jmx監控的commited和used
redis-cli info | grep -w "used_memory_peak_human" |awk -F':' '{print $2}'
redis-cli info | grep -w "instantaneous_ops_per_sec" |awk -F':' '{print $2}'
redis-benchmark -h 127.0.0.1 -p 6379 -c 1 -n 1000000 script load "redis.call('zrangebyscore','sh111111','3','9')"
^C
script load redis.call('zrangebyscore','sh111111','3','9'): 10026.05   (benchmark interrupted with Ctrl-C)

Polling instantaneous_ops_per_sec while the benchmark ran:
JoycedeMacBook:redis-5.0.5 joyce$ redis-cli info | grep -w "instantaneous_ops_per_sec" |awk -F':' '{print $2}'
Successive readings: 0, 9666, 9473, 10249, 10590, 10486, 10421, 10450, 10673, 10707, 10655, 10530, 10570, 10396, 9595, 9010, 9414
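For completeness, a small sketch of polling the same INFO fields from Java with Jedis instead of shelling out to redis-cli. The host, port and 5-second interval are illustrative.

import redis.clients.jedis.Jedis;

public class RedisInfoPoller {
    public static void main(String[] args) throws InterruptedException {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            while (true) {
                String info = jedis.info();   // full INFO output as one string
                System.out.println("ops/sec  = " + field(info, "instantaneous_ops_per_sec"));
                System.out.println("clients  = " + field(info, "connected_clients"));
                System.out.println("rss      = " + field(info, "used_memory_rss_human"));
                System.out.println("mem peak = " + field(info, "used_memory_peak_human"));
                Thread.sleep(5000);           // poll every 5 s (illustrative)
            }
        }
    }

    // Extract "key:value" from the INFO text, mirroring the grep/awk one-liners above.
    private static String field(String info, String key) {
        for (String line : info.split("\r\n")) {
            if (line.startsWith(key + ":")) {
                return line.substring(key.length() + 1);
            }
        }
        return "n/a";
    }
}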
7 Test code:
package redis;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.google.protobuf.InvalidProtocolBufferException;
import ip.IpPool;
import org.apache.commons.pool2.impl.GenericObjectPool;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
import org.redisson.Redisson;
import org.redisson.api.RBucket;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import serial.MyBaseBean;
import serial.MyBaseProto;

import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

/**
 * Created by joyce on 2019/10/24.
 */
@BenchmarkMode(Mode.Throughput)     // benchmark mode: throughput
@OutputTimeUnit(TimeUnit.SECONDS)   // report results per second
@Threads(10)                        // number of benchmark threads (IO-bound workload)
@State(Scope.Thread)                // state is private to each thread
public class YaliRedis {

    private static JedisPool jedisPool;
    private static final int threadCount = 10;

    @Setup
    public static void init() {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(800);
        config.setMaxIdle(800);
        jedisPool = new JedisPool(config, "localhost", 63790, 2000, "test");
//        Jedis jedis = jedisPool.getResource();
//        String test = jedis.get("testkey");
//        System.out.println(test);
//        Set<String> set = jedis.zrangeByScore("sh111111", 3, 9);
//        System.out.println(set.size());
//        jedis.close();
    }

    @TearDown
    public static void destroy() {
        jedisPool.close();
    }

    public static void main(String[] args) throws Exception {
//        initData();
        if (false) {
            // JMH run (the "java benchmark" column in the tables above)
            Options opt = new OptionsBuilder().include(YaliRedis.class.getSimpleName()).forks(1).warmupIterations(1)
                    .measurementIterations(3).build();
            new Runner(opt).run();
        } else {
            // Hand-rolled run (the "java code" column): threadCount threads, perThread calls each
            init();
            final int perThread = 10000;
            CountDownLatch countDownLatchMain = new CountDownLatch(threadCount);
            CountDownLatch countDownLatchSub = new CountDownLatch(1);
            for (int i = 0; i < threadCount; ++i) {
                new Thread(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            countDownLatchSub.await();   // block so all threads start together
                            Set<String> set = null;
                            for (int j = 0; j < perThread; ++j)
                                set = testZSet();
                            System.out.println(set.size());
                        } catch (Exception e) {
                            e.printStackTrace();
                        } finally {
                            countDownLatchMain.countDown();
                        }
                    }
                }).start();
            }
            long st = System.currentTimeMillis();
            countDownLatchSub.countDown();   // release all worker threads
            countDownLatchMain.await();      // wait for all of them to finish
            System.out.println(System.currentTimeMillis() - st);
            System.out.println(threadCount * perThread * 1000 / (System.currentTimeMillis() - st));   // QPS
        }
    }

    @Benchmark
    public static Set<String> testZSet() {
        // Borrow a connection from the pool, run ZRANGEBYSCORE, return the connection.
        Jedis jedis = jedisPool.getResource();
        Set<String> set = jedis.zrangeByScore("sh111111", 3, 9);
        jedis.close();
        return set;
    }

//    @Benchmark
    public static void test() {
        Jedis jedis = jedisPool.getResource();
        jedis.get("testkey");
        jedis.close();
    }

//    @Benchmark
    public static void testJson() {
        Jedis jedis = jedisPool.getResource();
        String xx = jedis.get("testjson");
        JSONObject userJson = JSONObject.parseObject(xx);
        MyBaseBean user = JSON.toJavaObject(userJson, MyBaseBean.class);
        jedis.close();
    }

//    @Benchmark
    public static void testPb() {
        Jedis jedis = jedisPool.getResource();
        byte[] bytes = jedis.get("testpb".getBytes());
        try {
            MyBaseProto.BaseProto baseProto = MyBaseProto.BaseProto.parseFrom(bytes);
        } catch (InvalidProtocolBufferException e) {
            e.printStackTrace();
        }
        jedis.close();
    }

    public static void initData() {
        Jedis jedis = new Jedis("localhost", 63790);
        jedis.auth("test");
        for (int i = 1; i <= 9; ++i) {
            jedis.zadd("sh111111", i, String.valueOf(i * 100));
        }
    }
}
8 Backup (raw results):
1 redis-benchmark + (Java bench, Jedis); values are redis-benchmark QPS, Java bench QPS in parentheses

1) Redis on the host:
redis-benchmark -h 127.0.0.1 -p 6379 -c 1000 -n 10000 script load "redis.call('zrangebyscore','sh111111','3','9')"
1 th    10000 (11500)
50 th   24000 (33000)
100 th  25000 (30000)
500 th  26000 (20000)
1000 th 24000

2) Docker:
redis-benchmark -h 127.0.0.1 -p 63790 -c 100 -n 10000 script load "redis.call('zrangebyscore','sh111111','3','9')"
1 th    640 (700)
50 th   3900 (3300)
100 th  4400 (3800)
500 th  6000 (4500)   about 80%
1000 th 5300
1 redis-benchmark. + ( java bench jedis ) 1)redis 本機 redis-benchmark -h 127.0.0.1 -p 6379 -c 1000 -n 10000 script load "redis.call('zrangebyscore','sh111111','3','9)" 1 th 10000 (11500) 50 th 24000 (33000) 100 th 25000 (30000) 500 th 26000 (20000) 1000 th 24000 2)docker redis-benchmark -h 127.0.0.1 -p 63790 -c 100 -n 10000 script load "redis.call('zrangebyscore','sh111111','3','9)" 1 th 640 (700) 50 th 3900 (3300) 100 th 4400 (3800) 500 th 6000 (4500). 約80% 1000 th 5300