[Kafka Study Notes 4] Kafka Cluster Performance Testing


Kafka cluster performance is constrained by JVM parameters, server hardware, and Kafka's own configuration. You should therefore performance-test the machines on which Kafka will be deployed and use the results to find the configuration that best fits your workload.

1. Kafka broker JVM parameters
The Kafka broker JVM is controlled by the KAFKA_HEAP_OPTS variable in the kafka-server-start.sh script; if it is not set, the heap defaults to 1G.
You can add a KAFKA_HEAP_OPTS export near the top of the script. Note that if you want to use the G1 garbage collector, the heap should be at least 4G and the JDK at least 7u51.
Example (CMS collector):
export KAFKA_HEAP_OPTS="-Xmx4G -Xms4G -Xmn2G -XX:PermSize=64m -XX:MaxPermSize=128m -XX:SurvivorRatio=6 -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"
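As a sketch, the export can sit at the top of bin/kafka-server-start.sh, above the stock default-heap check (the flags shown are the example settings from this article, not required values):

```shell
#!/bin/bash
# Top of bin/kafka-server-start.sh (sketch). Setting KAFKA_HEAP_OPTS here
# overrides the 1G default; the CMS flags are illustrative, not mandatory.
export KAFKA_HEAP_OPTS="-Xmx4G -Xms4G -Xmn2G -XX:SurvivorRatio=6 \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"

# The stock script only applies its 1G default when KAFKA_HEAP_OPTS is unset:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
```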

2. Kafka cluster performance-testing tools (based on kafka_2.11-0.11.0.0)
Kafka ships with two test tools: kafka-producer-perf-test.sh for producers and kafka-consumer-perf-test.sh for consumers.
2.1 kafka-producer-perf-test.sh
Parameters:
--help show help
--topic  topic name
--record-size       size of each message in bytes
--throughput        target throughput: messages to send per second
--producer-props    producer configuration properties, e.g. bootstrap.servers, client.id; these take precedence over --producer.config
--producer.config producer configuration file, e.g. producer.properties
--print-metrics print metrics at the end of the test (default: false)
--num-records       total number of messages to send
--payload-file file containing the message payloads to send; exactly one of this and --record-size must be specified
--payload-delimiter delimiter between payloads in the payload file
--transactional-id transactional id, for testing the performance of concurrent transactions (default: performance-producer-default-transactional-id)
--transaction-duration-ms maximum transaction age; the transaction is committed once it exceeds this duration (default: 0)

Example:
./bin/kafka-producer-perf-test.sh --topic test-pati3-rep2 --throughput 500000 --num-records 1500000 --record-size 1000 --producer.config config/producer.properties --producer-props bootstrap.servers=10.1.8.16:9092,10.1.8.15:9092,10.1.8.14:9092 acks=1

Test dimensions: you can vary the JVM settings, partition count, replication factor, --throughput, --record-size, acks (replica acknowledgement mode), and the compression codec.
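As an illustrative sketch, a small shell loop can generate one test command per point in a sweep over two of these dimensions (the broker list and topic are the placeholders from the example above; substitute your own):

```shell
#!/bin/sh
# Sketch: emit one kafka-producer-perf-test.sh command per point in a
# record-size x acks sweep. BROKERS and TOPIC are placeholder values.
BROKERS="10.1.8.16:9092,10.1.8.15:9092,10.1.8.14:9092"
TOPIC="test-pati3-rep2"

gen_cmds() {
  for size in 100 1000 102400; do
    for acks in 0 1 all; do
      echo "./bin/kafka-producer-perf-test.sh --topic $TOPIC" \
           "--num-records 1500000 --record-size $size --throughput 500000" \
           "--producer-props bootstrap.servers=$BROKERS acks=$acks"
    done
  done
}

gen_cmds
```

Piping each emitted command through a scheduler (or just `sh`) runs the sweep; comparing the summary lines across runs shows how each dimension moves the throughput and latency numbers.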

[cluster@PCS101 bin]$ ./kafka-producer-perf-test.sh --topic REC-CBBO-MSG-TOPIC --throughput 50000 --num-records 150000 --record-size 102400 --producer-props bootstrap.servers=134.32.123.101:9092,134.32.123.102:9092,134.32.123.103:9092 acks=all --print-metrics
12786 records sent, 2556.7 records/sec (249.68 MB/sec), 122.1 ms avg latency, 231.0 max latency.
14827 records sent, 2965.4 records/sec (289.59 MB/sec), 109.4 ms avg latency, 291.0 max latency.
14587 records sent, 2917.4 records/sec (284.90 MB/sec), 111.6 ms avg latency, 374.0 max latency.
14292 records sent, 2858.4 records/sec (279.14 MB/sec), 114.8 ms avg latency, 389.0 max latency.
14557 records sent, 2910.8 records/sec (284.26 MB/sec), 112.3 ms avg latency, 354.0 max latency.
14524 records sent, 2904.2 records/sec (283.62 MB/sec), 113.1 ms avg latency, 362.0 max latency.
14686 records sent, 2937.2 records/sec (286.84 MB/sec), 111.4 ms avg latency, 348.0 max latency.
14637 records sent, 2927.4 records/sec (285.88 MB/sec), 111.8 ms avg latency, 378.0 max latency.
15186 records sent, 3037.2 records/sec (296.60 MB/sec), 107.9 ms avg latency, 343.0 max latency.
14584 records sent, 2916.2 records/sec (284.79 MB/sec), 112.4 ms avg latency, 356.0 max latency.
150000 records sent, 2888.170055 records/sec (282.05 MB/sec), 112.78 ms avg latency, 389.00 ms max latency, 11 ms 50th, 321 ms 95th, 340 ms 99th, 375 ms 99.9th.

The last line is the overall summary: total records sent, average throughput (records/sec and MB/sec), average latency, maximum latency, and the 50th/95th/99th/99.9th latency percentiles. The interval with the fewest records sent (here the first, 12786 records) can be taken as the producer's bottleneck.
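The MB/sec figure is simply records/sec multiplied by --record-size; as a quick sanity check against the summary line above:

```shell
# Derive the summary line's MB/sec from its records/sec figure and the
# --record-size used in this run (102400 bytes).
awk 'BEGIN {
  rps  = 2888.170055                 # records/sec from the summary line
  size = 102400                      # --record-size in bytes
  printf "%.2f MB/sec\n", rps * size / (1024 * 1024)   # -> 282.05 MB/sec
}'
```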

With --print-metrics, the following metrics statistics are printed at the end:

 

Metric Name                                                                                 Value
kafka-metrics-count:count:{client-id=producer-1}                                          : 84.000
producer-metrics:batch-size-avg:{client-id=producer-1}                                    : 102472.000
producer-metrics:batch-size-max:{client-id=producer-1}                                    : 102472.000
producer-metrics:batch-split-rate:{client-id=producer-1}                                  : 0.000
producer-metrics:buffer-available-bytes:{client-id=producer-1}                            : 33554432.000
producer-metrics:buffer-exhausted-rate:{client-id=producer-1}                             : 0.000
producer-metrics:buffer-total-bytes:{client-id=producer-1}                                : 33554432.000
producer-metrics:bufferpool-wait-ratio:{client-id=producer-1}                             : 0.857
producer-metrics:compression-rate-avg:{client-id=producer-1}                              : 1.000
producer-metrics:connection-close-rate:{client-id=producer-1}                             : 0.000
producer-metrics:connection-count:{client-id=producer-1}                                  : 5.000
producer-metrics:connection-creation-rate:{client-id=producer-1}                          : 0.091
producer-metrics:incoming-byte-rate:{client-id=producer-1}                                : 87902.611
producer-metrics:io-ratio:{client-id=producer-1}                                          : 0.138
producer-metrics:io-time-ns-avg:{client-id=producer-1}                                    : 69622.263
producer-metrics:io-wait-ratio:{client-id=producer-1}                                     : 0.329
producer-metrics:io-wait-time-ns-avg:{client-id=producer-1}                               : 166147.404
producer-metrics:metadata-age:{client-id=producer-1}                                      : 55.104
producer-metrics:network-io-rate:{client-id=producer-1}                                   : 1557.405
producer-metrics:outgoing-byte-rate:{client-id=producer-1}                                : 278762290.882
producer-metrics:produce-throttle-time-avg:{client-id=producer-1}                         : 0.000
producer-metrics:produce-throttle-time-max:{client-id=producer-1}                         : 0.000
producer-metrics:record-error-rate:{client-id=producer-1}                                 : 0.000
producer-metrics:record-queue-time-avg:{client-id=producer-1}                             : 110.963
producer-metrics:record-queue-time-max:{client-id=producer-1}                             : 391.000
producer-metrics:record-retry-rate:{client-id=producer-1}                                 : 0.000
producer-metrics:record-send-rate:{client-id=producer-1}                                  : 2724.499
producer-metrics:record-size-avg:{client-id=producer-1}                                   : 102487.000
producer-metrics:record-size-max:{client-id=producer-1}                                   : 102487.000
producer-metrics:records-per-request-avg:{client-id=producer-1}                           : 3.493
producer-metrics:request-latency-avg:{client-id=producer-1}                               : 7.011
producer-metrics:request-latency-max:{client-id=producer-1}                               : 56.000
producer-metrics:request-rate:{client-id=producer-1}                                      : 778.702
producer-metrics:request-size-avg:{client-id=producer-1}                                  : 357989.537
producer-metrics:request-size-max:{client-id=producer-1}                                  : 614940.000
producer-metrics:requests-in-flight:{client-id=producer-1}                                : 0.000
producer-metrics:response-rate:{client-id=producer-1}                                     : 778.731
producer-metrics:select-rate:{client-id=producer-1}                                       : 1979.326
producer-metrics:waiting-threads:{client-id=producer-1}                                   : 0.000
producer-node-metrics:incoming-byte-rate:{client-id=producer-1, node-id=node--1}          : 19.601
producer-node-metrics:incoming-byte-rate:{client-id=producer-1, node-id=node--2}          : 3.956
producer-node-metrics:incoming-byte-rate:{client-id=producer-1, node-id=node-0}           : 31220.396
producer-node-metrics:incoming-byte-rate:{client-id=producer-1, node-id=node-1}           : 29885.883
producer-node-metrics:incoming-byte-rate:{client-id=producer-1, node-id=node-2}           : 26920.163
producer-node-metrics:outgoing-byte-rate:{client-id=producer-1, node-id=node--1}          : 1.324
producer-node-metrics:outgoing-byte-rate:{client-id=producer-1, node-id=node--2}          : 0.436
producer-node-metrics:outgoing-byte-rate:{client-id=producer-1, node-id=node-0}           : 98518580.943
producer-node-metrics:outgoing-byte-rate:{client-id=producer-1, node-id=node-1}           : 82114190.903
producer-node-metrics:outgoing-byte-rate:{client-id=producer-1, node-id=node-2}           : 98518948.091
producer-node-metrics:request-latency-avg:{client-id=producer-1, node-id=node--1}         : 0.000
producer-node-metrics:request-latency-avg:{client-id=producer-1, node-id=node--2}         : 0.000
producer-node-metrics:request-latency-avg:{client-id=producer-1, node-id=node-0}          : 6.891
producer-node-metrics:request-latency-avg:{client-id=producer-1, node-id=node-1}          : 5.135
producer-node-metrics:request-latency-avg:{client-id=producer-1, node-id=node-2}          : 11.202
producer-node-metrics:request-latency-max:{client-id=producer-1, node-id=node--1}         : -Infinity
producer-node-metrics:request-latency-max:{client-id=producer-1, node-id=node--2}         : -Infinity
producer-node-metrics:request-latency-max:{client-id=producer-1, node-id=node-0}          : 56.000
producer-node-metrics:request-latency-max:{client-id=producer-1, node-id=node-1}          : 46.000
producer-node-metrics:request-latency-max:{client-id=producer-1, node-id=node-2}          : 55.000
producer-node-metrics:request-rate:{client-id=producer-1, node-id=node--1}                : 0.036
producer-node-metrics:request-rate:{client-id=producer-1, node-id=node--2}                : 0.018
producer-node-metrics:request-rate:{client-id=producer-1, node-id=node-0}                 : 279.365
producer-node-metrics:request-rate:{client-id=producer-1, node-id=node-1}                 : 340.136
producer-node-metrics:request-rate:{client-id=producer-1, node-id=node-2}                 : 160.233
producer-node-metrics:request-size-avg:{client-id=producer-1, node-id=node--1}            : 36.500
producer-node-metrics:request-size-avg:{client-id=producer-1, node-id=node--2}            : 24.000
producer-node-metrics:request-size-avg:{client-id=producer-1, node-id=node-0}             : 352658.869
producer-node-metrics:request-size-avg:{client-id=producer-1, node-id=node-1}             : 241415.634
producer-node-metrics:request-size-avg:{client-id=producer-1, node-id=node-2}             : 614858.709
producer-node-metrics:request-size-max:{client-id=producer-1, node-id=node--1}            : 49.000
producer-node-metrics:request-size-max:{client-id=producer-1, node-id=node--2}            : 24.000
producer-node-metrics:request-size-max:{client-id=producer-1, node-id=node-0}             : 614940.000
producer-node-metrics:request-size-max:{client-id=producer-1, node-id=node-1}             : 512460.000
producer-node-metrics:request-size-max:{client-id=producer-1, node-id=node-2}             : 614940.000
producer-node-metrics:response-rate:{client-id=producer-1, node-id=node--1}               : 0.036
producer-node-metrics:response-rate:{client-id=producer-1, node-id=node--2}               : 0.018
producer-node-metrics:response-rate:{client-id=producer-1, node-id=node-0}                : 279.486
producer-node-metrics:response-rate:{client-id=producer-1, node-id=node-1}                : 340.284
producer-node-metrics:response-rate:{client-id=producer-1, node-id=node-2}                : 160.233
producer-topic-metrics:byte-rate:{client-id=producer-1, topic=REC-CBBO-MSG-TOPIC}         : 279184829.991
producer-topic-metrics:compression-rate:{client-id=producer-1, topic=REC-CBBO-MSG-TOPIC}  : 1.000
producer-topic-metrics:record-error-rate:{client-id=producer-1, topic=REC-CBBO-MSG-TOPIC} : 0.000
producer-topic-metrics:record-retry-rate:{client-id=producer-1, topic=REC-CBBO-MSG-TOPIC} : 0.000
producer-topic-metrics:record-send-rate:{client-id=producer-1, topic=REC-CBBO-MSG-TOPIC}  : 2724.548

 

2.2 kafka-consumer-perf-test.sh
Parameters:
--help show help
--batch-size number of messages written in a single batch (default: 200)
--broker-list required with the new consumer; not needed with the old consumer
--compression-codec compression codec: 0 = NoCompressionCodec (default, no compression), 1 = GZIPCompressionCodec, 2 = SnappyCompressionCodec, 3 = LZ4CompressionCodec
--consumer.config consumer configuration file, e.g. consumer.properties
--date-format format string for timestamp fields (default: yyyy-MM-dd HH:mm:ss:SSS)
--fetch-size number of bytes to fetch in a single request (default: 1048576, i.e. 1M)
--from-latest if the consumer has no committed offset, start from the latest message in the log instead of the earliest
--group consumer group id (default: perf-consumer-29512)
--hide-header skip printing the header line for the statistics
--message-size size of each message (default: 100 bytes)
--messages required; total number of messages to consume
--new-consumer use the new consumer (the default)
--num-fetch-threads number of fetch threads (default: 1)
--print-metrics print metrics at the end; only applies to the new consumer
--reporting-interval interval in milliseconds between progress reports (default: 5000)
--show-detailed-stats report statistics at every reporting interval instead of only at the end
--socket-buffer-size TCP receive buffer size (default: 2097152, i.e. 2M)
--threads number of processing threads (default: 10)
--topic required; topic name
--zookeeper ZooKeeper connection string; required when using the old consumer

Test dimensions: vary the parameter values above.

[cluster@PCS101 bin]$ ./kafka-consumer-perf-test.sh --topic REC-CBBO-MSG-TOPIC --messages 500000 --message-size 102400 --batch-size 50000 --fetch-size 1048576 --num-fetch-threads 17 --threads 10 --zookeeper 134.32.123.101:2181,134.32.123.102:2181,134.32.123.103:2181 --print-metrics
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
2018-10-01 09:50:43:707, 2018-10-01 09:51:40:553, 84487.7018, 1486.2559, 874167, 15377.8102

Consumer bottleneck: 1486.2559 MB/sec, 15377 messages/sec.
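The two throughput columns follow directly from the other columns; as a sanity check (values taken from the output line above, timestamps reduced to seconds past 09:00):

```shell
# Recompute MB.sec and nMsg.sec from the consumer perf-test output line.
awk 'BEGIN {
  start = 50*60 + 43.707   # 09:50:43:707 -> seconds past 09:00
  end   = 51*60 + 40.553   # 09:51:40:553
  mb    = 84487.7018       # data.consumed.in.MB
  msgs  = 874167           # data.consumed.in.nMsg
  secs  = end - start      # 56.846 s elapsed
  printf "%.1f MB.sec, %.1f nMsg.sec\n", mb/secs, msgs/secs
}'
```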

 

3. Visual performance-analysis tool: Kafka Manager (Yammer Metrics)

 -- to be updated

