Kafka Stress Testing with the Built-in Test Scripts (Single-Node)


1. Test Objective

This performance test stress-tests the MQ message-handling capability of Kafka on a single server in the production environment. It covers both writing MQ messages to Kafka and consuming MQ messages from Kafka; based on the results at the 10w (100 thousand), 100w (1 million) and 1000w (10 million) message levels, we evaluate whether Kafka's performance meets the project's needs. (The project expects Kafka to handle MQ messages at the hundred-million level.)

2. Test Scope and Method

2.1 Scope Overview

The test uses Kafka's bundled test scripts to issue message-write and message-consumption requests against Kafka from the command line. By simulating write and consumption scenarios at different orders of magnitude and examining the results, we assess whether Kafka is capable of handling messages at the hundred-million level and above.

2.2 Performance Test Scenario Design

2.2.1 Kafka Message Write Stress Test

Test scenario            | Total messages     | Messages written/sec | Record size (bytes)
Kafka message write test | 100,000 (10w)      | 2000                 | 1000
                         | 1,000,000 (100w)   | 5000                 | 1000
                         | 10,000,000 (1000w) | 5000                 | 1000

2.2.2 Kafka Message Consumption Stress Test

Test scenario                  | Messages consumed
Kafka message consumption test | 100,000 (10w)
                               | 1,000,000 (100w)
                               | 10,000,000 (1000w)

2.3 Brief Description of the Test Method

2.3.1 Purpose

Verify the message-write and message-consumption capability of Kafka on a single server, and use the results to assess whether the current Kafka cluster setup can handle messages at the hundred-million level.

2.3.2 Method

On the server, use Kafka's bundled test scripts to simulate write requests of 10w, 100w and 1000w messages respectively, and observe Kafka's processing capability at each volume, including messages produced per second, throughput and message latency. The topic created for the message writes is named test_perf; consumption requests are then issued against this topic with the consumer test command, to observe Kafka's processing capability when consuming each volume of messages.
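
The producer test will auto-create test_perf if the broker allows topic auto-creation (the default); if it does not, the topic can be created up front. A minimal sketch, assuming the kafka_2.12-2.1.0 scripts used here and the bundled ZooKeeper on localhost:2181:

# assumes the bundled ZooKeeper from the single-node setup below is listening on localhost:2181
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test_perf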

Stress-test commands:

Test item: Write MQ messages (message counts given in units of 10,000, i.e. "w")

10w:

./kafka-producer-perf-test.sh --topic test_perf --num-records 100000 --record-size 1000  --throughput 2000 --producer-props bootstrap.servers=10.150.30.60:9092

100w:

./kafka-producer-perf-test.sh --topic test_perf --num-records 1000000 --record-size 1000  --throughput 5000 --producer-props bootstrap.servers=10.150.30.60:9092

1000w:

./kafka-producer-perf-test.sh --topic test_perf --num-records 10000000 --record-size 1000  --throughput 5000 --producer-props bootstrap.servers=10.150.30.60:9092

Test item: Consume MQ messages

10w:

./kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic test_perf --fetch-size 1048576 --messages 100000 --threads 1

100w:

./kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic test_perf --fetch-size 1048576 --messages 1000000 --threads 1

1000w:

./kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic test_perf --fetch-size 1048576 --messages 10000000 --threads 1

Script execution directory: the bin directory of the Kafka installation on the server.

 

3. Test Environment

3.1 Machine Configuration

Host                          | Count | Resources                                                                                  | OS
MQ message service/processing | 1     | Hardware: 1 CPU core, 4 GB RAM, 40 GB disk; Software: standalone Kafka (kafka_2.12-2.1.0) | ubuntu-16.04.5-server-amd64


3.2 Test Tools

Kafka stress-test tool: the performance-test scripts bundled with Kafka (kafka-producer-perf-test.sh / kafka-consumer-perf-test.sh)


3.3 Setting Up the Test Environment

Only a standalone Kafka is used here; to keep the setup quick, the ZooKeeper bundled with Kafka is used.

Create a working directory:

mkdir /opt/kafka_server_test

 

dockerfile

FROM ubuntu:16.04
# Switch the apt sources to the Aliyun mirror
ADD sources.list /etc/apt/sources.list
ADD kafka_2.12-2.1.0.tgz /
# Install the JDK
RUN apt-get update && apt-get install -y openjdk-8-jdk --allow-unauthenticated && apt-get clean all

EXPOSE 9092
# Add the startup script
ADD run.sh .
RUN chmod 755 run.sh
ENTRYPOINT ["/run.sh"]

 

run.sh

#!/bin/bash

# Start the bundled ZooKeeper
cd /kafka_2.12-2.1.0
bin/zookeeper-server-start.sh config/zookeeper.properties &

# Start Kafka
sleep 3
bin/kafka-server-start.sh config/server.properties

 

sources.list

deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu xenial-security main restricted
deb http://mirrors.aliyun.com/ubuntu xenial-security universe
deb http://mirrors.aliyun.com/ubuntu xenial-security multiverse

 

The directory layout is as follows:

./
├── dockerfile
├── kafka_2.12-2.1.0.tgz
├── run.sh
└── sources.list

 

Build the image:

docker build -t kafka_server_test /opt/kafka_server_test

 

Start Kafka:

docker run -d -it kafka_server_test
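
Optionally, before generating any load, you can check that the broker inside the container is reachable; a minimal sketch, where <container-id> is a placeholder for the id printed by docker ps:

# an empty topic list (and no error) indicates the broker and the bundled ZooKeeper are up
docker exec -it <container-id> /kafka_2.12-2.1.0/bin/kafka-topics.sh --zookeeper localhost:2181 --list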

 

4. Test Results

4.1 Notes on the Results

This round of testing stress-tests Kafka's MQ message-handling capability on a single server: a message-write stress test, focusing on whether the write latency meets the requirement, and a message-consumption stress test, verifying Kafka's message-processing capability.

4.2.1 Writing MQ Messages

Test item         | Total messages     | Record size (bytes) | Target msgs/sec | Actual msgs/sec written | 95% message latency (ms)
Write MQ messages | 100,000 (10w)      | 1000                | 2000            | 1999.84                 | 1
                  | 1,000,000 (100w)   | 1000                | 5000            | 4999.84                 | 1
                  | 10,000,000 (1000w) | 1000                | 5000            | 4999.99                 | 1


Stress-test output

The Kafka container was started above; check the running container:

root@ubuntu:/opt# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                            NAMES
5ced2eb77349        kafka_server_test        "/run.sh"           34 minutes ago      Up 34 minutes       0.0.0.0:2181->2181/tcp, 0.0.0.0:9092->9092/tcp   youthful_bhaskara

 

Enter Kafka's bin directory:

root@ubuntu:/opt# docker exec -it 5ced2eb77349 /bin/bash
root@5ced2eb77349:/# cd /kafka_2.12-2.1.0/
root@5ced2eb77349:/kafka_2.12-2.1.0# cd bin/

 

1. Results of writing 100,000 (10w) messages

Command:

./kafka-producer-perf-test.sh --topic test_perf --num-records 100000 --record-size 1000  --throughput 2000 --producer-props bootstrap.servers=localhost:9092

Output:

records sent, 1202.4 records/sec (1.15 MB/sec), 1678.8 ms avg latency, 2080.0 max latency.
records sent, 2771.8 records/sec (2.64 MB/sec), 1300.4 ms avg latency, 2344.0 max latency.
records sent, 2061.6 records/sec (1.97 MB/sec), 17.1 ms avg latency, 188.0 max latency.
records sent, 1976.6 records/sec (1.89 MB/sec), 10.0 ms avg latency, 177.0 max latency.
records sent, 2025.2 records/sec (1.93 MB/sec), 15.4 ms avg latency, 253.0 max latency.
records sent, 2000.8 records/sec (1.91 MB/sec), 6.1 ms avg latency, 163.0 max latency.
records sent, 1929.7 records/sec (1.84 MB/sec), 3.7 ms avg latency, 128.0 max latency.
records sent, 2072.0 records/sec (1.98 MB/sec), 14.1 ms avg latency, 163.0 max latency.
records sent, 2001.6 records/sec (1.91 MB/sec), 4.5 ms avg latency, 116.0 max latency.
records sent, 1997.602877 records/sec (1.91 MB/sec), 290.41 ms avg latency, 2344.00 ms max latency, 2 ms 50th, 1992 ms 95th, 2177 ms 99th, 2292 ms 99.9th.

 

2. Results of writing 1,000,000 (100w) messages

Command:

./kafka-producer-perf-test.sh --topic test_perf --num-records 1000000 --record-size 1000  --throughput 5000 --producer-props bootstrap.servers=localhost:9092

Output:

records sent, 2158.5 records/sec (2.06 MB/sec), 2134.9 ms avg latency, 2869.0 max latency.
records sent, 7868.4 records/sec (7.50 MB/sec), 1459.2 ms avg latency, 2815.0 max latency.
records sent, 4991.0 records/sec (4.76 MB/sec), 20.3 ms avg latency, 197.0 max latency.
records sent, 4972.3 records/sec (4.74 MB/sec), 61.8 ms avg latency, 395.0 max latency.
records sent, 4880.2 records/sec (4.65 MB/sec), 64.7 ms avg latency, 398.0 max latency.
records sent, 5085.9 records/sec (4.85 MB/sec), 17.7 ms avg latency, 180.0 max latency.
records sent, 5030.8 records/sec (4.80 MB/sec), 14.7 ms avg latency, 157.0 max latency.
records sent, 5056.0 records/sec (4.82 MB/sec), 1.4 ms avg latency, 58.0 max latency.
records sent, 5001.0 records/sec (4.77 MB/sec), 0.8 ms avg latency, 58.0 max latency.
records sent, 5002.0 records/sec (4.77 MB/sec), 0.6 ms avg latency, 25.0 max latency.
records sent, 5000.0 records/sec (4.77 MB/sec), 0.6 ms avg latency, 14.0 max latency.
records sent, 5002.0 records/sec (4.77 MB/sec), 0.6 ms avg latency, 19.0 max latency.
records sent, 5005.0 records/sec (4.77 MB/sec), 1.2 ms avg latency, 57.0 max latency.
records sent, 5003.0 records/sec (4.77 MB/sec), 1.3 ms avg latency, 55.0 max latency.
records sent, 5000.0 records/sec (4.77 MB/sec), 0.9 ms avg latency, 44.0 max latency.
records sent, 5003.0 records/sec (4.77 MB/sec), 0.6 ms avg latency, 49.0 max latency.
records sent, 4988.0 records/sec (4.76 MB/sec), 1.1 ms avg latency, 49.0 max latency.
records sent, 5014.0 records/sec (4.78 MB/sec), 0.8 ms avg latency, 44.0 max latency.
records sent, 5001.0 records/sec (4.77 MB/sec), 0.5 ms avg latency, 10.0 max latency.
records sent, 5009.8 records/sec (4.78 MB/sec), 0.5 ms avg latency, 25.0 max latency.
records sent, 5001.2 records/sec (4.77 MB/sec), 0.5 ms avg latency, 7.0 max latency.
records sent, 5002.0 records/sec (4.77 MB/sec), 0.5 ms avg latency, 49.0 max latency.
records sent, 5005.0 records/sec (4.77 MB/sec), 0.6 ms avg latency, 25.0 max latency.
records sent, 5006.0 records/sec (4.77 MB/sec), 0.5 ms avg latency, 14.0 max latency.
records sent, 5005.0 records/sec (4.77 MB/sec), 0.5 ms avg latency, 19.0 max latency.
records sent, 4976.1 records/sec (4.75 MB/sec), 0.6 ms avg latency, 14.0 max latency.
records sent, 5036.0 records/sec (4.80 MB/sec), 0.6 ms avg latency, 18.0 max latency.
records sent, 4999.8 records/sec (4.77 MB/sec), 0.5 ms avg latency, 14.0 max latency.
records sent, 4980.2 records/sec (4.75 MB/sec), 0.5 ms avg latency, 14.0 max latency.
records sent, 5026.0 records/sec (4.79 MB/sec), 0.5 ms avg latency, 14.0 max latency.
records sent, 5003.0 records/sec (4.77 MB/sec), 0.4 ms avg latency, 10.0 max latency.
records sent, 5000.0 records/sec (4.77 MB/sec), 0.5 ms avg latency, 16.0 max latency.
records sent, 5007.0 records/sec (4.78 MB/sec), 0.5 ms avg latency, 42.0 max latency.
records sent, 5001.0 records/sec (4.77 MB/sec), 0.5 ms avg latency, 24.0 max latency.
records sent, 5002.0 records/sec (4.77 MB/sec), 0.5 ms avg latency, 14.0 max latency.
records sent, 5009.0 records/sec (4.78 MB/sec), 0.5 ms avg latency, 10.0 max latency.
records sent, 5006.0 records/sec (4.77 MB/sec), 0.5 ms avg latency, 18.0 max latency.
records sent, 5001.0 records/sec (4.77 MB/sec), 0.4 ms avg latency, 6.0 max latency.
records sent, 5000.0 records/sec (4.77 MB/sec), 128.2 ms avg latency, 955.0 max latency.
records sent, 4999.375078 records/sec (4.77 MB/sec), 88.83 ms avg latency, 2869.00 ms max latency, 1 ms 50th, 327 ms 95th, 2593 ms 99th, 2838 ms 99.9th.

 

3. Results of writing 10,000,000 (1000w) messages

Command:

./kafka-producer-perf-test.sh --topic test_perf --num-records 10000000 --record-size 1000  --throughput 5000 --producer-props bootstrap.servers=localhost:9092

Output:

records sent, 1053.0 records/sec (1.00 MB/sec), 1952.7 ms avg latency, 3057.0 max latency.
records sent, 4173.8 records/sec (3.98 MB/sec), 4585.7 ms avg latency, 5256.0 max latency.
records sent, 9765.2 records/sec (9.31 MB/sec), 2621.9 ms avg latency, 4799.0 max latency.
...
records sent, 5000.8 records/sec (4.77 MB/sec), 0.6 ms avg latency, 79.0 max latency.
records sent, 4999.2 records/sec (4.77 MB/sec), 0.5 ms avg latency, 54.0 max latency.
records sent, 5003.0 records/sec (4.77 MB/sec), 0.5 ms avg latency, 19.0 max latency.
records sent, 4996.445029 records/sec (4.76 MB/sec), 310.11 ms avg latency, 22474.00 ms max latency, 1 ms 50th, 1237 ms 95th, 7188 ms 99th, 20824 ms 99.9th.

 

Parameters of kafka-producer-perf-test.sh (using the 100w write as an example):
--topic            topic name, test_perf in this example
--num-records      total number of messages to send, 1000000 in this example
--record-size      size of each record in bytes, 1000 in this example
--throughput       target number of records to send per second, 5000 in this example
--producer-props bootstrap.servers=localhost:9092   producer configuration; bootstrap.servers points at a Kafka broker (default port 9092). The broker address can be checked in Kafka's config directory (e.g. /usr/local/kafka/config in the referenced setup), in the listener settings of server.properties. This test uses one of the servers as the sender.
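
Additional producer settings can be appended to --producer-props as space-separated key=value pairs if you want to study their effect on throughput and latency; a sketch, where the acks and batch.size values are illustrative and were not part of this test:

# example only: tune producer acks and batching alongside the broker address
./kafka-producer-perf-test.sh --topic test_perf --num-records 1000000 --record-size 1000 --throughput 5000 --producer-props bootstrap.servers=localhost:9092 acks=1 batch.size=16384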

 

Interpreting the write-test results:

Taking the 100w write as an example: Kafka ingested an average of 4.77 MB of data per second, roughly 4999.375 messages per second, with an average write latency of 88.83 ms and a maximum latency of 2869 ms.
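
As a quick sanity check (my own arithmetic, not part of the original report), the MB/s figure follows directly from the record rate and record size:

# 4999.375078 records/s * 1000 bytes per record, expressed in MiB/s
awk 'BEGIN { printf "%.2f\n", 4999.375078 * 1000 / 1024 / 1024 }'   # prints 4.77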

 

4.2.2 Consuming MQ Messages

Test item           | Messages consumed  | Data consumed (MB) | Data consumed/sec (MB) | Messages consumed/sec | Elapsed time (s)
Consume MQ messages | 100,000 (10w)      | 95.36              | 137                    | 143899.3              | 0.695
                    | 1,000,000 (100w)   | 953.66             | 177.19                 | 185804.5              | 5.38
                    | 10,000,000 (1000w) | 9536.73            | 198.25                 | 207878.6              | 48.11


Stress-test output

1. Results of consuming 100,000 (10w) messages

./kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic test_perf --fetch-size 1048576 --messages 100000 --threads 1

Note: this script has no --zookeeper option; the referenced article is wrong on this point.

 

The 10w write test must be executed first; otherwise the command above fails at runtime with the error below:

[2018-12-06 05:47:52,832] WARN [Consumer clientId=consumer-1, groupId=perf-consumer-19548] Error while fetching metadata with correlation id 18 : {test_perf=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
WARNING: Exiting before consuming the expected number of messages: timeout (10000 ms) exceeded. You can use the --timeout option to increase the timeout.

 

Normal output:

start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2018-12-06 05:50:41:276, 2018-12-06 05:50:45:281, 95.3674, 23.8121, 100000, 24968.7890, 78, 3927, 24.2851, 25464.7313

 

2. Results of consuming 1,000,000 (100w) messages

./kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic test_perf --fetch-size 1048576 --messages 1000000 --threads 1

Output:

start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2018-12-06 05:59:32:360, 2018-12-06 05:59:51:624, 954.0758, 49.5264, 1000421, 51932.1532, 41, 19223, 49.6320, 52042.9173

 

3. Results of consuming 10,000,000 (1000w) messages

./kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic test_perf --fetch-size 1048576 --messages 10000000 --threads 1

Output:

start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2018-12-06 06:35:54:143, 2018-12-06 06:38:05:585, 9536.9539, 72.5564, 10000221, 76080.8646, 39, 131403, 72.5779, 76103.4451

 

Parameters of kafka-consumer-perf-test.sh:
--broker-list    Kafka connection string, localhost:9092 in this example
--topic          topic name, test_perf in this example, i.e. the messages written in 4.2.1
--fetch-size     size of each fetch request, 1048576 bytes (1 MB) in this example
--messages       total number of messages to consume, 1000000 (100w) in this example
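
Two optional flags can be handy when repeating runs; a sketch, where the group name is an illustrative placeholder and --timeout is the option referred to in the warning shown earlier:

./kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic test_perf --fetch-size 1048576 --messages 1000000 --threads 1 --group perf-consumer-rerun --timeout 30000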


Taking the 100w consumption run as an example: a total of 954.07 MB of data was consumed, at 49.52 MB per second; 1,000,421 messages were consumed in total, at 51,932.15 messages per second.
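
As a cross-check (again my own arithmetic), the per-second figures follow from the roughly 19.26 s elapsed between start.time and end.time in the output above:

# data rate and message rate over the ~19.264 s run
awk 'BEGIN { t = 19.264; printf "%.2f MB/s, %.1f msgs/s\n", 954.0758 / t, 1000421 / t }'   # ~49.53 MB/s, ~51932.1 msgs/s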

5. Analysis of Results

With the write rate set to 5000 messages per second, message latency stays at or below 1 ms, which is within the acceptable range and shows that writes are timely.

When consuming, if Kafka can work through a backlog of 1000w (10 million) messages at more than 200,000 messages per second, the result is considered ideal.


Based on Kafka's performance when handling messages at the 10w, 100w and 1000w levels, we can assess whether the Kafka cluster service is capable of handling messages at the hundred-million level.
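
For a rough back-of-the-envelope extrapolation (my own illustration, assuming throughput stays roughly flat at larger volumes), the 200,000 msgs/s consumption threshold above translates into the time needed to drain 100 million messages:

# 100,000,000 messages at 200,000 msgs/s
awk 'BEGIN { printf "%.0f s (about %.1f minutes)\n", 1e8 / 2e5, 1e8 / 2e5 / 60 }'   # 500 s (about 8.3 minutes)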


This test was run on a single server, so network bandwidth is essentially not a factor. The single-server results are therefore a useful reference for assessing whether the cluster service will meet the application's real needs after going live.

 

 

Reference:

https://blog.csdn.net/laofashi2015/article/details/81111466

 

