(2) Dynamically increasing a Kafka topic's replicas (replication factor)
1. Check the topic's current replica assignment
[hadoop@sdf-nimbus-perf ~]$ le-kafka-topics.sh --describe --topic http_zhixin_line1
Topic:http_zhixin_line1 PartitionCount:3 ReplicationFactor:1 Configs:
Topic: http_zhixin_line1 Partition: 0 Leader: 7 Replicas: 7 Isr: 7
Topic: http_zhixin_line1 Partition: 1 Leader: 8 Replicas: 8 Isr: 8
Topic: http_zhixin_line1 Partition: 2 Leader: 9 Replicas: 9 Isr: 9
2. Write the JSON file describing the new replica assignment
vi addReplicas.json
{
  "version": 1,
  "partitions": [
    {"topic": "http_zhixin_line1", "partition": 0, "replicas": [7, 1, 2]},
    {"topic": "http_zhixin_line1", "partition": 1, "replicas": [8, 2, 3]},
    {"topic": "http_zhixin_line1", "partition": 2, "replicas": [9, 3, 4]}
  ]
}
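The same file can be generated and syntax-checked from the shell instead of hand-edited; a minimal sketch (python3 is an assumption here, used only to validate the JSON). Note that each partition's current leader (7, 8, 9) stays first in its replica list, so leadership does not move during the reassignment:

```shell
# Generate addReplicas.json; the current leader stays first in each replica list.
{
  printf '%s\n' '{"version":1,"partitions":['
  printf '%s\n' '  {"topic":"http_zhixin_line1","partition":0,"replicas":[7,1,2]},'
  printf '%s\n' '  {"topic":"http_zhixin_line1","partition":1,"replicas":[8,2,3]},'
  printf '%s\n' '  {"topic":"http_zhixin_line1","partition":2,"replicas":[9,3,4]}'
  printf '%s\n' ']}'
} > addReplicas.json
# Fail fast on malformed JSON before handing the file to the reassignment tool.
python3 -m json.tool addReplicas.json > /dev/null && echo "addReplicas.json: valid JSON"
```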
3. Execute the reassignment to add the replicas
[hadoop@sdf-nimbus-perf topic_reassgin]$ kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --reassignment-json-file addReplicas.json --execute
Current partition replica assignment
{"version":1,"partitions":[{"topic":"http_zhixin_line1","partition":2,"replicas":[9]},{"topic":"http_zhixin_line1","partition":1,"replicas":[8]},{"topic":"http_zhixin_line1","partition":0,"replicas":[7]}]}
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions {"version":1,"partitions":[{"topic":"http_zhixin_line1","partition":0,"replicas":[7,1,2]},{"topic":"http_zhixin_line1","partition":1,"replicas":[8,2,3]},{"topic":"http_zhixin_line1","partition":2,"replicas":[9,3,4]}]}
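When the new replicas have a lot of history to copy, the sync traffic can saturate inter-broker bandwidth. The reassignment tool (Kafka 0.10.1+) accepts a `--throttle` option in bytes per second; the 50 MB/s value and the command-builder wrapper below are illustrative assumptions, not part of the original session:

```shell
# Build (without running) a throttled variant of the reassignment command.
# The throttle value and this helper function are assumptions for illustration.
build_reassign_cmd() {
  json_file=$1
  throttle=$2   # bytes per second
  printf '%s' "kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT"
  printf '%s' " --reassignment-json-file $json_file"
  printf '%s\n' " --throttle $throttle --execute"
}
build_reassign_cmd addReplicas.json 50000000
```

Running the tool with `--verify` afterwards both reports progress and removes the throttle configuration, so the throttle should not be left in place once the move completes.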
4. Check the status of the reassignment
[hadoop@sdf-nimbus-perf topic_reassgin]$ kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --reassignment-json-file addReplicas.json --verify
Status of partition reassignment:
Reassignment of partition [http_zhixin_line1,0] completed successfully
Reassignment of partition [http_zhixin_line1,1] completed successfully
Reassignment of partition [http_zhixin_line1,2] completed successfully
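For large partitions the sync can take a while, so rather than rerunning `--verify` by hand, the check can be looped until no partition is still in progress. A minimal sketch: the `KAFKA_REASSIGN` variable is an assumption added so the loop can be pointed at a stand-in for testing, and `in progress` is the status string the tool prints for unfinished partitions:

```shell
# Poll --verify every few seconds until no partition is still "in progress".
# KAFKA_REASSIGN defaults to the real tool; overriding it is only for testing.
wait_for_reassignment() {
  json_file=$1
  while :; do
    out=$("${KAFKA_REASSIGN:-kafka-reassign-partitions.sh}" \
      --zookeeper "$ZK_CONNECT" \
      --reassignment-json-file "$json_file" --verify)
    echo "$out"
    # Lines containing "in progress" mean replicas are still copying.
    echo "$out" | grep -q "in progress" || break
    sleep 5
  done
}
# Usage: wait_for_reassignment addReplicas.json
```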
5. Observe the data sync in the log directories and the impact on producers and consumers
Watch the data sync for partition 0: its two added replicas were placed on brokers 1 and 2.
Log in to the servers hosting broker 1 and broker 2,
cd into the replica data directory, and watch the log files grow:
ls /data/hadoop/data*/kafka/log
Log segment files appear in the log directories on both broker 1 and broker 2, and the two replica copies finish syncing at the same time.
Producers: no impact observed.
Consumers: no impact observed.
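Beyond watching the log directories, the claim that the new replicas are healthy can be checked from the `--describe` output itself: every broker listed under `Replicas` should also appear under `Isr`. A sketch that flags under-replicated partitions, with field positions assumed to match the `--describe` layout shown in step 1:

```shell
# Read `--describe` output on stdin and flag partitions whose ISR is smaller
# than the replica list; exits non-zero if any partition is under-replicated.
check_isr() {
  awk '/Partition:/ {
    # Count the comma-separated broker IDs in the Replicas and Isr fields.
    for (i = 1; i <= NF; i++) {
      if ($i == "Replicas:") r = split($(i + 1), a, ",")
      if ($i == "Isr:")      s = split($(i + 1), b, ",")
    }
    if (s < r) { print "under-replicated:", $0; bad = 1 }
  }
  END { exit bad }'
}
# Usage: le-kafka-topics.sh --describe --topic http_zhixin_line1 | check_isr
```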