Migrating topic partitions after expanding a Kafka cluster


After a Kafka cluster is expanded, no data moves to the new brokers on its own; they sit idle and only take on work when a new topic is created, unless existing partitions are migrated onto them.
So some topic partitions need to be moved to the new brokers.

kafka-reassign-partitions.sh is the tool Kafka provides for reassigning partitions and replicas across brokers.
A basic reassignment takes three steps:

  • Generate a reassignment plan (generate)
  • Execute the reassignment (execute)
  • Check the reassignment status (verify)

The concrete steps are as follows:

1. Generate the reassignment plan

Create the input file listing the topics to move:
vi topics-to-move.json

with the following content:

{"topics": [{"topic":"event_request"}], "version": 1 }

Run the plan-generation command:

kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --topics-to-move-json-file topics-to-move.json --broker-list "5,6,7,8" --generate

The output looks like this:

[hadoop@sdf-nimbus-perf topic_reassgin]$ kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --topics-to-move-json-file topics-to-move.json --broker-list "5,6,7,8" --generate
Current partition replica assignment    # the current replica assignment
{"version":1,"partitions":[{"topic":"event_request","partition":0,"replicas":[3,4]},{"topic":"event_request","partition":1,"replicas":[4,5]}]}
Proposed partition reassignment configuration    # the proposed reassignment
{"version":1,"partitions":[{"topic":"event_request","partition":0,"replicas":[6,5]},{"topic":"event_request","partition":1,"replicas":[7,6]}]}

The JSON printed after "Proposed partition reassignment configuration" is the reassignment plan generated for the broker list given on the command line. Copy that proposed configuration into a file named topic-reassignment.json:
vi topic-reassignment.json

{"version":1,"partitions":[{"topic":"event_request","partition":0,"replicas":[6,5]},{"topic":"event_request","partition":1,"replicas":[7,6]}]}

2. Execute the reassignment (execute)

Run the reassignment using the plan file topic-reassignment.json generated in step 1.

kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --reassignment-json-file topic-reassignment.json --execute
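An optional refinement, assuming Kafka 0.10.1 or later: kafka-reassign-partitions.sh accepts a --throttle option (bytes/sec) that caps inter-broker replication traffic during the move, so a large migration does not starve normal clients. The throttle value below is only an example, not a recommendation:

```shell
# Sketch: throttled execute (Kafka 0.10.1+); 50000000 B/s is an example value.
kafka-reassign-partitions.sh --zookeeper "$ZK_CONNECT" \
  --reassignment-json-file topic-reassignment.json \
  --execute --throttle 50000000
# Note: run --verify after completion; it also removes the throttle config.
```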

Partition distribution before executing:

[hadoop@sdf-nimbus-perf topic_reassgin]$ le-kafka-topics.sh --describe --topic event_request
Topic:event_request    PartitionCount:2    ReplicationFactor:2    Configs:
    Topic: event_request    Partition: 0    Leader: 3    Replicas: 3,4    Isr: 3,4
    Topic: event_request    Partition: 1    Leader: 4    Replicas: 4,5    Isr: 4,5

Partition distribution right after executing (the migration is still under way at this point):

[hadoop@sdf-nimbus-perf topic_reassgin]$ le-kafka-topics.sh --describe --topic event_request
Topic:event_request    PartitionCount:2    ReplicationFactor:4    Configs:
    Topic: event_request    Partition: 0    Leader: 3    Replicas: 6,5,3,4    Isr: 3,4
    Topic: event_request    Partition: 1    Leader: 4    Replicas: 7,6,4,5    Isr: 4,5

3. Check the reassignment status

Check the status: still in progress

[hadoop@sdf-nimbus-perf topic_reassgin]$ kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --reassignment-json-file topic-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [event_request,0] is still in progress
Reassignment of partition [event_request,1] is still in progress
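Rather than rerunning --verify by hand, the check can be polled until nothing is left in progress. A minimal sketch, written as a shell function that takes the verify command as its arguments:

```shell
# Sketch: poll a verify command until its output no longer says
# "still in progress". Intended usage with the real tool (assumes
# $ZK_CONNECT is set as in the commands above):
#   wait_for_reassignment kafka-reassign-partitions.sh --zookeeper "$ZK_CONNECT" \
#       --reassignment-json-file topic-reassignment.json --verify
wait_for_reassignment() {
  while "$@" | grep -q "still in progress"; do
    # At least one partition is still moving; wait and re-check.
    sleep 10
  done
  echo "reassignment completed"
}
```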

The partition and replica distribution while the status is "is still in progress":

Note that Replicas now lists four brokers: during reassignment both the old and the new replicas are in service.

[hadoop@sdf-nimbus-perf topic_reassgin]$ le-kafka-topics.sh --describe --topic event_request
Topic:event_request    PartitionCount:2    ReplicationFactor:4    Configs:
    Topic: event_request    Partition: 0    Leader: 3    Replicas: 6,5,3,4    Isr: 3,4
    Topic: event_request    Partition: 1    Leader: 4    Replicas: 7,6,4,5    Isr: 4,5

Check the status again: reassignment completed.

[hadoop@sdf-nimbus-perf topic_reassgin]$ kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --reassignment-json-file topic-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [event_request,0] completed successfully
Reassignment of partition [event_request,1] completed successfully

The partition and replica state once the status is "completed successfully":

The partitions have been reassigned exactly according to the generated plan.

[hadoop@sdf-nimbus-perf topic_reassgin]$ le-kafka-topics.sh --describe --topic event_request
Topic:event_request    PartitionCount:2    ReplicationFactor:2    Configs:
    Topic: event_request    Partition: 0    Leader: 6    Replicas: 6,5    Isr: 6,5
    Topic: event_request    Partition: 1    Leader: 7    Replicas: 7,6    Isr: 6,7

