Setting the Preferred Leader in Kafka


https://www.pianshen.com/article/12231023160/

After a broker restarts, anomalies can appear; for example, Preferred Leader may change from true to false.

When a topic is created, Kafka tries to spread its partitions evenly across all brokers, and likewise spreads the replicas of each partition across different brokers.

All replicas of a partition are called the "assigned replicas", and the first replica in the assigned replicas list is the "preferred replica". For a freshly created topic, the preferred replica is normally the leader. The leader replica serves all reads and writes.

As time goes on, however, brokers go down and leadership migrates, leaving the cluster's load unbalanced. We want to rebalance the topic's leaders so that each partition elects its preferred replica as leader.
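Before rebalancing anything, the current assignment can be inspected with the stock kafka-topics.sh tool. A minimal sketch, assuming the same install path and ZooKeeper chroot used later in this post (the sample output line is illustrative only):

# /usr/local/xyhadoop/kafka/bin/kafka-topics.sh --zookeeper localhost:2181/kafka --describe --topic data_report_h5_merged_app
# For each partition, compare "Leader" with the first broker id in "Replicas".
# An illustrative line like "Partition: 3  Leader: 3  Replicas: 1,3,4" means the
# preferred replica (broker 1) is not the current leader.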

 

We monitor Kafka with kafka-eagle, and the broker hosting partition 3 restarted abnormally. In the screenshot, partition 3's Preferred Leader is false: its Replicas list is [1,3,4] and its leader is broker 3. Because the leader (replica 3) is not the first replica in the Replicas list (replica 1), Preferred Leader is false.

There are two ways to restore Preferred Leader to true:

  • Method 1: move the leader back to replica 1 (this is what a preferred replica election does; see Case 2 below)
  • Method 2: reorder the Replicas list to [3,1,4]

Let's try method 2. First, describe the desired replica order for the partition in a JSON file:

# cat move.json
{
    "partitions": [
    {
        "topic": "data_report_h5_merged_app",
        "partition": 3,
        "replicas": [
            3,
            1,
            4
        ]
    }]
}

Then run the reassignment:

# /usr/local/xyhadoop/kafka/bin/kafka-reassign-partitions.sh --zookeeper localhost:2181/kafka --reassignment-json-file move.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"data_report_h5_merged_app","partition":2,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":4,"replicas":[3,4,5],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":1,"replicas":[5,1,2],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":5,"replicas":[4,1,2],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":7,"replicas":[1,3,4],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":0,"replicas":[4,5,1],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":3,"replicas":[1,3,4],"log_dirs":["any","any","any"]},{"topic":"data_report_h5_merged_app","partition":6,"replicas":[5,2,3],"log_dirs":["any","any","any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.
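The progress of the reassignment can then be checked with the same tool's --verify mode, reusing the same JSON file and ZooKeeper address (a minimal sketch; it reports a completed or in-progress status for each partition listed in the file):

# /usr/local/xyhadoop/kafka/bin/kafka-reassign-partitions.sh --zookeeper localhost:2181/kafka --reassignment-json-file move.json --verify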

Checking kafka-eagle again, Preferred Leader has been restored to true.

 

Case 2: preferred replica election

Run the election for all topics:

./bin/kafka-preferred-replica-election.sh --zookeeper hadoop16:2181,hadoop17:2181,hadoop18:2181/kafka08

 

Run the election for a single topic:

[sankuai@data-kafka01 kafka]$ cat topicPartitionList.json
{
  "partitions":
    [
      {"topic":"test.example","partition": 0}
    ]
}

 

./bin/kafka-preferred-replica-election.sh --zookeeper hadoop16:2181,hadoop17:2181,hadoop18:2181/kafka08 --path-to-json-file topicPartitionList.json
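Note that on Kafka 2.4 and later, kafka-preferred-replica-election.sh has been superseded by kafka-leader-election.sh, and brokers can also restore preferred leaders on their own. A sketch assuming a 2.4+ cluster (the bootstrap address is a placeholder):

./bin/kafka-leader-election.sh --bootstrap-server localhost:9092 --election-type preferred --all-topic-partitions

# Relevant broker settings in server.properties (defaults shown); with these,
# the controller periodically moves leadership back to the preferred replica:
auto.leader.rebalance.enable=true
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10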

