Reference link: Apache Kafka series: storage structure in ZooKeeper http://blog.csdn.net/lizhitao/article/details/23744675
1. Topic registration info
/brokers/topics/[topic] :
Stores the full partition assignment of a topic.
Schema:
{
  "version": version number, currently fixed at 1,
  "partitions": {
    "partitionId": [ brokerId list of the replica set ],
    "partitionId": [ brokerId list of the replica set ],
    ...
  }
}
Example:
{
  "version": 1,
  "partitions": { "0": [1, 2], "1": [2, 1], "2": [1, 2] }
}
Note: the object keys are partition ids; the values are the brokerId lists of the corresponding replica sets.
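The layout above can be checked by parsing the znode payload. A minimal Python sketch, using the example payload from above:

```python
import json

# Payload of /brokers/topics/[topic], as in the example above.
payload = '{"version": 1, "partitions": {"0": [1, 2], "1": [2, 1], "2": [1, 2]}}'

data = json.loads(payload)
# Map partition id -> replica brokerId list.
assignment = {int(p): brokers for p, brokers in data["partitions"].items()}
# The replication factor is the length of any replica list.
replication_factor = len(assignment[0])
```

Reading the znode itself would require a ZooKeeper client; only the payload format is shown here.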
2. Partition state info
/brokers/topics/[topic]/partitions/[0...N]  where [0...N] is the partition index
/brokers/topics/[topic]/partitions/[partitionId]/state
Schema:
{
  "controller_epoch": number of central-controller elections in the cluster,
  "leader": brokerId of the elected leader of this partition,
  "version": version number, defaults to 1,
  "leader_epoch": number of leader elections for this partition,
  "isr": [ brokerId list of the in-sync replica set ]
}
Example:
{
  "controller_epoch": 1,
  "leader": 2,
  "version": 1,
  "leader_epoch": 0,
  "isr": [2, 1]
}
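The state payload can be decoded the same way. A small sketch that parses the example state above and checks that the leader is a member of the in-sync replica set:

```python
import json

# Payload of /brokers/topics/[topic]/partitions/[partitionId]/state (example above).
state_json = '{"controller_epoch": 1, "leader": 2, "version": 1, "leader_epoch": 0, "isr": [2, 1]}'

state = json.loads(state_json)
leader = state["leader"]
# A healthy partition has its leader inside the ISR list.
leader_in_sync = leader in state["isr"]
```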
3. Broker registration info
/brokers/ids/[0...N]
Each broker's config file must specify a numeric id that is unique across the cluster. This node is an ephemeral znode (EPHEMERAL).
Schema:
{
  "jmx_port": JMX port number,
  "timestamp": timestamp of the broker's initial startup,
  "host": hostname or IP address,
  "version": version number, defaults to 1,
  "port": the broker's service port, taken from the port parameter in server.properties
}
Example:
{
"jmx_port": 6061,
"timestamp": "1403061899859",
"version": 1,
"host": "192.168.1.148",
"port": 9092
}
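A sketch of building such a registration payload. The helper name and the field values passed in are illustrative, not part of Kafka's API; the broker writes an equivalent JSON string to its ephemeral znode:

```python
import json
import time

def broker_registration(host, port, jmx_port, version=1):
    """Build the JSON payload stored at the ephemeral /brokers/ids/[id] znode."""
    return json.dumps({
        "jmx_port": jmx_port,
        # Broker startup time in milliseconds, stored as a string.
        "timestamp": str(int(time.time() * 1000)),
        "version": version,
        "host": host,
        "port": port,
    })

payload = broker_registration("192.168.1.148", 9092, 6061)
```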
4. Controller epoch:
/controller_epoch -> int (epoch)
This value is a number. It is 1 when the first broker in the cluster starts for the first time. Whenever the broker hosting the central controller changes or dies, a new central controller is elected, and controller_epoch is incremented by 1 on every controller change.
5. Controller registration info:
/controller -> int (broker id of the controller)
Stores information about the broker on which the central controller currently runs.
Schema:
{
  "version": version number, defaults to 1,
  "brokerid": the unique id of the broker within the cluster,
  "timestamp": timestamp of the controller change
}
Example:
{
"version": 1,
"brokerid": 3,
"timestamp": "1403061802981"
}
Consumer rebalancing algorithm
When a consumer joins or leaves a group, a partition rebalance is triggered. The ultimate goal of rebalancing is to maximize the concurrency of topic consumption.
1) Suppose topic1 has the partitions: P0, P1, P2, P3
2) and the group contains the consumers: C0, C1
3) First sort the partitions by partition index: P0, P1, P2, P3
4) Sort the consumer threads by (consumer.id + '-' + thread index): C0, C1
5) Compute the multiplier: M = [P0, P1, P2, P3].size / [C0, C1].size, rounded up; in this example M = 2
6) Then assign partitions in order: C0 = [P0, P1], C1 = [P2, P3], i.e. Ci = [P(i * M), P((i + 1) * M - 1)]
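The steps above can be sketched directly in Python; the partition and consumer names are the ones from the example:

```python
import math

def assign(partitions, consumers):
    """Range assignment as described in steps 5-6 above."""
    # Step 5: M = ceil(number of partitions / number of consumer threads).
    m = math.ceil(len(partitions) / len(consumers))
    # Step 6: consumer i gets the slice [i*M, (i+1)*M - 1].
    return {c: partitions[i * m:(i + 1) * m] for i, c in enumerate(consumers)}

result = assign(["P0", "P1", "P2", "P3"], ["C0", "C1"])
```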
a. When a consumer client is created, it registers its own information with ZooKeeper;
b. the main purpose of this registration is load balancing.
c. Among the consumers of one Consumer Group, Kafka delivers each message of the subscribed topic to exactly one consumer.
d. Each consumer in a Consumer Group reads one or more partitions of the topic and is the sole consumer of those partitions;
e. the threads of all consumers in a group consume the partitions of a topic in order; if the total number of consumer threads in the group exceeds the number of partitions, some threads will be idle.
Example:
Create a topic named report-log in the cluster with 4 partitions, indexed 0, 1, 2, 3.
Suppose there are currently three consumer nodes. Note: one consumer thread can consume one or more partitions.
If each consumer creates one consumer thread, the nodes consume as follows: node1 consumes partitions 0 and 1, node2 consumes partition 2, and node3 consumes partition 3.
If each consumer creates 2 consumer threads, the nodes consume as follows (determined by the startup order of the consumer nodes): node1 consumes partitions 0 and 1; node2 consumes partitions 2 and 3; node3 stays idle.
Summary:
As shown above, the consumers in a Consumer Group consume all partitions of a topic in the order in which the consumers started.
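Note that the ceiling formula in step 5 is a simplification: when the division is uneven it over-allocates to the first consumers. Kafka's actual range assignor gives each thread floor(P/T) partitions plus one extra to the first P mod T threads, which is what reproduces the node assignments in the report-log example. A sketch under that assumption (thread names like "node1-0" are illustrative):

```python
def range_assign(num_partitions, threads):
    """Kafka-style range assignment: threads sorted by (consumer id, thread index);
    each gets floor(P/T) partitions, and the first P mod T threads get one extra."""
    per, extra = divmod(num_partitions, len(threads))
    assignment, start = {}, 0
    for i, t in enumerate(sorted(threads)):
        n = per + (1 if i < extra else 0)
        assignment[t] = list(range(start, start + n))
        start += n
    return assignment

# report-log: 4 partitions, one consumer thread per node.
one_thread = range_assign(4, ["node1-0", "node2-0", "node3-0"])
# report-log: 4 partitions, two consumer threads per node; node3 ends up idle.
two_threads = range_assign(4, ["node1-0", "node1-1", "node2-0", "node2-1", "node3-0", "node3-1"])
```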
6. Consumer registration info:
Each consumer has a unique id (consumerId can be set in the config file or generated by the system); this id identifies the consumer.
/consumers/[groupId]/ids/[consumerIdString]
This is an ephemeral znode. The node name follows the consumerIdString generation rule below, and the node's value records the topics this consumer currently subscribes to, together with the number of consumer threads per topic.
consumerId generation rule:
String consumerUuid;
if (config.consumerId != null) {
    consumerUuid = config.consumerId;
} else {
    UUID uuid = UUID.randomUUID();
    consumerUuid = String.format("%s-%d-%s",
            InetAddress.getLocalHost().getHostName(),
            System.currentTimeMillis(),
            Long.toHexString(uuid.getMostSignificantBits()).substring(0, 8));
}
String consumerIdString = config.groupId + "_" + consumerUuid;
Schema:
{
  "version": version number, defaults to 1,
  "subscription": {               // list of subscribed topics
    "topic name": number of consumer threads for this topic
  },
  "pattern": "static",
  "timestamp": "timestamp of consumer startup"
}
Example:
{
  "version": 1,
  "subscription": { "open_platform_opt_push_plus1": 5 },
  "pattern": "static",
  "timestamp": "1411294187842"
}
7. Consumer owner:
/consumers/[groupId]/owners/[topic]/[partitionId] -> consumerIdString + threadId (thread index)
When a consumer starts, it triggers the following actions:
a) First it performs the "consumer id registration";
b) then it registers a watch under the "consumer id registration" node to listen for other consumers in the group "leaving" or "joining"; any change to the child list of this znode path triggers a rebalance of the consumers in this group (for example, when a consumer fails, the other consumers take over its partitions);
c) it registers a watch under the "broker id registration" node to monitor broker liveness; any change to the broker list triggers a rebalance of the consumers in all groups.
8. Consumer offset:
/consumers/[groupId]/offsets/[topic]/[partitionId] -> long (offset)
Tracks the largest offset the consumer group has consumed in each partition.
This znode is persistent. Note that the offset is keyed by group_id, so when a consumer in the group fails and a rebalance is triggered, another consumer can continue consuming from where it left off.
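A sketch of how such an offset znode decomposes; the group name and the offset value below are hypothetical, only the path layout is from the text:

```python
# Path layout: /consumers/[groupId]/offsets/[topic]/[partitionId] -> long (offset)
path = "/consumers/report-group/offsets/report-log/2"   # hypothetical group
value = "1523"                                          # hypothetical znode value

# Splitting on "/" yields: '', 'consumers', groupId, 'offsets', topic, partitionId.
_, _, group_id, _, topic, partition = path.split("/")
offset = int(value)
```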
9. Re-assign partitions
/admin/reassign_partitions
Schema:
{
  "fields": [
    { "name": "version", "type": "int", "doc": "version id" },
    {
      "name": "partitions",
      "type": {
        "type": "array",
        "items": {
          "fields": [
            { "name": "topic", "type": "string", "doc": "topic of the partition to be reassigned" },
            { "name": "partition", "type": "int", "doc": "the partition to be reassigned" },
            { "name": "replicas", "type": "array", "items": "int", "doc": "a list of replica ids" }
          ]
        },
        "doc": "an array of partitions to be reassigned to new replicas"
      }
    }
  ]
}
Example:
{
  "version": 1,
  "partitions": [
    { "topic": "Foo", "partition": 1, "replicas": [0, 1, 3] }
  ]
}
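A reassignment payload like the example can be built and round-tripped as plain JSON; the topic name and replica list are the ones from the example:

```python
import json

reassignment = {
    "version": 1,
    "partitions": [
        # Move topic Foo, partition 1, onto brokers 0, 1 and 3.
        {"topic": "Foo", "partition": 1, "replicas": [0, 1, 3]},
    ],
}

# This JSON string is what would be written to /admin/reassign_partitions.
payload = json.dumps(reassignment)
```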
10. Preferred replica election
/admin/preferred_replica_election
Schema:
{
  "fields": [
    { "name": "version", "type": "int", "doc": "version id" },
    {
      "name": "partitions",
      "type": {
        "type": "array",
        "items": {
          "fields": [
            { "name": "topic", "type": "string", "doc": "topic of the partition for which preferred replica election should be triggered" },
            { "name": "partition", "type": "int", "doc": "the partition for which preferred replica election should be triggered" }
          ]
        },
        "doc": "an array of partitions for which preferred replica election should be triggered"
      }
    }
  ]
}
Example:
{
"version": 1,
"partitions":
[
{
"topic": "Foo",
"partition": 1
},
{
"topic": "Bar",
"partition": 0
}
]
}
11. Delete topics
/admin/delete_topics
Schema:
{ "fields":
[ {"name": "version", "type": "int", "doc": "version id"},
{"name": "topics",
"type": { "type": "array", "items": "string", "doc": "an array of topics to be deleted"}
} ]
}
Example:
{
"version": 1,
"topics": ["foo", "bar"]
}
Topic configuration
/config/topics/[topic_name]
Example:
{
"version": 1,
"config": {
"config.a": "x",
"config.b": "y",
...
}
}