| Property | Default | Description |
|---|---|---|
| broker.id | | Required. The unique identifier of the broker. |
| log.dirs | /tmp/kafka-logs | The directories where Kafka stores its data. Multiple directories may be given, separated by commas; a newly created partition is placed in the directory that currently holds the fewest partitions. |
| port | 9092 | The port on which the broker accepts client connections. |
| zookeeper.connect | null | The ZooKeeper connection string, in the form hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts may be listed; for reliability, listing them all is recommended. Note that this setting also lets you specify a ZooKeeper path under which all data for this Kafka cluster is stored; to keep the cluster separate from other applications, specifying such a directory is recommended, in the form hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. Consumers must use the same connection string (including the chroot path). |
| message.max.bytes | 1000000 | The largest message size the server will accept. Note that this must match the consumer's maximum fetch size (fetch.message.max.bytes); otherwise messages larger than the consumer's limit cannot be consumed. |
| num.io.threads | 8 | The number of I/O threads the server uses to execute read and write requests. This should be at least the number of disks on the server. |
| queued.max.requests | 500 | The size of the request queue for the I/O threads. If the number of outstanding requests exceeds this value, the network threads stop accepting new requests. |
| socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUF buffer the server prefers for socket connections. |
| socket.receive.buffer.bytes | 100 * 1024 | The SO_RCVBUF buffer the server prefers for socket connections. |
| socket.request.max.bytes | 100 * 1024 * 1024 | The maximum request size the server will allow, to protect against out-of-memory errors. It should be smaller than the Java heap size. |
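The basics above can be sketched as a minimal server.properties fragment; the host names, directories, and chroot path below are placeholders, not values from this document:

```properties
# Required, unique per broker
broker.id=0
# Comma-separated data directories; new partitions go to the least-loaded one
log.dirs=/data/kafka-logs-1,/data/kafka-logs-2
port=9092
# List the full ensemble for reliability; use a chroot to isolate this cluster
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka
# Must not exceed the consumers' maximum fetch size
message.max.bytes=1000000
```

Consumers pointed at this cluster would need the same `zk1:2181,zk2:2181,zk3:2181/kafka` string, chroot included.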
| Property | Default | Description |
|---|---|---|
| num.partitions | 1 | The default number of partitions, used when a topic is created without an explicit partition count. Raising this to 5 is recommended. |
| log.segment.bytes | 1024 * 1024 * 1024 | The maximum size of a segment file; once exceeded, a new segment is created automatically. Can be overridden per topic. |
| log.roll.{ms,hours} | 24 * 7 hours | The maximum time before a new segment file is rolled, even if the size limit has not been reached. Can be overridden per topic. |
| log.retention.{ms,minutes,hours} | 7 days | How long Kafka retains segment logs; logs older than this are deleted. Can be overridden per topic. For high data volumes, reducing this value is recommended. |
| log.retention.bytes | -1 | The maximum amount of data retained per partition; when exceeded, old partition data is deleted. Note that this limit applies to each partition, not to the topic as a whole. Can be overridden per topic. |
| log.retention.check.interval.ms | 5 minutes | How often the retention policies are checked. |
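As a sketch of the log settings above, a broker that rolls 1 GiB segments and keeps three days of data might use the following; the specific sizes and retention period are illustrative, not recommendations from this document:

```properties
# Roll a new segment at 1 GiB (1024 * 1024 * 1024)
log.segment.bytes=1073741824
# Keep three days of data; shorter than the 7-day default for high volumes
log.retention.hours=72
# Also cap each partition at 10 GiB (applies per partition, not per topic)
log.retention.bytes=10737418240
# Check retention every five minutes
log.retention.check.interval.ms=300000
```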
| Property | Default | Description |
|---|---|---|
| auto.create.topics.enable | true | Whether topics are created automatically. Setting this to false is recommended, so that topic management stays under strict control and producers cannot write to a mistyped topic. |
| default.replication.factor | 1 | The default replication factor. Raising this to 2 is recommended. |
| replica.lag.time.max.ms | 10000 | If the leader receives no fetch request from a follower within this window, it removes the follower from the ISR (in-sync replicas). |
| replica.lag.max.messages | 4000 | If a replica falls behind the leader by more than this many messages, the leader removes it from the ISR. |
| replica.socket.timeout.ms | 30 * 1000 | The timeout for requests that replicas send to the leader. |
| replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests to the leader for replicating data. |
| replica.fetch.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader. |
| replica.fetch.wait.max.ms | 500 | The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas to the leader. |
| num.replica.fetchers | 1 | The number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker. |
| fetch.purgatory.purge.interval.requests | 1000 | The purge interval (in number of requests) of the fetch request purgatory. |
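Following the replication recommendations above, a possible fragment looks like this; the ISR thresholds shown are simply the defaults restated, kept explicit for documentation:

```properties
# Disable auto-creation so a mistyped topic name fails instead of creating a topic
auto.create.topics.enable=false
# Replicate every topic twice by default, per the recommendation above
default.replication.factor=2
# Followers silent for 10 s, or lagging by 4000 messages, drop out of the ISR
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
```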
| Property | Default | Description |
|---|---|---|
| zookeeper.session.timeout.ms | 6000 | The ZooKeeper session timeout. If the broker sends no heartbeat to ZooKeeper within this interval, ZooKeeper considers the node dead. Too low a value makes brokers easy to mark as dead; too high a value delays the detection of real failures. |
| zookeeper.connection.timeout.ms | 6000 | The timeout for a client to establish a connection to ZooKeeper. |
| zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower may lag behind the ZooKeeper leader. |
| Property | Default | Description |
|---|---|---|
| controlled.shutdown.enable | true | Enables controlled shutdown. If enabled, the broker migrates all of its leaders to other brokers before shutting itself down. Enabling this is recommended, as it improves cluster stability. |
| auto.leader.rebalance.enable | true | If this is enabled, the controller will automatically try to balance leadership for partitions among the brokers by periodically returning leadership to the "preferred" replica for each partition, if it is available. |
| leader.imbalance.per.broker.percentage | 10 | The percentage of leader imbalance allowed per broker. The controller will rebalance leadership if this ratio goes above the configured value for any broker. |
| leader.imbalance.check.interval.seconds | 300 | The frequency with which to check for leader imbalance. |
| offset.metadata.max.bytes | 4096 | The maximum amount of metadata clients may save with their offsets. |
| connections.max.idle.ms | 600000 | Idle connection timeout: the server's socket processor threads close connections that have been idle longer than this. |
| num.recovery.threads.per.data.dir | 1 | The number of threads per data directory used for log recovery at startup and for flushing at shutdown. |
| unclean.leader.election.enable | true | Whether a replica that is not in the ISR may be elected leader as a last resort, even though doing so may result in data loss. |
| delete.topic.enable | false | Enables topic deletion. Setting this to true is recommended. |
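Putting the stability-related recommendations above together, one possible fragment is the following; it is a sketch of the advice in this section, not a mandated configuration:

```properties
# Migrate partition leaders away before shutting down, as recommended above
controlled.shutdown.enable=true
auto.leader.rebalance.enable=true
# Prefer consistency over availability: never elect an out-of-sync replica
unclean.leader.election.enable=false
# Allow topics to be deleted, as recommended above
delete.topic.enable=true
```

Note that `unclean.leader.election.enable=false` flips the default: the table above keeps it `true`, which favors availability at the risk of data loss.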
| Property | Default | Description |
|---|---|---|
| offsets.topic.num.partitions | 50 | The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, a higher setting is recommended for production (e.g., 100-200). |
| offsets.topic.retention.minutes | 1440 | Offsets older than this age will be marked for deletion. The actual purge occurs when the log cleaner compacts the offsets topic. |
| offsets.retention.check.interval.ms | 600000 | The frequency at which the offset manager checks for stale offsets. |
| offsets.topic.replication.factor | 3 | The replication factor for the offset commit topic. A higher setting (e.g., three or four) is recommended to ensure higher availability. If the offsets topic is created when the cluster has fewer brokers than the replication factor, it will be created with fewer replicas. |
| offsets.topic.segment.bytes | 104857600 | The segment size for the offsets topic. Since it is a compacted topic, this should be kept relatively low to allow faster log compaction and loading. |
| offsets.load.buffer.size | 5242880 | An offset load occurs when a broker becomes the offset manager for a set of consumer groups (i.e., when it becomes leader for an offsets topic partition). This setting is the batch size (in bytes) used when reading the offsets segments into the offset manager's cache. |
| offsets.commit.required.acks | -1 | The number of acknowledgements required before an offset commit is accepted. This is similar to the producer's acknowledgement setting. In general, the default should not be overridden. |
| offsets.commit.timeout.ms | 5000 | An offset commit is delayed until the required number of replicas have received it, or until this timeout expires. This is similar to the producer request timeout. |
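The production guidance for the offsets topic above could be sketched as follows; the partition count of 100 is one point in the suggested 100-200 range, chosen for illustration:

```properties
# Cannot be changed after deployment, so size generously up front
offsets.topic.num.partitions=100
# Three replicas for availability; requires at least three brokers at creation
offsets.topic.replication.factor=3
# Keep segments small so compaction of the offsets topic stays fast
offsets.topic.segment.bytes=104857600
```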