1. Download
Download URL:
http://kafka.apache.org/downloads.html. The version used here is kafka_2.11-1.1.0.tgz, the build for Scala 2.11.
2. Kafka Installation
Cluster plan:
IP | Node Name | Kafka | Zookeeper | Jdk | Scala |
192.168.100.21 | node21 | Kafka | Zookeeper | Jdk | Scala |
192.168.100.22 | node22 | Kafka | Zookeeper | Jdk | Scala |
192.168.100.23 | node23 | Kafka | Zookeeper | Jdk | Scala |
For ZooKeeper cluster installation, see: Building a ZooKeeper 3.4.12 Cluster on CentOS 7.5 and Command-Line Operations
2.1 Upload and Extract
[admin@node21 software]$ tar zxvf kafka_2.11-1.1.0.tgz -C /opt/module/
2.2 Create the Log Directory
[admin@node21 software]$ cd /opt/module/kafka_2.11-1.1.0
[admin@node21 kafka_2.11-1.1.0]$ mkdir logs
2.3 Modify the Configuration File
Go to the Kafka configuration directory:
[admin@node21 kafka_2.11-1.1.0]$ cd config/
[admin@node21 config]$ vi server.properties
The only file we really need to care about is server.properties. You will notice that the config directory also contains ZooKeeper property files; Kafka can be started with its bundled ZooKeeper, but an independent ZooKeeper cluster is recommended.
server.properties (broker.id and listeners must differ on every node):
# Whether topics may be deleted; the default is false, in which case topics cannot be deleted manually
delete.topic.enable=true
# Unique ID of this broker in the cluster, same idea as ZooKeeper's myid
broker.id=0
# Address and port this Kafka broker listens on; the default port is 9092
listeners=PLAINTEXT://192.168.100.21:9092
# Number of threads the broker uses for network processing
num.network.threads=3
# Number of threads the broker uses for I/O processing
num.io.threads=8
# Send buffer size; data is buffered until it reaches this size before being sent, which improves performance
socket.send.buffer.bytes=102400
# Receive buffer size; data is buffered until it reaches this size before being written to disk
socket.receive.buffer.bytes=102400
# Maximum size of a request sent to or received from Kafka; it must not exceed the Java heap size
socket.request.max.bytes=104857600
# Directory where message logs are stored
log.dirs=/opt/module/kafka_2.11-1.1.0/logs
# Default number of partitions; a topic gets 1 partition by default
num.partitions=1
# Number of threads per data directory used for log recovery
num.recovery.threads.per.data.dir=1
# Default maximum retention time for messages: 168 hours, i.e. 7 days
log.retention.hours=168
# Kafka appends messages to segment files; when a segment exceeds this size, a new file is rolled
log.segment.bytes=1073741824
# Check the retention settings above every 300000 ms
log.retention.check.interval.ms=300000
# Whether to enable log cleaning; normally it is left disabled, though enabling it can improve performance
log.cleaner.enable=false
# ZooKeeper connection string
zookeeper.connect=node21:2181,node22:2181,node23:2181
# ZooKeeper connection timeout
zookeeper.connection.timeout.ms=6000
2.4 Distribute the Installation to the Other Nodes
[admin@node21 module]$ scp -r kafka_2.11-1.1.0 admin@node22:/opt/module/
[admin@node21 module]$ scp -r kafka_2.11-1.1.0 admin@node23:/opt/module/
On node22 and node23, edit config/server.properties and change the values of broker.id and listeners, for example as shown below.
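The other two nodes would then end up with overrides along these lines (the broker.id values 1 and 2 are illustrative; any unique non-negative integers work):
# node22: config/server.properties
broker.id=1
listeners=PLAINTEXT://192.168.100.22:9092
# node23: config/server.properties
broker.id=2
listeners=PLAINTEXT://192.168.100.23:9092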
2.5 Add Environment Variables
[admin@node21 module]$ vi /etc/profile
#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka_2.11-1.1.0
export PATH=$PATH:$KAFKA_HOME/bin
Save the file, then reload it so the changes take effect immediately:
[admin@node21 module]$ source /etc/profile
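A quick sanity check that the variable is now visible:
[admin@node21 module]$ echo $KAFKA_HOME
/opt/module/kafka_2.11-1.1.0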
3. Starting the Kafka Cluster
3.1 Start the ZooKeeper Cluster First
Run this on every ZooKeeper node:
[admin@node21 ~]$ zkServer.sh start
3.2 Start the Kafka Brokers in the Background
Run this on every Kafka node:
[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-server-start.sh config/server.properties &
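Once all three brokers are up, it is worth confirming that they registered in ZooKeeper. A minimal check, assuming zkCli.sh from the ZooKeeper installation is on the PATH:
[admin@node21 ~]$ jps | grep -i kafka
[admin@node21 ~]$ zkCli.sh -server node21:2181 ls /brokers/ids
The first command should show the Kafka process on every node; the second should list one entry per broker, e.g. [0, 1, 2] for the broker.id values used in the example above.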
4. Kafka Command-Line Operations
Broker list: node21:9092,node22:9092,node23:9092
ZooKeeper connection string: node21:2181,node22:2181,node23:2181
4.1 Create a New Topic
[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-topics.sh
Create, delete, describe, or change a topic.
--alter: Alter the number of partitions, replica assignment, and/or configuration for the topic.
--config <String: name=value>: A topic configuration override for the topic being created or altered. The following is a list of valid configurations: cleanup.policy, compression.type, delete.retention.ms, file.delete.delay.ms, flush.messages, flush.ms, follower.replication.throttled.replicas, index.interval.bytes, leader.replication.throttled.replicas, max.message.bytes, message.format.version, message.timestamp.difference.max.ms, message.timestamp.type, min.cleanable.dirty.ratio, min.compaction.lag.ms, min.insync.replicas, preallocate, retention.bytes, retention.ms, segment.bytes, segment.index.bytes, segment.jitter.ms, segment.ms, unclean.leader.election.enable. See the Kafka documentation for full details on the topic configs.
--create: Create a new topic.
--delete: Delete a topic.
--delete-config <String: name>: A topic configuration override to be removed for an existing topic (see the list of configurations under the --config option).
--describe: List details for the given topics.
--disable-rack-aware: Disable rack aware replica assignment.
--force: Suppress console prompts.
--help: Print usage information.
--if-exists: If set when altering or deleting topics, the action will only execute if the topic exists.
--if-not-exists: If set when creating topics, the action will only execute if the topic does not already exist.
--list: List all available topics.
--partitions <Integer: # of partitions>: The number of partitions for the topic being created or altered (WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected).
--replica-assignment <String: broker_id_for_part1_replica1 : broker_id_for_part1_replica2 , broker_id_for_part2_replica1 : broker_id_for_part2_replica2 , ...>: A list of manual partition-to-broker assignments for the topic being created or altered.
--replication-factor <Integer: replication factor>: The replication factor for each partition in the topic being created.
--topic <String: topic>: The topic to be created, altered or described. Can also accept a regular expression except for the --create option.
--topics-with-overrides: If set when describing topics, only show topics that have overridden configs.
--unavailable-partitions: If set when describing topics, only show partitions whose leader is not available.
--under-replicated-partitions: If set when describing topics, only show under replicated partitions.
--zookeeper <String: hosts>: REQUIRED: The connection string for the zookeeper connection in the form host:port. Multiple hosts can be given to allow fail-over.
Create a new topic on node21:
[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --create --zookeeper node21:2181,node22:2181,node23:2181 --replication-factor 3 --partitions 3 --topic TestTopic
Option descriptions:
--topic defines the topic name
--replication-factor defines the number of replicas
--partitions defines the number of partitions
4.2 View Topic Replica Information
[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --describe --zookeeper node21:2181,node22:2181,node23:2181 --topic TestTopic
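The describe output should look roughly like the following; the leader, replica and ISR assignments are illustrative and will differ per cluster:
Topic:TestTopic   PartitionCount:3   ReplicationFactor:3   Configs:
    Topic: TestTopic   Partition: 0   Leader: 2   Replicas: 2,0,1   Isr: 2,0,1
    Topic: TestTopic   Partition: 1   Leader: 0   Replicas: 0,1,2   Isr: 0,1,2
    Topic: TestTopic   Partition: 2   Leader: 1   Replicas: 1,2,0   Isr: 1,2,0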
4.3 List the Topics That Have Been Created
[admin@node21 kafka_2.11-1.1.0]$ kafka-topics.sh --list --zookeeper node21:2181,node22:2181,node23:2181
4.4 Test Producing Messages
[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-console-producer.sh
Read data from standard input and publish it to Kafka.
--batch-size <Integer: size>: Number of messages to send in a single batch if they are not being sent synchronously. (default: 200)
--broker-list <String: broker-list>: REQUIRED: The broker list string in the form HOST1:PORT1,HOST2:PORT2.
--compression-codec [String: compression-codec]: The compression codec: either 'none', 'gzip', 'snappy', or 'lz4'. If specified without value, then it defaults to 'gzip'.
--key-serializer <String: encoder_class>: The class name of the message encoder implementation to use for serializing keys. (default: kafka.serializer.DefaultEncoder)
--line-reader <String: reader_class>: The class name of the class to use for reading lines from standard in. By default each line is read as a separate message. (default: kafka.tools.ConsoleProducer$LineMessageReader)
--max-block-ms <Long: max block on send>: The max time that the producer will block for during a send request. (default: 60000)
--max-memory-bytes <Long: total memory in bytes>: The total memory used by the producer to buffer records waiting to be sent to the server. (default: 33554432)
--max-partition-memory-bytes <Long: memory in bytes per partition>: The buffer size allocated for a partition. When records are received which are smaller than this size the producer will attempt to optimistically group them together until this size is reached. (default: 16384)
--message-send-max-retries <Integer>: Brokers can fail receiving the message for multiple reasons, and being unavailable transiently is just one of them. This property specifies the number of retries before the producer gives up and drops this message. (default: 3)
--metadata-expiry-ms <Long: metadata expiration interval>: The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any leadership changes. (default: 300000)
--old-producer: Use the old producer implementation.
--producer-property <String: producer_prop>: A mechanism to pass user-defined properties in the form key=value to the producer.
--producer.config <String: config file>: Producer config properties file. Note that [producer-property] takes precedence over this config.
--property <String: prop>: A mechanism to pass user-defined properties in the form key=value to the message reader. This allows custom configuration for a user-defined message reader.
--queue-enqueuetimeout-ms <Integer: queue enqueuetimeout ms>: Timeout for event enqueue. (default: 2147483647)
--queue-size <Integer: queue_size>: If set and the producer is running in asynchronous mode, this gives the maximum amount of messages will queue awaiting sufficient batch size. (default: 10000)
--request-required-acks <String: request required acks>: The required acks of the producer requests. (default: 1)
--request-timeout-ms <Integer: request timeout ms>: The ack timeout of the producer requests. Value must be non-negative and non-zero. (default: 1500)
--retry-backoff-ms <Integer>: Before each retry, the producer refreshes the metadata of relevant topics. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. (default: 100)
--socket-buffer-size <Integer: size>: The size of the tcp RECV size. (default: 102400)
--sync: If set message send requests to the brokers are synchronously, one at a time as they arrive.
--timeout <Integer: timeout_ms>: If set and the producer is running in asynchronous mode, this gives the maximum amount of time a message will queue awaiting sufficient batch size. The value is given in ms. (default: 1000)
--topic <String: topic>: REQUIRED: The topic id to produce messages to.
--value-serializer <String: encoder_class>: The class name of the message encoder implementation to use for serializing values. (default: kafka.serializer.DefaultEncoder)
Produce messages on node21:
[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-console-producer.sh --broker-list node21:9092,node22:9092,node23:9092 --topic TestTopic
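Each line typed into the console producer becomes one message. The two lines below are sample input only:
>hello kafka
>this is a test message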
4.5 Test Consuming Messages
[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-console-consumer.sh
The console consumer is a tool that reads data from Kafka and outputs it to standard output.
--blacklist <String: blacklist>: Blacklist of topics to exclude from consumption.
--bootstrap-server <String: server to connect to>: REQUIRED (unless old consumer is used): The server to connect to.
--consumer-property <String: consumer_prop>: A mechanism to pass user-defined properties in the form key=value to the consumer.
--consumer.config <String: config file>: Consumer config properties file. Note that [consumer-property] takes precedence over this config.
--csv-reporter-enabled: If set, the CSV metrics reporter will be enabled.
--delete-consumer-offsets: If specified, the consumer path in zookeeper is deleted when starting up.
--enable-systest-events: Log lifecycle events of the consumer in addition to logging consumed messages. (This is specific for system tests.)
--formatter <String: class>: The name of a class to use for formatting kafka messages for display. (default: kafka.tools.DefaultMessageFormatter)
--from-beginning: If the consumer does not already have an established offset to consume from, start with the earliest message present in the log rather than the latest message.
--group <String: consumer group id>: The consumer group id of the consumer.
--isolation-level <String>: Set to read_committed in order to filter out transactional messages which are not committed. Set to read_uncommitted to read all messages. (default: read_uncommitted)
--key-deserializer <String: deserializer for key>
--max-messages <Integer: num_messages>: The maximum number of messages to consume before exiting. If not set, consumption is continual.
--metrics-dir <String: metrics directory>: If csv-reporter-enable is set, and this parameter is set, the csv metrics will be output here.
--new-consumer: Use the new consumer implementation. This is the default, so this option is deprecated and will be removed in a future release.
--offset <String: consume offset>: The offset id to consume from (a non-negative number), or 'earliest' which means from beginning, or 'latest' which means from end. (default: latest)
--partition <Integer: partition>: The partition to consume from. Consumption starts from the end of the partition unless '--offset' is specified.
--property <String: prop>: The properties to initialize the message formatter.
--skip-message-on-error: If there is an error when processing a message, skip it instead of halt.
--timeout-ms <Integer: timeout_ms>: If specified, exit if no message is available for consumption for the specified interval.
--topic <String: topic>: The topic id to consume on.
--value-deserializer <String: deserializer for values>
--whitelist <String: whitelist>: Whitelist of topics to include for consumption.
--zookeeper <String: urls>: REQUIRED (only when using old consumer): The connection string for the zookeeper connection in the form host:port. Multiple URLS can be given to allow fail-over.
Consume messages on node22 (old consumer command):
[admin@node22 kafka_2.11-1.1.0]$ kafka-console-consumer.sh --zookeeper node21:2181,node22:2181,node23:2181 --from-beginning --topic TestTopic
--from-beginning reads out all of the data already stored in the TestTopic topic from the start; whether to include this option depends on your use case.
New consumer command:
[admin@node22 kafka_2.11-1.1.0]$ kafka-console-consumer.sh --bootstrap-server node21:9092,node22:9092,node23:9092 --from-beginning --topic TestTopic
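If the producer session above already sent the sample messages, the console consumer prints them back (illustrative output; ordering across partitions is not guaranteed):
hello kafka
this is a test message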
4.6 Delete a Topic
[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --zookeeper node21:2181,node22:2181,node23:2181 --delete --topic TestTopic
delete.topic.enable=true must be set in server.properties; otherwise the topic is only marked for deletion, and the brokers would have to be restarted for it to actually be removed.
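If delete.topic.enable is still false, listing the topics afterwards would typically show something like the following (illustrative output):
[admin@node22 kafka_2.11-1.1.0]$ bin/kafka-topics.sh --zookeeper node21:2181,node22:2181,node23:2181 --list
TestTopic - marked for deletion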
4.7 Stop the Kafka Service
[admin@node21 kafka_2.11-1.1.0]$ bin/kafka-server-stop.sh
4.8 Write a Kafka Startup Script
[admin@node21 kafka_2.11-1.1.0]$ cd bin
[admin@node21 bin]$ vi start-kafka.sh
#!/bin/bash
nohup /opt/module/kafka_2.11-1.1.0/bin/kafka-server-start.sh /opt/module/kafka_2.11-1.1.0/config/server.properties >/opt/module/kafka_2.11-1.1.0/logs/kafka.log 2>&1 &
Make the script executable: chmod +x start-kafka.sh
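Optionally, the per-node script can be driven from one machine. A minimal sketch, assuming passwordless SSH for the admin user and the same installation path on every node (this helper script is not part of the Kafka distribution):
#!/bin/bash
# start-kafka-cluster.sh: start the broker on all three nodes via the start-kafka.sh written above
for host in node21 node22 node23; do
  echo "Starting Kafka on ${host}"
  ssh admin@${host} "/opt/module/kafka_2.11-1.1.0/bin/start-kafka.sh"
done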
5. Kafka Configuration Reference
5.1 Broker Configuration
Property | Default | Description |
broker.id | | Required. Unique identifier of the broker. |
log.dirs | /tmp/kafka-logs | Directories where Kafka stores its data. Multiple directories can be given, separated by commas; a new partition is placed in the directory that currently holds the fewest partitions. |
port | 9092 | Port on which the broker accepts client connections. |
zookeeper.connect | null | ZooKeeper connection string, in the form hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts can be given; listing them all improves reliability. This setting also allows a ZooKeeper chroot path for this cluster's data, which keeps it separate from other applications, e.g. hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. Note that consumers must use the same value. |
message.max.bytes | 1000000 | Maximum message size the server will accept. Keep it consistent with the consumer's fetch size (fetch.message.max.bytes); otherwise a producer may write messages that are too large for consumers to fetch. |
num.io.threads | 8 | Number of I/O threads the server uses to execute read/write requests; it should be at least the number of disks on the server. |
queued.max.requests | 500 | Size of the queue of requests awaiting the I/O threads; if the number of outstanding requests exceeds it, the network threads stop accepting new requests. |
socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUFF buffer the server prefers for socket connections. |
socket.receive.buffer.bytes | 100 * 1024 | The SO_RCVBUFF buffer the server prefers for socket connections. |
socket.request.max.bytes | 100 * 1024 * 1024 | Maximum request size the server allows, used to protect against out-of-memory errors; it should be smaller than the Java heap size. |
num.partitions | 1 | Default number of partitions used when a topic is created without an explicit partition count; a value such as 5 is recommended. |
log.segment.bytes | 1024 * 1024 * 1024 | Size of a segment file; once exceeded, a new segment is rolled automatically. Can be overridden per topic. |
log.roll.{ms,hours} | 24 * 7 hours | Time after which a new segment file is rolled. Can be overridden per topic. |
log.retention.{ms,minutes,hours} | 7 days | Retention period of Kafka segment logs; logs older than this are deleted. Can be overridden per topic. With large data volumes, a smaller value is recommended. |
log.retention.bytes | -1 | Maximum size of each partition; data beyond it is deleted. Note this limit applies per partition, not per topic. Can be overridden at the log level. |
log.retention.check.interval.ms | 5 minutes | Interval at which the deletion policy is checked. |
auto.create.topics.enable | true | Whether topics are created automatically; setting this to false is recommended so that topic management stays under strict control and producers cannot write to misspelled topics. |
default.replication.factor | 1 | Default replication factor; a value of 2 is recommended. |
replica.lag.time.max.ms | 10000 | If the leader receives no fetch request from a follower within this window, the follower is removed from the ISR (in-sync replicas). |
replica.lag.max.messages | 4000 | If a replica falls behind the leader by more than this many messages, the leader removes it from the ISR. |
replica.socket.timeout.ms | 30 * 1000 | Socket timeout for requests a replica sends to the leader. |
replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests to the leader for replicating data. |
replica.fetch.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader. |
replica.fetch.wait.max.ms | 500 | The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas to the leader. |
num.replica.fetchers | 1 | Number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker. |
fetch.purgatory.purge.interval.requests | 1000 | The purge interval (in number of requests) of the fetch request purgatory. |
zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. If the broker sends no heartbeat to ZooKeeper within this period, ZooKeeper considers the node dead. Too low a value makes nodes easy to mark as dead; too high a value delays detecting node failure. |
zookeeper.connection.timeout.ms | 6000 | Timeout for the client to connect to ZooKeeper. |
zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower may lag behind the ZooKeeper leader. |
controlled.shutdown.enable | true | Enables controlled broker shutdown. If enabled, the broker transfers all of its leaders to other brokers before shutting itself down; enabling it is recommended for cluster stability. |
auto.leader.rebalance.enable | true | If this is enabled the controller will automatically try to balance leadership for partitions among the brokers by periodically returning leadership to the "preferred" replica for each partition if it is available. |
leader.imbalance.per.broker.percentage | 10 | The percentage of leader imbalance allowed per broker. The controller will rebalance leadership if this ratio goes above the configured value per broker. |
leader.imbalance.check.interval.seconds | 300 | The frequency with which to check for leader imbalance. |
offset.metadata.max.bytes | 4096 | The maximum amount of metadata to allow clients to save with their offsets. |
connections.max.idle.ms | 600000 | Idle connections timeout: the server socket processor threads close the connections that idle more than this. |
num.recovery.threads.per.data.dir | 1 | The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. |
unclean.leader.election.enable | true | Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. |
delete.topic.enable | false | Enables topic deletion; setting this to true is recommended. |
offsets.topic.num.partitions | 50 | The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, we recommend using a higher setting for production (e.g., 100-200). |
offsets.topic.retention.minutes | 1440 | Offsets that are older than this age will be marked for deletion. The actual purge will occur when the log cleaner compacts the offsets topic. |
offsets.retention.check.interval.ms | 600000 | The frequency at which the offset manager checks for stale offsets. |
offsets.topic.replication.factor | 3 | The replication factor for the offset commit topic. A higher setting (e.g., three or four) is recommended in order to ensure higher availability. If the offsets topic is created when fewer brokers than the replication factor are available, then the offsets topic will be created with fewer replicas. |
offsets.topic.segment.bytes | 104857600 | Segment size for the offsets topic. Since it uses a compacted topic, this should be kept relatively low in order to facilitate faster log compaction and loads. |
offsets.load.buffer.size | 5242880 | An offset load occurs when a broker becomes the offset manager for a set of consumer groups (i.e., when it becomes a leader for an offsets topic partition). This setting corresponds to the batch size (in bytes) to use when reading from the offsets segments when loading offsets into the offset manager's cache. |
offsets.commit.required.acks | -1 | The number of acknowledgements that are required before the offset commit can be accepted. This is similar to the producer's acknowledgement setting. In general, the default should not be overridden. |
offsets.commit.timeout.ms | 5000 | The offset commit will be delayed until this timeout or the required number of replicas have received the offset commit. This is similar to the producer request timeout. |
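Pulling together only the suggestions repeated in the table above, a production server.properties would typically add overrides like these (illustrative, not mandatory settings):
# overrides suggested in the broker table above
delete.topic.enable=true
auto.create.topics.enable=false
default.replication.factor=2
num.partitions=5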
5.2 Producer Configuration
Property | Default | Description |
metadata.broker.list | | List of brokers the producer queries at startup; it can be a subset of all brokers in the cluster. Note this parameter is only used to fetch topic metadata; the producer then picks suitable brokers from the metadata and opens socket connections to them. Format: host1:port1,host2:port2. |
request.required.acks | 0 | See the description in Section 3.2. |
request.timeout.ms | 10000 | How long the broker waits for acks; if exceeded, an error is returned to the client. |
producer.type | sync | Synchronous or asynchronous mode: async means asynchronous, sync means synchronous. In asynchronous mode the producer can push data in batches, which greatly improves broker performance, so async is recommended. |
serializer.class | kafka.serializer.DefaultEncoder | Serializer class; messages are serialized to byte[] by default. |
key.serializer.class | | Serializer class for keys; defaults to the same as above. |
partitioner.class | kafka.producer.DefaultPartitioner | Partitioner class; by default the key is hashed. |
compression.codec | none | Compression codec for producer messages; valid values are "none", "gzip" and "snappy". See Section 4.1 for details on compression. |
compressed.topics | null | Topics for which compression is enabled. If a codec is selected above, compression applies only to the topics listed here; if this parameter is empty, it applies to all topics. |
message.send.max.retries | 3 | Number of retries when the producer fails to send. Note that network problems may cause continuous retries. |
retry.backoff.ms | 100 | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. |
topic.metadata.refresh.interval.ms | 600 * 1000 | The producer generally refreshes the topic metadata from brokers when there is a failure (partition missing, leader not available...). It will also poll regularly (default: every 10min so 600000ms). If you set this to a negative value, metadata will only get refreshed on failure. If you set this to zero, the metadata will get refreshed after each message sent (not recommended). Important note: the refresh happens only AFTER the message is sent, so if the producer never sends a message the metadata is never refreshed. |
queue.buffering.max.ms | 5000 | In asynchronous mode, how long the producer buffers messages. For example, with a value of 1000 it buffers one second of data and sends it in one go, which greatly increases broker throughput at the cost of latency. |
queue.buffering.max.messages | 10000 | In asynchronous mode, the maximum number of messages buffered in the producer queue; beyond this, the producer either blocks or drops messages. |
queue.enqueue.timeout.ms | -1 | How long the producer blocks once the limit above is reached. With 0 the producer does not block when the buffer queue is full and messages are dropped immediately; with -1 the producer blocks and never drops messages. |
batch.num.messages | 200 | In asynchronous mode, the number of messages cached per batch; the producer sends only once this count is reached. |
send.buffer.bytes | 100 * 1024 | Socket write buffer size. |
client.id | "" | The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request. |
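Purely as an illustration of how the old Scala producer parameters above fit together, a producer.properties might look like this (all values are examples, not settings taken from this cluster):
# producer.properties (example values, old Scala producer API)
metadata.broker.list=node21:9092,node22:9092,node23:9092
producer.type=async
request.required.acks=1
compression.codec=snappy
serializer.class=kafka.serializer.DefaultEncoder
batch.num.messages=200
queue.buffering.max.ms=5000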
5.3 Consumer Configuration
Property | Default | Description |
group.id | | Consumer group ID; consumers with the same group.id belong to the same group. |
zookeeper.connect | | ZooKeeper connection string for the consumer; it must match the broker configuration. |
consumer.id | null | Generated automatically if not set. |
socket.timeout.ms | 30 * 1000 | Socket timeout for network requests. The actual timeout is max.fetch.wait + socket.timeout.ms. |
socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests. |
fetch.message.max.bytes | 1024 * 1024 | Maximum message size allowed when fetching from a topic-partition. The consumer caches this much data in memory per partition, so this parameter controls the consumer's memory usage. It should be at least as large as the server's maximum message size, so producers can never send messages larger than the consumer can fetch. |
num.consumer.fetchers | 1 | The number of fetcher threads used to fetch data. |
auto.commit.enable | true | If true, the consumer periodically saves the offset it has consumed up to in ZooKeeper. When the consumer fails and restarts, it resumes consumption from this offset. |
auto.commit.interval.ms | 60 * 1000 | Interval at which the consumer commits offsets to ZooKeeper. |
queued.max.message.chunks | 2 | Number of message chunks buffered for consumption; each chunk can hold up to fetch.message.max.bytes of data. |
fetch.min.bytes | 1 | The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. |
fetch.wait.max.ms | 100 | The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes. |
rebalance.backoff.ms | 2000 | Backoff time between retries during rebalance. |
refresh.leader.backoff.ms | 200 | Backoff time to wait before trying to determine the leader of a partition that has just lost its leader. |
auto.offset.reset | largest | What to do when there is no initial offset in ZooKeeper or if an offset is out of range; smallest: automatically reset the offset to the smallest offset; largest: automatically reset the offset to the largest offset; anything else: throw an exception to the consumer. |
consumer.timeout.ms | -1 | If no message is available for consumption within the specified time, the consumer throws an exception. |
exclude.internal.topics | true | Whether messages from internal topics (such as offsets) should be exposed to the consumer. |
zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper for this period of time it is considered dead and a rebalance will occur. |
zookeeper.connection.timeout.ms | 6000 | The max time that the client waits while establishing a connection to zookeeper. |
zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower can be behind a ZooKeeper leader. |
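Again as an illustration only, a consumer.properties for the old ZooKeeper-based consumer could combine the parameters above like this (the group.id and all other values are examples):
# consumer.properties (example values, old ZooKeeper-based consumer)
zookeeper.connect=node21:2181,node22:2181,node23:2181
group.id=test-consumer-group
auto.commit.enable=true
auto.commit.interval.ms=60000
auto.offset.reset=smallest
fetch.message.max.bytes=1048576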