Kafka: max.poll.interval.ms configured too short


max.poll.interval.ms sets the maximum allowed interval between successive calls to the consumer's poll(). If the consumer takes longer than this between polls, it is considered failed, the group coordinator removes it from the group, and a rebalance is triggered.
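As a rough sketch of what this rule means (this is only an illustration of the timing check, not actual Kafka client code; the 1-second value matches the demo config below):

```java
// Illustrative sketch only, not Kafka client code: shows the timing rule
// that max.poll.interval.ms enforces between two successive poll() calls.
public class PollIntervalSketch {
    public static void main(String[] args) {
        long maxPollIntervalMs = 1000;   // the (too short) value used in the demo
        long lastPollMs = System.currentTimeMillis();

        long processingMs = 10_000;      // pretend handling the batch took 10 s
        long nextPollMs = lastPollMs + processingMs;

        // If the gap between polls exceeds the limit, the consumer is
        // considered failed and its partitions are reassigned (rebalance).
        boolean rebalanceTriggered = (nextPollMs - lastPollMs) > maxPollIntervalMs;
        System.out.println("rebalance triggered: " + rebalanceTriggered);  // true
    }
}
```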

Demo:
 1     public ConsumerDemo(String topicName) {
 2         Properties props = new Properties();
 3         props.put("bootstrap.servers", "localhost:9092");
 4         props.put("group.id", GROUPID);
 5         props.put("enable.auto.commit", "false");
 6         props.put("max.poll.interval.ms", "1000");
 7         props.put("auto.offset.reset", "earliest");
 8         props.put("key.deserializer", StringDeserializer.class.getName());
 9         props.put("value.deserializer", StringDeserializer.class.getName());
10         this.consumer = new KafkaConsumer<String, String>(props);
11         this.topic = topicName;
12         this.consumer.subscribe(Arrays.asList(topic));
13     }

Line 5 disables auto commit, so offsets are committed manually; line 6 sets max.poll.interval.ms to 1 second.

 1     public void receiveMsg() {
 2         int messageNo = 1;
 3         System.out.println("--------- start consuming ---------");
 4         try {
 5             for (;;) {
 6                 msgList = consumer.poll(1000);
 7                 System.out.println("start sleep" + System.currentTimeMillis() / 1000);
 8                 Thread.sleep(10000);
 9                 System.out.println("end sleep" + System.currentTimeMillis() / 1000);
10                 if (msgList != null && msgList.count() > 0) {
11                     for (ConsumerRecord<String, String> record : msgList) {
12                         System.out.println(messageNo+"=======receive: key = " + record.key() + ", value = " + record.value()+" offset==="+record.offset());
13                     }
14                 } else {
15                     Thread.sleep(1000);
16                 }
17                 consumer.commitSync();
18             }
19         } catch (InterruptedException e) {
20             e.printStackTrace();
21         } finally {
22             consumer.close();
23         }
24     }

Line 8 sleeps for 10 seconds to simulate slow message processing. The offset commit then fails with:

Exception in thread "main" org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:722)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1211)
    at com.gxf.kafka.ConsumerDemo.receiveMsg(ConsumerDemo.java:49)
    at com.gxf.kafka.ConsumerDemo.main(ConsumerDemo.java:59)

To fix this, set max.poll.interval.ms to a larger value, or reduce the time spent per poll loop: shorten per-message processing, fetch fewer records per poll (max.poll.records), or hand the records off for asynchronous processing and commit once they are done.
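A sketch of a safer configuration, assuming each batch needs several seconds to process (the specific values here are illustrative choices, not from the original post):

```java
import java.util.Properties;

public class SaferConsumerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("enable.auto.commit", "false");
        // Allow up to 5 minutes between polls (the Kafka client default)
        // instead of the 1 second used in the failing demo.
        props.put("max.poll.interval.ms", "300000");
        // Cap how many records one poll() returns, so each batch can be
        // processed and committed well inside the interval.
        props.put("max.poll.records", "50");
        System.out.println(props.getProperty("max.poll.interval.ms"));
    }
}
```

With headroom like this, the commitSync() in receiveMsg() no longer races the rebalance; handing each batch to a worker pool and committing after completion is the other common pattern.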

