When fetching a Kafka topic's offset by timestamp, if the returned offset array has length 0, the following ArrayIndexOutOfBoundsException is thrown. The lookup is implemented with Kafka's getOffsetsBefore() method:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException : 0
at co.gridport.kafka.hadoop.KafkaInputFetcher.getOffset(KafkaInputFetcher.java:126)
at co.gridport.kafka.hadoop.TestKafkaInputFetcher.testGetOffset(TestKafkaInputFetcher.java:68)
at co.gridport.kafka.hadoop.TestKafkaInputFetcher.main(TestKafkaInputFetcher.java:80)
OffsetResponse(0,Map([barrage_detail,0] -> error: kafka.common.UnknownException offsets: ))
The source code is as follows:
/*
 * Find the starting offset for reads: get the partition's offset.
 */
public Long getOffset(Long time) throws IOException {
    TopicAndPartition topicAndPartition = new TopicAndPartition(this.topic, this.partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(time, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), this.client_id);
    OffsetResponse response = this.kafka_consumer.getOffsetsBefore(request);
    if (response.hasError()) {
        log.error("Error fetching offset data from the broker. Reason: "
                + response.errorCode(this.topic, this.partition));
        throw new IOException("Error fetching kafka offset by time: "
                + response.errorCode(this.topic, this.partition));
    }
    // if (response.offsets(this.topic, this.partition).length == 0) {
    //     return getOffset(kafka.api.OffsetRequest.EarliestTime());
    // }
    return response.offsets(this.topic, this.partition)[0];
}
The returned response object carries an error, kafka.common.UnknownException, as shown below:
OffsetResponse(0,Map([barrage_detail,0] -> error: kafka.common.UnknownException offsets: ))
Yet response.hasError() does not detect this error.
What causes response.offsets(this.topic, this.partition) to return an array of length 0? After analyzing the source code of getOffsetsBefore() and running extensive tests against it, I finally reproduced the situation.
1. The function and implementation principle of getOffsetsBefore():
getOffsetsBefore() returns at most maxNumOffsets offsets from before a given point in time. Here the point in time refers to the last-modified time of a Kafka log segment file, and each returned offset is the offset encoded in the segment's file name, i.e. the base offset of the first record in that segment. The maxNumOffsets parameter caps the number of results: if it is 2, the two most recent qualifying base offsets are returned; if it is negative, the call fails with a logic error, since you cannot allocate an array of negative length.
Given this implementation, a result of length 0 is perfectly legal: it means that no log segment existed before the given point in time, so there is no offset to return.
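The lookup described above can be sketched as a small pure-Java simulation. This is only an illustration under stated assumptions: segments are modeled as (baseOffset, lastModified) pairs, the EarliestTime/LatestTime sentinel values are ignored, and Segment, offsetsBefore, and OffsetLookupSketch are hypothetical names, not part of the Kafka API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of a log segment: the base offset of its first
// record plus the segment file's last-modified timestamp.
class Segment {
    final long baseOffset;
    final long lastModified;
    Segment(long baseOffset, long lastModified) {
        this.baseOffset = baseOffset;
        this.lastModified = lastModified;
    }
}

public class OffsetLookupSketch {
    // Returns up to maxNumOffsets base offsets of segments whose
    // last-modified time is before the given timestamp, newest first,
    // mirroring how the broker answers an OffsetRequest.
    static List<Long> offsetsBefore(List<Segment> segments, long time, int maxNumOffsets) {
        List<Long> result = new ArrayList<Long>();
        for (int i = segments.size() - 1; i >= 0 && result.size() < maxNumOffsets; i--) {
            Segment s = segments.get(i);
            if (s.lastModified < time) {
                result.add(s.baseOffset);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Segment> segments = new ArrayList<Segment>();
        segments.add(new Segment(0L, 1000L));
        segments.add(new Segment(500L, 2000L));
        segments.add(new Segment(900L, 3000L));
        // A timestamp later than the newest segment returns that segment's base offset.
        System.out.println(offsetsBefore(segments, 3500L, 1)); // prints [900]
        // A timestamp earlier than every segment returns an empty list --
        // exactly the situation behind the ArrayIndexOutOfBoundsException.
        System.out.println(offsetsBefore(segments, 500L, 1)); // prints []
    }
}
```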
In other words, the timestamp we passed was simply too early. Any timestamp later than the earliest segment's last-modified time will produce a non-empty result.
A friendlier API would return the earliest available offset when no segment predates the given time; we can implement that behavior ourselves.
The code is as follows:
/*
 * Find the starting offset for reads: get the partition's offset.
 */
public Long getOffset(Long time) throws IOException {
    TopicAndPartition topicAndPartition = new TopicAndPartition(this.topic, this.partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(time, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), this.client_id);
    OffsetResponse response = this.kafka_consumer.getOffsetsBefore(request);
    if (response.hasError()) {
        log.error("Error fetching offset data from the broker. Reason: "
                + response.errorCode(this.topic, this.partition));
        throw new IOException("Error fetching kafka offset by time: "
                + response.errorCode(this.topic, this.partition));
    }
    // If the returned offset array is empty, fall back to the earliest offset.
    if (response.offsets(this.topic, this.partition).length == 0) {
        return getOffset(kafka.api.OffsetRequest.EarliestTime());
    }
    return response.offsets(this.topic, this.partition)[0];
}
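The guard that makes this version safe can also be isolated as a pure helper, which is easy to unit-test without a broker. This is only a sketch; firstOffsetOrFallback and OffsetFallbackSketch are hypothetical names introduced here for illustration:

```java
public class OffsetFallbackSketch {
    // If the broker returned no offsets for the requested time, fall back
    // to a caller-supplied offset (e.g. one fetched with EarliestTime()).
    static long firstOffsetOrFallback(long[] offsets, long earliestOffset) {
        return offsets.length == 0 ? earliestOffset : offsets[0];
    }

    public static void main(String[] args) {
        System.out.println(firstOffsetOrFallback(new long[] {42L}, 0L)); // prints 42
        System.out.println(firstOffsetOrFallback(new long[] {}, 0L));    // prints 0
    }
}
```

Extracting the branch this way keeps the length check in one place, so the indexing into offsets[0] can never throw ArrayIndexOutOfBoundsException.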