Error reported when running a Hadoop job:
2019-06-05 03:23:36,173 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: Paths:/flume/nginx/app1/2019-06-05/00/app1@flume23_10003_4.1559665890953.gz:0+0,/flume/nginx/app2/2019-06-05/00/app2@flume174_10003_9.1559665804394.gz:0+307548
2019-06-05 03:23:36,257 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:kwang (auth:SIMPLE) cause:java.io.EOFException: Unexpected end of input stream
2019-06-05 03:23:36,258 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.EOFException: Unexpected end of input stream
    at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:165)
    at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
    at java.io.InputStream.read(InputStream.java:101)
    at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
    at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
    at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper.nextKeyValue(CombineFileRecordReaderWrapper.java:90)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:562)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Cause:
This error is almost always caused by a file on HDFS that was terminated abnormally. Inspecting the log shows that the file /flume/nginx/app1/2019-06-05/00/app1@flume23_10003_4.1559665890953.gz:0+0 is 0 bytes. Since a zero-byte .gz file does not even contain a gzip header, the DecompressorStream hits end-of-stream immediately and throws the EOFException. To understand where the empty file comes from, you first need to know how Flume collects logs in this cluster. Flume collects the logs and writes them to HDFS: during collection it first creates a *.gz.tmp file, continuously appends data to it, and renames *.gz.tmp to *.gz when the write cycle ends. An empty file is produced when Flume creates a *.gz.tmp file but no new data arrives, so the renamed file has zero size.
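To confirm which input files are empty, you can list the Flume output directory on HDFS and filter on the size column. A minimal sketch, assuming the /flume/nginx root seen in the log above (hdfs dfs -ls prints the file size in the fifth field):

# recursively list the directory and print only zero-byte .gz files
hdfs dfs -ls -R /flume/nginx | awk '$5 == 0 && $NF ~ /\.gz$/ {print $NF}'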
Solution:
Delete the empty .gz file and rerun the job.
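For the zero-byte file identified in the log above, that amounts to:

# delete the empty gzip file, then resubmit the job as it was originally run
hdfs dfs -rm /flume/nginx/app1/2019-06-05/00/app1@flume23_10003_4.1559665890953.gz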
There is another possible cause. When the following parameters are set to control how files are split into input splits, some splits may come out empty, which then fails during parsing. Whether this happens depends on whether the original file format is splittable: gz, snappy, and lzo files are not splittable, so setting the parameters below will not produce the error above, whereas ORC files are splittable, so setting them may trigger this exception.
-Dmapreduce.input.fileinputformat.split.minsize=134217728 -Dmapreduce.input.fileinputformat.split.maxsize=512000000
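For reference, a minimal sketch of how these properties would be passed when submitting a job, assuming the driver implements Tool so that GenericOptionsParser picks up the -D options; the jar name, driver class, and input/output paths below are placeholders, not from the original post:

# minsize = 128 MiB, maxsize = 512 MB; these are per-job settings, no cluster change needed
hadoop jar my-job.jar com.example.MyDriver \
    -Dmapreduce.input.fileinputformat.split.minsize=134217728 \
    -Dmapreduce.input.fileinputformat.split.maxsize=512000000 \
    /input/path /output/path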