Some Experience with Filtering Dirty Data in Hive


The files below need to be processed. Each is roughly 13 GB and contains 7 fields separated by spaces (ASCII 32); the most troublesome part is the dirty data scattered through the middle of them:

-rw-r--r-- 1 hadoop ifengdev 1895843464 May  6 14:56 feedback201503_201.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1896885848 May  6 14:59 feedback201503_202.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1891790676 May  6 15:00 feedback201503_203.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1894197100 May  6 15:01 feedback201503_204.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1894074074 May  6 15:02 feedback201503_205.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1829224750 May  6 16:13 feedback201504_201.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1831709571 May  6 16:14 feedback201504_202.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1824710879 May  6 16:30 feedback201504_203.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1827164031 May  6 16:31 feedback201504_204.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1827911208 May  6 16:31 feedback201504_205.tar.gz

Loading them directly into Hive fails:

Loading data to table default.tmp_20150506
Failed with exception Wrong file format. Please check the file's format.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

There was no way around it; the data in the middle had format problems.
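For reference, the load was an ordinary LOAD DATA statement, something along these lines (the local path is a placeholder, not from the original post):

LOAD DATA LOCAL INPATH '/data/feedback/feedback201503_201.tar.gz'
INTO TABLE tmp_20150506;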

Posts online suggested that changing the table's storage format would avoid the error (when the target table is stored as RCFile, Hive checks the file format during LOAD and rejects plain text files). The table was originally defined as:

CREATE  TABLE tmp_20150506(
  dt string,
  unknown1 string,
  unknown2 string,
  reurl string,
  uid string,
  num1 int,
  num2 int)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '32'
  LINES TERMINATED BY '10'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.RCFileInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'

Changed to:

CREATE  TABLE tmp_20150506(
  dt string,
  unknown1 string,
  unknown2 string,
  reurl string,
  uid string,
  num1 int,
  num2 int)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '32'
  LINES TERMINATED BY '10'
STORED AS TEXTFILE;

Sure enough, the error is gone. Depending on the specific requirements, this counts as one workable method.

 

The most direct method:

zcat feedback201503_201.tar.gz | gawk -F ' ' 'NF==7 {print $1"\t"$2"\t"$3"\t"$4"\t"$5"\t"$6"\t"$7}' >> feedback201503_204.log

What this does: it replaces the spaces with tabs and drops the dirty rows whose field count is not 7;

Then just load the cleaned file into Hive;
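A sketch of that load (the local path is again a placeholder; note that after the awk step the fields are tab-separated, so the target table needs FIELDS TERMINATED BY '9' or '\t' rather than '32'):

LOAD DATA LOCAL INPATH '/data/feedback/feedback201503_204.log'
INTO TABLE tmp_20150506;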

The method above is direct enough, but it feels like a lot of manual labor. Maybe I'm just lazy, so I prefer the approach below:

The basic idea is to load each whole line into Hive as a single field and then use Hive itself to filter the data:

CREATE  TABLE tmp_20150506_raw(
  allfilds string
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '10'
  LINES TERMINATED BY '10'
STORED AS TEXTFILE;
Both FIELDS TERMINATED BY '10' and LINES TERMINATED BY '10' are simply set to the newline character; once the data is in Hive, the filtering can be done with Hive itself.
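After loading the raw files into tmp_20150506_raw with the same kind of LOAD DATA statement as above, a quick sanity check (not part of the original post) shows how the rows break down by field count:

select size(split(allfilds, ' ')) as field_count, count(*) as cnt
from tmp_20150506_raw
group by size(split(allfilds, ' '));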
Filter the data and write it into another table; the rest of the processing for this example is as follows:
from (
  from (
    select allfilds
    from tmp_20150506_raw
    where size(split(allfilds, ' ')) = 7
  ) a
  select
    split(allfilds, ' ')[0] as dt,
    split(allfilds, ' ')[1] as unknown1,
    split(allfilds, ' ')[2] as unknown2,
    split(allfilds, ' ')[3] as reurl,
    split(allfilds, ' ')[4] as uid,
    split(allfilds, ' ')[5] as num1,
    split(allfilds, ' ')[6] as num2
) b
insert overwrite table tmp_20150506 partition(month = '2015-04')
select *;
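The same filtering can also be written more compactly by splitting each line only once in a subquery; a minimal sketch, assuming the same partitioned target table as above:

insert overwrite table tmp_20150506 partition(month = '2015-04')
select f[0], f[1], f[2], f[3], f[4], cast(f[5] as int), cast(f[6] as int)
from (
  select split(allfilds, ' ') as f
  from tmp_20150506_raw
) t
where size(f) = 7;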




 

