Author: 過往記憶 | Sina Weibo: 左手牽右手TEL
Reposting is permitted, but the original source, author information, and this copyright notice must be indicated via a hyperlink.
Blog: http://www.iteblog.com/
Title: "Using Avro in Hive"
Permalink: http://www.iteblog.com/archives/1007
Avro (pronounced roughly [ævrə]) is a Hadoop subproject led by Hadoop's creator, Doug Cutting. It is a data serialization system designed for applications that exchange data in bulk. Its main strengths are a binary serialization format that handles large volumes of data quickly and compactly, and friendliness to dynamic languages: Avro's mechanisms let dynamic languages process Avro data conveniently.

In Hive, we can store data in the Avro format. This article uses avro-1.7.1.jar as the example.

To parse Avro-formatted data, we can create the Hive table with the following statement:
hive> CREATE EXTERNAL TABLE tweets
    > COMMENT "A table backed by Avro data with the
    > Avro schema embedded in the CREATE TABLE statement"
    > ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    > STORED AS
    > INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    > OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    > LOCATION '/user/wyp/examples/input/'
    > TBLPROPERTIES (
    >   'avro.schema.literal'='{
    >     "type": "record",
    >     "name": "Tweet",
    >     "namespace": "com.miguno.avro",
    >     "fields": [
    >       { "name":"username",  "type":"string"},
    >       { "name":"tweet",     "type":"string"},
    >       { "name":"timestamp", "type":"long"}
    >     ]
    >   }'
    > );
OK
Time taken: 0.076 seconds

hive> describe tweets;
OK
username                string                  from deserializer
tweet                   string                  from deserializer
timestamp               bigint                  from deserializer
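Because the schema is embedded as a JSON string inside TBLPROPERTIES, a stray quote or comma makes table creation fail with a hard-to-read error. As a minimal sanity check (plain Python, not part of Hive), you can parse the literal before pasting it into the DDL:

```python
import json

# The exact string placed in 'avro.schema.literal' above.
schema_literal = '''{
  "type": "record",
  "name": "Tweet",
  "namespace": "com.miguno.avro",
  "fields": [
    { "name":"username",  "type":"string"},
    { "name":"tweet",     "type":"string"},
    { "name":"timestamp", "type":"long"}
  ]
}'''

# json.loads fails loudly on malformed JSON, which is exactly what we want here.
schema = json.loads(schema_literal)
assert schema["type"] == "record"
field_names = [f["name"] for f in schema["fields"]]
print(field_names)  # ['username', 'tweet', 'timestamp']
```

This only checks JSON well-formedness and the record layout; Hive's AvroSerDe performs the full Avro schema validation at table-creation time.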
Next, we compress the data we need with Snappy. Here is the data before compression:
{
  "username": "miguno",
  "tweet": "Rock: Nerf paper, scissors is fine.",
  "timestamp": 1366150681
},
{
  "username": "BlizzardCS",
  "tweet": "Works as intended. Terran is IMBA.",
  "timestamp": 1366154481
},
{
  "username": "DarkTemplar",
  "tweet": "From the shadows I come!",
  "timestamp": 1366154681
},
{
  "username": "VoidRay",
  "tweet": "Prismatic core online!",
  "timestamp": 1366160000
}
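Each JSON object above must match the Tweet schema field for field, or the AvroSerDe will not be able to deserialize the rows. A small sketch (the `conforms` helper is hypothetical, written here for illustration) that checks records against the schema's declared types:

```python
# Map the Avro primitive types used by the Tweet schema to Python types.
AVRO_TO_PY = {"string": str, "long": int}

schema = {
    "type": "record", "name": "Tweet", "namespace": "com.miguno.avro",
    "fields": [
        {"name": "username",  "type": "string"},
        {"name": "tweet",     "type": "string"},
        {"name": "timestamp", "type": "long"},
    ],
}

records = [
    {"username": "miguno",  "tweet": "Rock: Nerf paper, scissors is fine.", "timestamp": 1366150681},
    {"username": "VoidRay", "tweet": "Prismatic core online!",              "timestamp": 1366160000},
]

def conforms(record, schema):
    """True if the record has exactly the schema's fields, each with a matching type."""
    fields = {f["name"]: AVRO_TO_PY[f["type"]] for f in schema["fields"]}
    return set(record) == set(fields) and all(
        isinstance(record[name], typ) for name, typ in fields.items()
    )

assert all(conforms(r, schema) for r in records)
```

In practice the Avro writer (e.g. the avro Python or Java library) enforces this for you when it serializes the records into the container file.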
Suppose the compressed data is stored in the file /home/wyp/twitter.avro. We copy it into the /user/wyp/examples/input/ directory on HDFS:

hadoop fs -put /home/wyp/twitter.avro /user/wyp/examples/input/
Then we can query it in Hive:

hive> select * from tweets limit 5;
OK
miguno       Rock: Nerf paper, scissors is fine.   1366150681
BlizzardCS   Works as intended. Terran is IMBA.    1366154481
DarkTemplar  From the shadows I come!              1366154681
VoidRay      Prismatic core online!                1366160000
Time taken: 0.495 seconds, Fetched: 4 row(s)
Of course, we can also take the schema from avro.schema.literal:
{
  "type": "record",
  "name": "Tweet",
  "namespace": "com.miguno.avro",
  "fields": [
    {
      "name": "username",
      "type": "string"
    },
    {
      "name": "tweet",
      "type": "string"
    },
    {
      "name": "timestamp",
      "type": "long"
    }
  ]
}
and store it in a file, for example twitter.avsc. The CREATE TABLE statement above then becomes:
CREATE EXTERNAL TABLE tweets
COMMENT "A table backed by Avro data with the Avro schema stored in HDFS"
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/user/wyp/examples/input/'
TBLPROPERTIES (
  'avro.schema.url'='hdfs:///user/wyp/examples/schema/twitter.avsc'
);
The effect is the same as before.
