Loading and Using Hive Native and Composite Types


Native types

Hive's native types are TINYINT, SMALLINT, INT, BIGINT, BOOLEAN, FLOAT, DOUBLE, STRING, BINARY (available in Hive 0.8.0 and later), and TIMESTAMP (available in Hive 0.8.0 and later). Loading data of these types is easy: set the column delimiter in the table definition, then write the fields to a file separated by that delimiter.

Suppose we have a user login table:

CREATE TABLE login (
  uid  BIGINT,
  ip  STRING
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

The uid and ip fields of the login table are separated by ','.

Generate the data for the Hive table:

# printf "%s,%s\n" 3105007001 192.168.1.1 >> login.txt
# printf "%s,%s\n" 3105007002 192.168.1.2 >> login.txt

Contents of login.txt:

# cat login.txt                                                                                                                        
3105007001,192.168.1.1
3105007002,192.168.1.2

Load the data into the Hive table:

LOAD DATA LOCAL INPATH '/home/hadoop/login.txt' OVERWRITE INTO TABLE login PARTITION (dt='20130101'); 

View the data:

select uid,ip from login where dt='20130101';
3105007001    192.168.1.1
3105007002    192.168.1.2

 

array

Suppose the login table is:

CREATE TABLE login_array (
  ip  STRING,
  uid  array<BIGINT>
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY '|'
STORED AS TEXTFILE;

Here multiple users log in from each ip. The ip and uid fields are separated by ',', and the elements of the uid array are separated by '|'.

 

Generate the data for the Hive table:

# printf "%s,%s|%s|%s\n" 192.168.1.1 3105007010 3105007011 3105007012 >> login_array.txt
# printf "%s,%s|%s|%s\n" 192.168.1.2 3105007020 3105007021 3105007022 >> login_array.txt

Contents of login_array.txt:

# cat login_array.txt
192.168.1.1,3105007010|3105007011|3105007012
192.168.1.2,3105007020|3105007021|3105007022

 

Load the data into the Hive table:

LOAD DATA LOCAL INPATH '/home/hadoop/login_array.txt' OVERWRITE INTO TABLE login_array PARTITION (dt='20130101'); 

 

View the data:

select ip,uid from login_array where dt='20130101';
192.168.1.1    [3105007010,3105007011,3105007012]
192.168.1.2    [3105007020,3105007021,3105007022]

Using the array

select ip,uid[0] from login_array where dt='20130101'; -- index into the array by position

select ip,size(uid) from login_array where dt='20130101'; -- get the array length

select ip from login_array where dt='20130101' and array_contains(uid, 3105007011); -- search the array

For more operations, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-CollectionFunctions
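
Another common array operation is flattening the array into one row per element with explode and LATERAL VIEW; a minimal sketch (the aliases t and single_uid are illustrative):

-- one output row per uid in the array
select ip, single_uid
from login_array LATERAL VIEW explode(uid) t AS single_uid
where dt='20130101';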

 

map

Suppose the login table is:

CREATE TABLE login_map (
  ip  STRING,
  uid  STRING,
  gameinfo map<string,bigint>
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY '|'
MAP KEYS TERMINATED BY ':'
STORED AS TEXTFILE;

Here every user in the login table has game info, and a user can have several entries: the key is the game name and the value is the score in that game. Keys and values in the map are separated by ':', and map entries are separated by '|'.

 

Generate the data for the Hive table:

# printf "%s,%s,%s:%s|%s:%s|%s:%s\n" 192.168.1.1  3105007010 wow 10 cf 1 qqgame 2  >> login_map.txt
# printf "%s,%s,%s:%s|%s:%s|%s:%s\n" 192.168.1.2  3105007012 wow 20 cf 21 qqgame 22  >> login_map.txt

 

Contents of login_map.txt:

# cat login_map.txt
192.168.1.1,3105007010,wow:10|cf:1|qqgame:2
192.168.1.2,3105007012,wow:20|cf:21|qqgame:22
Load the data into the Hive table:

LOAD DATA LOCAL INPATH '/home/hadoop/login_map.txt' OVERWRITE INTO TABLE login_map PARTITION (dt='20130101'); 

 

View the data:

select ip,uid,gameinfo from login_map where dt='20130101';
192.168.1.1    3105007010    {"wow":10,"cf":1,"qqgame":2}
192.168.1.2    3105007012    {"wow":20,"cf":21,"qqgame":22}

 

Using the map

select ip,uid,gameinfo['wow'] from login_map where dt='20130101'; -- index into the map by key

select ip,uid,size(gameinfo) from login_map where dt='20130101'; -- get the map size

select ip,uid from login_map where dt='20130101' and array_contains(map_keys(gameinfo),'wow'); -- inspect the map keys to find users who play wow

For more operations, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-CollectionFunctions
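
explode also works on maps (Hive 0.8.0 and later), producing one row per key-value pair; a minimal sketch (the aliases t, game, and score are illustrative):

-- one output row per (game, score) entry in the map
select ip, uid, game, score
from login_map LATERAL VIEW explode(gameinfo) t AS game, score
where dt='20130101';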

 

struct

Suppose the login table is:

CREATE TABLE login_struct (
  ip  STRING,
  user  struct<uid:bigint,name:string>
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY '|'
MAP KEYS TERMINATED BY ':'
STORED AS TEXTFILE;

user is a struct containing the user's uid and name.

 

Generate the data for the Hive table:

printf "%s,%s|%s|\n" 192.168.1.1  3105007010 blue  >> login_struct.txt
printf "%s,%s|%s|\n" 192.168.1.2  3105007012 ggjucheng  >> login_struct.txt

 

 

Contents of login_struct.txt:

# cat login_struct.txt
192.168.1.1,3105007010|blue
192.168.1.2,3105007012|ggjucheng
Load the data into the Hive table:

LOAD DATA LOCAL INPATH '/home/hadoop/login_struct.txt' OVERWRITE INTO TABLE login_struct PARTITION (dt='20130101'); 

 

View the data:

select ip,user from login_struct where dt='20130101';
192.168.1.1    {"uid":3105007010,"name":"blue"}
192.168.1.2    {"uid":3105007012,"name":"ggjucheng"}

 

Using the struct

select ip,user.uid,user.name from login_struct where dt='20130101';
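
Struct fields can also appear in predicates; a minimal sketch:

-- filter on a field inside the struct
select ip from login_struct where dt='20130101' and user.name = 'blue';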

 

union

Unions are rarely used, so they are not covered in detail here.
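
For reference only, a uniontype column is declared as below. This is a minimal sketch: the table name login_union is illustrative, and union values are normally produced in a query with the create_union UDF rather than loaded from delimited text:

CREATE TABLE login_union (
  ip STRING,
  id uniontype<bigint,string>  -- id holds either a numeric uid or a string name
);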

 

Nested composite types

The array, map, and struct types above all had native-type elements. What if an element is itself a composite type? How do we load the data then?

Suppose the login table is:

CREATE TABLE login_game_complex (
  ip STRING,
  uid STRING,
  gameinfo map<bigint,struct<name:string,score:bigint,level:string>>
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED
STORED AS TEXTFILE;

Here every user in the login table has game info with multiple entries: the key is the game id and the value is a struct containing the game's name, score, and level.

The input file format for such a complex type is painful to produce by hand, and when the nesting runs deep it is easy to get the format wrong. The key point: as the nesting level increases, the default delimiters step through \001, \002, \003, and so on.
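
For illustration, with these defaults a single row of login_game_complex would look roughly like the following, writing \001 as ^A, \002 as ^B, \003 as ^C, and \004 as ^D. This is a sketch derived from the level rule above, not verified serializer output:

192.168.1.0^A3105007010^A1^Cwow^D100^Dv1^B2^Ccf^D100^Dv2

^A separates the top-level columns, ^B separates the map entries, ^C separates each key from its value, and ^D separates the struct fields inside each value.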

Generating such a file from the shell and loading it with LOAD DATA is not covered here; interested readers can study the serialize method of org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe in the Hive source code.

Instead, here is another way to get the data in: INSERT. First load the data into a simple flat table with LOAD DATA, then INSERT from it into the table with the nested composite type.

 

Create the simple table:

CREATE TABLE login_game_simple (
  ip STRING,
  uid STRING,
  gameid bigint,
  gamename string,
  gamescore bigint,
  gamelevel string 
) 
PARTITIONED BY (dt STRING) 
ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

Generate the contents of login_game_simple.txt:

192.168.1.0,3105007010,1,wow,100,v1
192.168.1.0,3105007010,2,cf,100,v2
192.168.1.0,3105007010,3,qqgame,100,v3
192.168.1.2,3105007011,1,wow,101,v1
192.168.1.2,3105007011,3,qqgame,101,v3
192.168.1.2,3105007012,1,wow,102,v1
192.168.1.2,3105007012,2,cf,102,v2
192.168.1.2,3105007012,3,qqgame,102,v3
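
Save these rows to a file and load them into the simple table, mirroring the earlier examples (the local path /home/hadoop/login_game_simple.txt is assumed here):

LOAD DATA LOCAL INPATH '/home/hadoop/login_game_simple.txt' OVERWRITE INTO TABLE login_game_simple PARTITION (dt='20130101');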

With the simple table loaded, build the complex gameinfo map structure and insert it into login_game_complex:

INSERT OVERWRITE TABLE login_game_complex PARTITION (dt='20130101')  
select ip,uid,map(gameid, named_struct('name',gamename,'score',gamescore,'level',gamelevel) ) FROM login_game_simple  where dt='20130101' ;

 

Query the data:

select ip,uid,gameinfo from login_game_complex where dt='20130101';
192.168.1.0    3105007010    {1:{"name":"wow","score":100,"level":"v1"}}
192.168.1.0    3105007010    {2:{"name":"cf","score":100,"level":"v2"}}
192.168.1.0    3105007010    {3:{"name":"qqgame","score":100,"level":"v3"}}
192.168.1.2    3105007011    {1:{"name":"wow","score":101,"level":"v1"}}
192.168.1.2    3105007011    {3:{"name":"qqgame","score":101,"level":"v3"}}
192.168.1.2    3105007012    {1:{"name":"wow","score":102,"level":"v1"}}
192.168.1.2    3105007012    {2:{"name":"cf","score":102,"level":"v2"}}
192.168.1.2    3105007012    {3:{"name":"qqgame","score":102,"level":"v3"}}

This only demonstrates how to populate a nested composite type, so the result is just an example. For a proper load you would group by ip and uid and merge each user's gameinfo entries into a single map, which requires a custom aggregate function; Hive has no built-in function for this, and for brevity writing one is not covered here.

 

