A Summary of Common HiveQL


I have been using Hive for multi-dimensional data analysis recently; this post summarizes the HiveQL commands I use most often.

1. Creating Tables

Create a table backed by plain-text data:

create table `dmp.dim_con_adx_id_name` (
	`adx_id` string comment 'ADX ID'
	, `adx_name` string comment 'ADX name'
	, `update_dt` string comment 'update time (day granularity)'
)
comment 'mapping between ADX IDs and ADX names'
row format delimited 
fields terminated by ','
stored as textfile
;

If a table is not declared as an external table, it defaults to a managed table. The two differ in their load and drop behavior. A managed table is loaded with load data inpath (the path may be a local directory or an HDFS directory), which places the file under the HDFS directory /user/hive/warehouse/. An external table's data lives at the path given in its location clause, usually combined with partitions that describe how the data was produced. Dropping a managed table deletes both the metadata and the data under /user/hive/warehouse/, while dropping an external table deletes only the metadata. Load a local file into a managed table:

load data local inpath 'adx.csv' overwrite into table dmp.dim_con_adx_id_name;
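The managed-versus-external difference can be checked and exercised directly. A minimal sketch using the tables defined in this post:

```sql
-- "Table Type" in the output shows MANAGED_TABLE or EXTERNAL_TABLE:
describe formatted dmp.dim_con_adx_id_name;

-- Dropping a managed table deletes the metadata AND the files under
-- /user/hive/warehouse/:
drop table if exists dmp.dim_con_adx_id_name;

-- Dropping an external table deletes only the metadata; the files at
-- its location stay on HDFS:
drop table if exists dmp.dwd_evt_ad_user_action_di;
```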

Create an external table backed by ORC files:

create external table `dmp.dwd_evt_ad_user_action_di` (
    `uid` string comment 'user ID'
    , `adx_name` string comment 'ADX name'
    , `media_name` string comment 'media name'
    , `is_exposure` string comment 'impression flag'
    , `is_click` string comment 'click flag'
)
comment 'daily table of ad user clicks'
partitioned by (dt string comment 'day partition')
stored as orc
location '/<hdfs path>'
;

2. Partition

Add a partition and specify its location:

alter table dmp.dwd_evt_ad_user_action_di add if not exists partition (dt='20160520') location '20160520';

Reset a partition's location:

-- must be an absolute path
alter table dmp.dwd_evt_ad_user_action_di partition (dt='20160520') set location '<hdfs path>';  

Drop a partition:

alter table dmp.dwd_evt_ad_user_action_di drop if exists partition (dt='20160520') ignore protection;

List all partitions, and show the details of a specific partition:

show partitions dwd_evt_ad_user_action_di;

describe formatted dwd_evt_ad_user_action_di partition (dt='20160520');

3. UDF

Hive's built-in UDFs are extensive and cover most everyday needs.

Extract a substring via regular-expression matching:

regexp_extract(dvc_model, '(.*)_(.*)', 2) as imei
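For instance, with a hypothetical device string 'iPhone6s_862548039543111' (the value is made up), the third argument selects the capture group:

```sql
-- group 1 is the model, group 2 the IMEI:
select regexp_extract('iPhone6s_862548039543111', '(.*)_(.*)', 1)   -- 'iPhone6s'
	, regexp_extract('iPhone6s_862548039543111', '(.*)_(.*)', 2);   -- '862548039543111'
```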

The complex data types map, struct, struct with named fields, array, and union are constructed as follows:

map(key1, value1, key2, value2, ...)
struct(val1, val2, val3, ...)
named_struct(name1, val1, name2, val2, ...)
array(val1, val2, ...)
create_union(tag, val1, val2, ...)
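A minimal sketch of constructing the common complex types inline (the values are illustrative only):

```sql
select map('os', 'android', 'brand', 'xiaomi') as m       -- map<string,string>
	, struct('game', 'MOBA') as s                         -- fields named col1, col2
	, named_struct('tag', 'game', 'label', 'MOBA') as ns  -- explicit field names
	, array('20160518', '20160519', '20160520') as arr;   -- array<string>
```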

Access a value inside a complex type:

array: A[n]
map: M[key]
struct: S.x
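A self-contained sketch of the three access forms (the column values are illustrative only):

```sql
select arr[0]     -- first array element
	, m['os']     -- map value for key 'os'
	, s.tag       -- struct field
from (
	select array('a', 'b') as arr
		, map('os', 'android') as m
		, named_struct('tag', 'game', 'label', 'MOBA') as s
) t;
```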

Conditional logic with case when, for example to supply a default value in a left join:

select a.uid
	, a.media_name
	, case
		when b.tags is NULL then array(named_struct('tag', 'EMPTY', 'label', 'EMPTY'))
		else b.tags
	end as tags
from (
	select uid
		, media_name
    from dwd_evt_ad_user_action_di
    where dt = '{biz_dt}'
    	and is_exposure = '1'
) a
left outer join dwb_par_multi_user_tags_dd b 
on a.uid = b.uid;

4. UDTF

UDTFs are mainly used to flatten complex types: explode flattens an array or a map, and inline flattens an array<struct>. These built-in UDTFs are used together with lateral view:

select myCol1, col2 FROM baseTable
lateral view explode(col1) myTable1 AS myCol1;

select uid
	, tag
	, label
from dwb_par_multi_user_tags_dd
lateral view inline(tags) tag_tb;
-- tags: array<struct<tag:string,label:string>>
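explode also works on a map, producing one row per entry with a key column and a value column; unlike inline, the output columns need explicit aliases. A sketch assuming a hypothetical table t with a map column m:

```sql
-- m: map<string,string>; each (key, value) pair becomes one row
select uid, k, v
from t
lateral view explode(m) m_tb as k, v;
```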

5. Multi-Dimensional Analysis

Hive provides the grouping sets, rollup, and cube clauses for multi-dimensional analysis. They cover custom combinations of dimensions, roll-up combinations (\(n+1\) of them), and all combinations of dimensions (\(2^n\) of them), respectively. For example:

SELECT a, b, SUM( c ) 
FROM tab1 
GROUP BY a, b GROUPING SETS ( (a, b), a, b, ( ) )

-- equivalent aggregate query with group by
SELECT a, b, SUM( c ) FROM tab1 GROUP BY a, b
UNION
SELECT a, null, SUM( c ) FROM tab1 GROUP BY a, null
UNION
SELECT null, b, SUM( c ) FROM tab1 GROUP BY null, b
UNION
SELECT null, null, SUM( c ) FROM tab1


GROUP BY a, b, c WITH ROLLUP 
-- is equivalent to 
GROUP BY a, b, c GROUPING SETS ( (a, b, c), (a, b), (a), ( ))


GROUP BY a, b, c WITH CUBE 
-- is equivalent to 
GROUP BY a, b, c GROUPING SETS ( (a, b, c), (a, b), (b, c), (a, c), (a), (b), (c), ( ))

In addition, Hive provides GROUPING__ID (note the double underscore), which numbers each dimension combination so you can tell which combination an aggregated row belongs to. For example:

select adx_name, media_name, grouping__id, count(*) as pv
from dwd_evt_ad_user_action_di
group by adx_name, media_name with rollup;

Save query results to a local directory with a specified field delimiter:

INSERT OVERWRITE LOCAL DIRECTORY '/home/<path>/<to>' 
ROW FORMAT DELIMITED 
FIELDS TERMINATED BY '\t' 
select media_name, count(distinct uid) as uv
from dwd_evt_ad_user_action_di 
where dt = '20160520' 
	and is_exposure = '1'
group by media_name;

