I. Creating partition tables
The partition columns must be declared when the table is defined.
a. Single-partition table: create table day_table (id int, content string) partitioned by (dt string); This creates a table partitioned by day; the table structure exposes three columns: id, content, and dt.
Data is separated into one directory per dt value.
b. Two-level partition table: create table day_hour_table (id int, content string) partitioned by (dt string, hour string); This creates a table partitioned by day and hour; the columns dt and hour are added to the table structure.
Data is first separated into directories by dt, then into hour subdirectories.
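For reference, assuming the default warehouse location /user/hive/warehouse, the two tables above would be laid out roughly as follows (the dt and hour values are illustrative):
/user/hive/warehouse/day_table/dt=2008-08-08/
/user/hive/warehouse/day_hour_table/dt=2008-08-08/hour=08/
/user/hive/warehouse/day_hour_table/dt=2008-08-08/hour=09/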
Syntax for adding partitions to an existing table: ALTER TABLE table_name ADD
partition_spec [ LOCATION 'location1' ]
partition_spec [ LOCATION 'location2' ] ...
ALTER TABLE day_hour_table ADD
PARTITION (dt='2008-08-08', hour='08')
LOCATION '/path/pv1.txt';
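The ADD syntax above also allows several partitions to be registered in one statement; a sketch along those lines (the locations are hypothetical):
ALTER TABLE day_hour_table ADD
PARTITION (dt='2008-08-09', hour='08') LOCATION '/path/pv2'
PARTITION (dt='2008-08-09', hour='09') LOCATION '/path/pv3';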
Syntax for dropping partitions: ALTER TABLE table_name DROP
partition_spec, partition_spec, ...
ALTER TABLE ... DROP PARTITION removes a partition; the partition's metadata and data are deleted together. Example:
ALTER TABLE day_hour_table DROP PARTITION (dt='2008-08-08', hour='09');
Syntax for loading data into a partitioned table:
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
Examples:
LOAD DATA INPATH '/user/pv.txt' INTO TABLE day_hour_table PARTITION(dt='2008-08-08', hour='08');
LOAD DATA LOCAL INPATH '/user/hua/*' INTO TABLE day_hour PARTITION(dt='2010-07-07');
No transformation is applied to the data when it is loaded; the LOAD operation simply copies (for LOCAL files) or moves (for HDFS paths) the data into the location corresponding to the Hive table, automatically creating a directory for the partition under the table.
A query restricted to a partition looks like: SELECT day_table.* FROM day_table WHERE day_table.dt >= '2008-08-08';
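Assuming the default warehouse location, the first LOAD above would place pv.txt under the partition directory, which can be verified with a listing such as:
$ hadoop fs -ls /user/hive/warehouse/day_hour_table/dt=2008-08-08/hour=08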
Listing the partitions of a table:
hive> show partitions day_hour_table;
OK
dt=2008-08-08/hour=08
dt=2008-08-08/hour=09
dt=2008-08-09/hour=09
II. Static and dynamic partition tables
A partition is a mechanism provided by Hive: by specifying one or more partition keys, the user controls how the data is stored and thereby speeds up queries over it.
A table may declare several partition keys; each partition exists in Hive as a directory.
1. Static partition:
Prepare the files /home/work/data/test3.txt and /home/work/data/test4.txt:
$ cat /home/work/data/test3.txt
1,zxm
2,ljz
3,cds
4,mac
5,android
6,symbian
7,wp
$ cat /home/work/data/test4.txt
8,zxm
9,ljz
10,cds
11,mac
12,android
13,symbian
14,wp
Create the table:
hive> create table student_tmp(id INT, name STRING)
> partitioned by(academy STRING, class STRING)
> row format delimited fields terminated by ',';
OK
Time taken: 6.505 seconds
id and name are real columns; the partition columns academy and class are pseudo-columns (their values come from the partition spec, not from the data files).
Load the data (here the data is loaded directly into a partition; in Hive versions before 0.6, the partition had to be created before data could be loaded into it):
hive> load data local inpath '/home/work/data/test3.txt' into table student_tmp partition(academy='computer', class='034');
Copying data from file:/home/work/data/test3.txt
Copying file: file:/home/work/data/test3.txt
Loading data to table default.student_tmp partition (academy=computer, class=034)
OK
Time taken: 0.898 seconds
hive> load data local inpath '/home/work/data/test3.txt' into table student_tmp partition(academy='physics', class='034');
Copying data from file:/home/work/data/test3.txt
Copying file: file:/home/work/data/test3.txt
Loading data to table default.student_tmp partition (academy=physics, class=034)
OK
Time taken: 0.256 seconds
Inspect the file layout in HDFS:
$ hadoop fs -ls /user/hive/warehouse/student_tmp/
Found 2 items
drwxr-xr-x - work supergroup 0 2012-07-30 18:47 /user/hive/warehouse/student_tmp/academy=computer
drwxr-xr-x - work supergroup 0 2012-07-30 19:00 /user/hive/warehouse/student_tmp/academy=physics
$ hadoop fs -ls /user/hive/warehouse/student_tmp/academy=computer
Found 1 items
drwxr-xr-x - work supergroup 0 2012-07-30 18:47 /user/hive/warehouse/student_tmp/academy=computer/class=034
Query the data:
hive> select * from student_tmp where academy='physics';
OK
1 zxm physics 034
2 ljz physics 034
3 cds physics 034
4 mac physics 034
5 android physics 034
6 symbian physics 034
7 wp physics 034
Time taken: 0.139 seconds
The above is an example of a static partition: the user explicitly states which partition the data belongs to by specifying, e.g., partition(academy='computer', class='034') at load time.
Static partitions are typically used when the processing time (such as the date) serves as the partition key.
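For example, a daily batch load using the date as the static partition key might look like this (reusing day_table from the first section; the path is hypothetical):
LOAD DATA INPATH '/log/2012-07-30/pv.txt' INTO TABLE day_table PARTITION (dt='2012-07-30');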
However, we often need to insert a large amount of data into a partitioned table without knowing in advance which partition each row belongs to; this is where dynamic partitions come in.
Using dynamic partitions requires setting hive.exec.dynamic.partition to true.
Only some of the partition columns may be made dynamic, e.g. partition(academy='computer', class);
or all of them may be dynamic, e.g. partition(academy, class);
Making all partition columns dynamic additionally requires hive.exec.dynamic.partition.mode=nonstrict.
Note that a dynamic partition column may not precede a static one: a spec such as partition(academy, class='034'); is not allowed. A mixed static/dynamic example is sketched below.
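As a sketch of the mixed case (a static academy value with a dynamic class column, assuming the student and student_tmp tables defined in the next subsection):
hive> set hive.exec.dynamic.partition=true;
hive> insert overwrite table student partition(academy='computer', class)
    > select id, name, class from student_tmp
    > where academy='computer';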
2. Dynamic partition:
Create the table:
hive> create table student(id INT, name STRING)
> partitioned by(academy STRING, class STRING)
> row format delimited fields terminated by ',';
OK
Time taken: 0.393 seconds
Set the parameters:
hive> set hive.exec.dynamic.partition.mode=nonstrict;
hive> set hive.exec.dynamic.partition=true;
Insert the data:
hive> insert overwrite table student partition(academy, class)
> select id,name,academy,class from student_tmp
> where class='034';
Total MapReduce jobs = 2
.........
OK
Time taken: 29.616 seconds
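To check which partitions the dynamic insert created, show partitions can be used; given the two earlier loads into student_tmp (both with class='034'), the output should look roughly like:
hive> show partitions student;
OK
academy=computer/class=034
academy=physics/class=034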
Query the data:
hive> select * from student where academy='physics';
OK
1 zxm physics 034
2 ljz physics 034
3 cds physics 034
4 mac physics 034
5 android physics 034
6 symbian physics 034
7 wp physics 034
Time taken: 0.165 seconds
Inspect the files:
$ hadoop fs -ls /user/hive/warehouse/student/
Found 2 items
drwxr-xr-x - work supergroup 0 2012-07-30 19:22 /user/hive/warehouse/student/academy=computer
drwxr-xr-x - work supergroup 0 2012-07-30 19:22 /user/hive/warehouse/student/academy=physics
3. Summary:
Hive partitioning splits the data into separate partitions stored in separate directories and files, which reduces the amount of data a query has to process.
For example, a Hive SELECT over an unpartitioned table scans the entire table, which can be very expensive; if the table is partitioned on the relevant dimension, the amount of data processed may shrink dramatically.
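As an illustration of this partition pruning (reusing day_table from the first section), the filter below limits the scan to a single dt directory instead of the whole table:
SELECT count(*) FROM day_table WHERE dt = '2008-08-08';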
4. Appendix: partition-related parameters:
hive.exec.dynamic.partition (default false): set to true to allow dynamic partitions.
hive.exec.dynamic.partition.mode (default strict): the dynamic partition mode (nonstrict allows all partition columns to be dynamic; strict does not).
hive.exec.max.dynamic.partitions.pernode (default 100): the maximum number of dynamic partitions each mapper/reducer node may create; exceeding it raises an error.
hive.exec.max.dynamic.partitions (default 1000): the maximum total number of dynamic partitions a single DML statement may create.
hive.exec.max.created.files (default 100000): the maximum total number of HDFS files a MapReduce job may create.
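A typical session setup before a dynamic partition insert might therefore look like this (the two max values are purely illustrative, not recommendations):
hive> set hive.exec.dynamic.partition=true;
hive> set hive.exec.dynamic.partition.mode=nonstrict;
hive> set hive.exec.max.dynamic.partitions.pernode=200;
hive> set hive.exec.max.dynamic.partitions=2000;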