Reposted from: http://www.aboutyun.com/thread-7327-1-1.html
1. Hive does not support the implicit (comma) join syntax
In SQL, an inner join of two tables can be written as: select * from dual a, dual b where a.key = b.key;
In Hive it must be written as: select * from dual a join dual b on a.key = b.key;
rather than the traditional form: SELECT t1.a1 AS c1, t2.b1 AS c2 FROM t1, t2 WHERE t1.a2 = t2.b2;
2. The semicolon character
The semicolon terminates a statement in HiveQL just as in SQL, but HiveQL's parser is less clever about recognizing it. For example: select concat(key,concat(';',key)) from dual;
fails at parse time with: FAILED: Parse Error: line 0:-1 mismatched input '<EOF>' expecting ) in function specification. The workaround is to escape the semicolon with its octal ASCII code, so the statement becomes: select concat(key,concat('\073',key)) from dual;
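The workaround relies on 073 in octal being 59 in decimal, the ASCII code of the semicolon, which is easy to confirm outside Hive (a Python check, purely illustrative):

```python
# '\073' escapes the semicolon via its octal ASCII code: 0o73 == 59 == ord(';').
assert 0o73 == 59
semicolon = chr(0o73)
print(semicolon)  # -> ;
```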
3. IS [NOT] NULL
In SQL, NULL denotes a missing value. Beware, though, that in HiveQL a STRING column holding an empty string (length 0) is not NULL: IS NULL evaluates to false for it.
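A filter that means "missing" therefore has to test both conditions; a HiveQL sketch (the table and column names are just placeholders):

```sql
-- NULL and the empty string are distinct values in Hive;
-- check both when either should count as "missing".
SELECT * FROM dual WHERE key IS NULL OR key = '';
```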
4. Hive does not support inserting into part of an existing table or partition;
it only supports overwriting an entire table. For example:
INSERT OVERWRITE TABLE t1 SELECT * FROM t2;
5. Hive does not support INSERT INTO ... VALUES (...), UPDATE, or DELETE
This avoids the complex locking that mixed read/write workloads would otherwise require. INSERT INTO simply appends data to a table or partition.
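The contrast between the two write modes, sketched in HiveQL (table names are placeholders):

```sql
-- Appends t2's rows to whatever t1 already holds.
INSERT INTO TABLE t1 SELECT * FROM t2;

-- Replaces t1's previous contents entirely.
INSERT OVERWRITE TABLE t1 SELECT * FROM t2;
```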
6. Hive can embed custom map/reduce programs to handle complex logic

FROM (
  MAP doctext USING 'python wc_mapper.py' AS (word, cnt)
  FROM docs
  CLUSTER BY word
) a
REDUCE word, cnt USING 'python wc_reduce.py';

Here doctext is the input column and word, cnt are the map script's output. CLUSTER BY hashes on word and feeds the result to the reduce script. The map and reduce scripts can also be used independently, for example:

FROM (
  FROM session_table
  SELECT sessionid, tstamp, data
  DISTRIBUTE BY sessionid SORT BY tstamp
) a
REDUCE sessionid, tstamp, data USING 'session_reducer.sh';

DISTRIBUTE BY controls how rows are routed to the reduce tasks.
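The wc_mapper.py and wc_reduce.py scripts themselves are not shown in the post; below is a minimal sketch of the streaming logic they would contain (the function names and record format are assumptions):

```python
from itertools import groupby

def wc_map(lines):
    # Emit one tab-separated "word<TAB>1" record per word, Hive-streaming style.
    for line in lines:
        for word in line.strip().split():
            yield '%s\t1' % word

def wc_reduce(lines):
    # Sum the counts per word. Relies on the input arriving grouped by word,
    # which is exactly what CLUSTER BY word guarantees.
    pairs = (line.strip().split('\t') for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield '%s\t%d' % (word, sum(int(cnt) for _, cnt in group))
```

In the real scripts each function would read sys.stdin and print to stdout, which is how Hive streams rows to and from them.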
7. Hive can write transformed data to several tables at once, as well as to partitions, HDFS, and local directories
This avoids the cost of scanning the input table multiple times:

FROM t3
INSERT OVERWRITE TABLE t2
  SELECT t3.c2, count(1)
  WHERE t3.c1 <= 20
  GROUP BY t3.c2
INSERT OVERWRITE DIRECTORY '/output_dir'
  SELECT t3.c2, avg(t3.c1)
  WHERE t3.c1 > 20 AND t3.c1 <= 30
  GROUP BY t3.c2
INSERT OVERWRITE LOCAL DIRECTORY '/home/dir'
  SELECT t3.c2, sum(t3.c1)
  WHERE t3.c1 > 30
  GROUP BY t3.c2;
A concrete example:
Step 1: Create a table
CREATE TABLE u_data (
userid INT,
movieid INT,
rating INT,
unixtime STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
Download the sample data file and unpack it:
wget http://www.grouplens.org/system/files/ml-data.tar__0.gz
tar xvzf ml-data.tar__0.gz
Step 2: Load the data into the table:
LOAD DATA LOCAL INPATH 'ml-data/u.data' OVERWRITE INTO TABLE u_data;
Step 3: Count the total number of rows:
SELECT COUNT(1) FROM u_data;
Step 4: Now for some more complex analysis:
Create a file weekday_mapper.py that maps each record to its weekday:

import sys
import datetime

for line in sys.stdin:
    line = line.strip()
    userid, movieid, rating, unixtime = line.split('\t')
Step 5: Generate the weekday for each record (the loop body continues)
    weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
    print('\t'.join([userid, movieid, rating, str(weekday)]))
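The weekday computation can be sanity-checked locally. The check below uses utcfromtimestamp rather than the script's fromtimestamp so the result does not depend on the local timezone (the timestamp value is just an example):

```python
import datetime

# 881250949 seconds after the epoch falls on 1997-12-04 UTC, a Thursday.
weekday = datetime.datetime.utcfromtimestamp(881250949).isoweekday()
print(weekday)  # -> 4 (isoweekday: Monday=1 ... Sunday=7)
```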
Step 6: Use the mapper script
-- Create a table, splitting each row's fields on the delimiter
CREATE TABLE u_data_new (
userid INT,
movieid INT,
rating INT,
weekday INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';
-- Load the Python file into the session
add FILE weekday_mapper.py;
Step 7: Split the data by weekday
INSERT OVERWRITE TABLE u_data_new
SELECT
TRANSFORM (userid, movieid, rating, unixtime)
USING 'python weekday_mapper.py'
AS (userid, movieid, rating, weekday)
FROM u_data;
SELECT weekday, COUNT(1)
FROM u_data_new
GROUP BY weekday;
Step 8: Process Apache weblog data
Parse the web log lines with a regular expression first, then load the required fields into the table.
add jar ../build/contrib/hive_contrib.jar;
CREATE TABLE apachelog (
host STRING,
identity STRING,
user STRING,
time STRING,
request STRING,
status STRING,
size STRING,
referer STRING,
agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
"output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"
)
STORED AS TEXTFILE;