This document is adapted from http://blog.csdn.net/hguisu/article/details/7256833
1 DDL
1.1 Create/Drop/Alter/Use Database
1.1.1 Create Database
CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
[COMMENT database_comment]
[LOCATION hdfs_path]
[WITH DBPROPERTIES (property_name=property_value, ...)];
Note: DATABASE and SCHEMA are interchangeable.
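A minimal usage sketch of the syntax above (the database name, path, and properties below are hypothetical, not from the original text):

```sql
-- Create a database with a comment, a custom HDFS location, and properties.
CREATE DATABASE IF NOT EXISTS analytics
COMMENT 'Database for ad-hoc analytics'
LOCATION '/user/hive/warehouse/analytics.db'
WITH DBPROPERTIES ('owner' = 'etl-team', 'created' = '2016-01-01');
```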
1.1.2 Drop Database
DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
Note: RESTRICT is the default; use CASCADE to drop a database that still contains tables.
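For instance (hypothetical database name), dropping a non-empty database fails under the default RESTRICT and needs CASCADE:

```sql
-- Fails if analytics still contains tables (RESTRICT is implied):
DROP DATABASE IF EXISTS analytics;
-- Drops the database together with all its tables:
DROP DATABASE IF EXISTS analytics CASCADE;
```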
1.1.3 Alter Database
ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...); -- (Note: SCHEMA added in Hive 0.14.0)
ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role; -- (Note: Hive 0.13.0 and later; SCHEMA added in Hive 0.14.0)
1.1.4 Use Database
USE database_name;
USE DEFAULT;
-- Get the current database
SELECT current_database();
1.2 Create/Drop/Truncate Table
1.2.1 Create Table
1.2.1.1 語法
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name -- (Note: TEMPORARY available in Hive 0.14.0 and later)
[(col_name data_type [COMMENT col_comment], ...)]
[COMMENT table_comment]
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
[SKEWED BY (col_name, col_name, ...) -- (Note: Available in Hive 0.10.0 and later)]
ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
[STORED AS DIRECTORIES]
[
[ROW FORMAT row_format]
[STORED AS file_format]
| STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] -- (Note: Available in Hive 0.6.0 and later)
]
[LOCATION hdfs_path]
[TBLPROPERTIES (property_name=property_value, ...)] -- (Note: Available in Hive 0.6.0 and later)
[AS select_statement]; -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
LIKE existing_table_or_view_name
[LOCATION hdfs_path];
data_type
: primitive_type
| array_type
| map_type
| struct_type
| union_type -- (Note: Available in Hive 0.7.0 and later)
primitive_type
: TINYINT
| SMALLINT
| INT
| BIGINT
| BOOLEAN
| FLOAT
| DOUBLE
| DOUBLE PRECISION -- (Note: Available in Hive 2.2.0 and later)
| STRING
| BINARY -- (Note: Available in Hive 0.8.0 and later)
| TIMESTAMP -- (Note: Available in Hive 0.8.0 and later)
| DECIMAL -- (Note: Available in Hive 0.11.0 and later)
| DECIMAL(precision, scale) -- (Note: Available in Hive 0.13.0 and later)
| DATE -- (Note: Available in Hive 0.12.0 and later)
| VARCHAR -- (Note: Available in Hive 0.12.0 and later)
| CHAR -- (Note: Available in Hive 0.13.0 and later)
array_type
: ARRAY < data_type >
map_type
: MAP < primitive_type, data_type >
1.2.1.2 Row Format, Storage Format, and SerDe
You can use a custom SerDe or one of Hive's built-in SerDes. If no ROW FORMAT is specified, or ROW FORMAT DELIMITED is specified, a built-in SerDe is used.
The 'hive.default.fileformat' property sets the default storage format.
Reference: [InputFormat, OutputFormat, and SerDe in Hive](https://www.coder4.com/archives/4031)
1.2.1.3 Partitioned Table
Partitioned tables are created with PARTITIONED BY. One or more partition columns may be specified; CLUSTERED BY buckets the data by the given columns, and SORTED BY orders the data within each bucket.
-- Example
create table table_name (
  id                int,
  dtDontQuery       string,
  name              string
)
partitioned by (date string)
1.2.1.4 External Tables
LOCATION specifies where the data resides. When an external table is dropped, the data is not deleted.
CREATE EXTERNAL TABLE page_view(viewTime INT, userid BIGINT,
     page_url STRING, referrer_url STRING,
     ip STRING COMMENT 'IP Address of the User',
     country STRING COMMENT 'country of origination')
 COMMENT 'This is the staging page view table'
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054'
 STORED AS TEXTFILE
 LOCATION '<hdfs_location>';
1.2.1.5 Create Table As Select (CTAS)
Restrictions: 1. the target table cannot be a partitioned table; 2. the target table cannot be an external table; 3. the target table cannot be a list-bucketing table.
Example
CREATE TABLE new_key_value_store
   ROW FORMAT SERDE "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"
   STORED AS RCFile
   AS
SELECT (key % 1024) new_key, concat(key, value) key_value_pair
FROM key_value_store
SORT BY new_key, key_value_pair;
1.2.1.6 Create Table Like
Copies the definition of an existing table to create a new table; no data is copied.
CREATE TABLE empty_key_value_store LIKE key_value_store;
1.2.1.7 Bucketed Sorted Tables
For each table or partition, Hive can further organize the data into buckets; that is, buckets are a finer-grained division of the data range. Use the CLUSTERED BY clause to specify the bucketing column(s) and the number of buckets.
CREATE TABLE page_view(viewTime INT, userid BIGINT,
page_url STRING, referrer_url STRING,
ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
COLLECTION ITEMS TERMINATED BY '\002'
MAP KEYS TERMINATED BY '\003'
STORED AS SEQUENCEFILE;
Reference: [Hive basics: partitions, buckets, and Sort Merge Bucket Join](http://blog.csdn.net/wisgood/article/details/17186107)
1.2.1.8 Skewed Tables
This feature optimizes tables in which one or more columns have heavily skewed values.
CREATE TABLE list_bucket_single (key STRING, value STRING)
  SKEWED BY (key) ON (1,5,6) [STORED AS DIRECTORIES];

CREATE TABLE list_bucket_multiple (col1 STRING, col2 int, col3 STRING)
  SKEWED BY (col1, col2) ON (('s1',1), ('s3',3), ('s13',13), ('s78',78)) [STORED AS DIRECTORIES];
Reference: [Hive data skew tuning notes](http://www.cnblogs.com/end/archive/2012/06/19/2554582.html)
1.2.1.9 Temporary Tables
1. Partitions are not supported; 2. Indexes are not supported.
Example
CREATE TEMPORARY TABLE temp_table (key STRING, value STRING);
1.2.2 Drop Table
DROP TABLE [IF EXISTS] table_name [PURGE];
Note: with PURGE, the data is deleted immediately instead of being moved to the trash.
1.2.3 Truncate Table
TRUNCATE TABLE table_name [PARTITION partition_spec];
partition_spec:
: (partition_column = partition_col_value, partition_column = partition_col_value, ...)
Note: with PARTITION, only that partition's data is deleted; otherwise all data in the table is deleted.
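A short sketch of both forms (the partitioned table `logs` and its `dt` column are hypothetical):

```sql
-- Remove only the data of one partition; the partition definition and table remain.
TRUNCATE TABLE logs PARTITION (dt='2016-01-01');
-- Remove all data from every partition of the table.
TRUNCATE TABLE logs;
```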
1.3 Alter Table/Partition/Column
1.3.1 Alter Table
-- Rename a table
ALTER TABLE table_name RENAME TO new_table_name;

-- Change table properties
ALTER TABLE table_name SET TBLPROPERTIES table_properties;

table_properties:
  : (property_name = property_value, property_name = property_value, ... )

-- Change the table comment
ALTER TABLE table_name SET TBLPROPERTIES ('comment' = new_comment);

-- Add SerDe properties
ALTER TABLE table_name [PARTITION partition_spec] SET SERDE serde_class_name [WITH SERDEPROPERTIES serde_properties];
ALTER TABLE table_name [PARTITION partition_spec] SET SERDEPROPERTIES serde_properties;

serde_properties:
  : (property_name = property_value, property_name = property_value, ... )

-- Example
ALTER TABLE table_name SET SERDEPROPERTIES ('field.delim' = ',');

-- Change storage properties
ALTER TABLE table_name CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name, ...)] INTO num_buckets BUCKETS;

-- Alter Table Skewed
ALTER TABLE table_name SKEWED BY (col_name1, col_name2, ...)
  ON ((col_name1_value, col_name2_value, ...) [, (col_name1_value, col_name2_value), ...])
  [STORED AS DIRECTORIES];

-- Alter Table Not Skewed
ALTER TABLE table_name NOT SKEWED;

-- Alter Table Not Stored as Directories
ALTER TABLE table_name NOT STORED AS DIRECTORIES;

-- Alter Table Set Skewed Location
ALTER TABLE table_name SET SKEWED LOCATION (col_name1="location1" [, col_name2="location2", ...]);
1.3.2 Alter Partition
Add partitions
ALTER TABLE table_name ADD [IF NOT EXISTS] PARTITION partition_spec
  [LOCATION 'location1'] partition_spec [LOCATION 'location2'] ...;

partition_spec:
  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)

-- Example
ALTER TABLE page_view ADD PARTITION (dt='2008-08-08', country='us') location '/path/to/us/part080808'
                          PARTITION (dt='2008-08-09', country='us') location '/path/to/us/part080809';
Dynamic partitions: see section 5.2.3 (dynamic partition inserts).
Rename a partition
ALTER TABLE table_name PARTITION partition_spec RENAME TO PARTITION partition_spec;
Exchange partitions
Partitions can be moved between tables:
ALTER TABLE table_name_1 EXCHANGE PARTITION (partition_spec) WITH TABLE table_name_2;
-- multiple partitions
ALTER TABLE table_name_1 EXCHANGE PARTITION (partition_spec, partition_spec2, ...) WITH TABLE table_name_2;
Recover partitions (MSCK REPAIR TABLE)
Partitions added directly via HDFS commands are not recorded in the Hive metastore; the following command repairs the metastore automatically.
MSCK REPAIR TABLE table_name;
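A typical repair workflow might look like this (the table name and path below are hypothetical):

```sql
-- A partition directory was created directly on HDFS, e.g. with:
--   hadoop fs -mkdir -p /user/hive/warehouse/mydb.db/sales/dt=2016-01-01
-- The metastore does not know about it yet, so register it:
MSCK REPAIR TABLE sales;
-- The new partition should now be listed:
SHOW PARTITIONS sales;
```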
Drop partitions
ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...] [IGNORE PROTECTION] [PURGE]; -- (Note: PURGE available in Hive 1.2.0 and later, IGNORE PROTECTION not available 2.0.0 and later)
(Un)Archive Partition
ALTER TABLE table_name ARCHIVE PARTITION partition_spec;
ALTER TABLE table_name UNARCHIVE PARTITION partition_spec;
1.3.3 Alter Either Table or Partition
Change the file format of a table or partition
ALTER TABLE table_name [PARTITION partition_spec] SET FILEFORMAT file_format;
Change the storage location of a table or partition
ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION "new location";
Alter Table/Partition Touch
ALTER TABLE table_name TOUCH [PARTITION partition_spec];
Alter Table/Partition Protections
ALTER TABLE table_name [PARTITION partition_spec] ENABLE|DISABLE NO_DROP [CASCADE];
ALTER TABLE table_name [PARTITION partition_spec] ENABLE|DISABLE OFFLINE;
Alter Table/Partition Compact
ALTER TABLE table_name [PARTITION (partition_key = 'partition_value' [, ...])]
  COMPACT 'compaction_type'
  [WITH OVERWRITE TBLPROPERTIES ("property"="value" [, ...])];
Alter Table/Partition Concatenate
If a table or partition contains many small RCFile or ORC files, the following command merges them into larger files.
ALTER TABLE table_name [PARTITION (partition_key = 'partition_value' [, ...])] CONCATENATE;
1.3.4 Alter Column
Change a column's name, type, position, or comment
ALTER TABLE table_name [PARTITION partition_spec]
  CHANGE [COLUMN] col_old_name col_new_name column_type
  [COMMENT col_comment] [FIRST|AFTER column_name] [CASCADE|RESTRICT];
-- RESTRICT is the default; CASCADE also updates the metadata of all partitions, not just the table.
Add or replace columns
ALTER TABLE table_name
  [PARTITION partition_spec]   -- (Note: Hive 0.14.0 and later)
  ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
  [CASCADE|RESTRICT]           -- (Note: Hive 0.15.0 and later)
-- REPLACE removes all existing columns and adds the new ones.
Partial Partition Specification
-- The following, with dynamic partitioning enabled,
SET hive.exec.dynamic.partition = true;
ALTER TABLE foo PARTITION (ds='2008-04-08', hr) CHANGE COLUMN dec_column_name dec_column_name DECIMAL(38,18);
-- is equivalent to
ALTER TABLE foo PARTITION (ds='2008-04-08', hr=11) CHANGE COLUMN dec_column_name dec_column_name DECIMAL(38,18);
ALTER TABLE foo PARTITION (ds='2008-04-08', hr=12) CHANGE COLUMN dec_column_name dec_column_name DECIMAL(38,18);
...
Supported operations: change column, add column, replace column, file format, SerDe properties.
1.4 Create/Drop/Alter View
1.4.1 Create View
Syntax:
CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name [COMMENT column_comment], ...) ]
  [COMMENT view_comment]
  [TBLPROPERTIES (property_name = property_value, ...)]
  AS SELECT ...;
Example:
CREATE VIEW onion_referrers(url COMMENT 'URL of Referring page')
  COMMENT 'Referrers to The Onion website'
  AS
  SELECT DISTINCT referrer_url
  FROM page_view
  WHERE page_url='http://www.theonion.com';
1.4.2 Drop View
DROP VIEW [IF EXISTS] [db_name.]view_name;
1.4.3 Alter View Properties
ALTER VIEW [db_name.]view_name SET TBLPROPERTIES table_properties;
table_properties:
: (property_name = property_value, property_name = property_value, ...)
1.4.4 Alter View As Select
ALTER VIEW [db_name.]view_name AS select_statement;
1.5 Create/Drop/Alter Index
1.5.1 Create Index
CREATE INDEX index_name
  ON TABLE base_table_name (col_name, ...)
  AS index_type
  [WITH DEFERRED REBUILD]
  [IDXPROPERTIES (property_name=property_value, ...)]
  [IN TABLE index_table_name]
  [
     [ ROW FORMAT ...] STORED AS ...
     | STORED BY ...
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (...)]
  [COMMENT "index comment"];
Reference: CREATE INDEX
1.5.2 Drop Index
DROP INDEX [IF EXISTS] index_name ON table_name;
1.5.3 Alter Index
ALTER INDEX index_name ON table_name [PARTITION partition_spec] REBUILD;
1.6 Create/Drop Macro
1.6.1 Create Temporary Macro
Only valid in the current session.
CREATE TEMPORARY MACRO macro_name([col_name col_type, ...]) expression;

-- Examples
CREATE TEMPORARY MACRO fixed_number() 42;
CREATE TEMPORARY MACRO string_len_plus_two(x string) length(x) + 2;
CREATE TEMPORARY MACRO simple_add (x int, y int) x + y;
1.6.2 Drop Temporary Macro
DROP TEMPORARY MACRO [IF EXISTS] macro_name;
1.7 Create/Drop/Reload Function
1.7.1 Temporary Functions
-- Create
CREATE TEMPORARY FUNCTION function_name AS class_name;
-- Drop
DROP TEMPORARY FUNCTION [IF EXISTS] function_name;
1.7.2 Permanent Functions
-- Create
CREATE FUNCTION [db_name.]function_name AS class_name
  [USING JAR|FILE|ARCHIVE 'file_uri' [, JAR|FILE|ARCHIVE 'file_uri'] ];
-- Drop
DROP FUNCTION [IF EXISTS] function_name;
-- Make functions created in other sessions visible
RELOAD FUNCTION;
1.8 Create/Drop/Grant/Revoke Roles and Privileges
Reference: Create/Drop/Grant/Revoke Roles and Privileges
1.9 Show
1.9.1 Show Databases
SHOW (DATABASES|SCHEMAS) [LIKE 'identifier_with_wildcards'];
Note: the LIKE clause filters with a pattern, where '*' matches any sequence of characters and '|' means OR, e.g. 'emp*|*ees'.
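For example (the database names matched below are hypothetical):

```sql
-- Matches employees_db, empires, ... (any database starting with 'emp')
SHOW DATABASES LIKE 'emp*';
-- Matches databases starting with 'emp' OR ending in 'ees'
SHOW DATABASES LIKE 'emp*|*ees';
```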
1.9.2 Show Tables/Partitions/Indexes
SHOW TABLES [IN database_name] ['identifier_with_wildcards'];

SHOW PARTITIONS table_name;
SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)];

SHOW TABLE EXTENDED [IN|FROM database_name] LIKE 'identifier_with_wildcards' [PARTITION(partition_spec)];

-- Show table properties
SHOW TBLPROPERTIES table_name;
SHOW TBLPROPERTIES table_name("foo");

-- Show the CREATE TABLE statement
SHOW CREATE TABLE ([db_name.]table_name|view_name);

-- Show indexes
SHOW [FORMATTED] (INDEX|INDEXES) ON table_with_index [(FROM|IN) db_name];
1.9.3 Show Columns
SHOW COLUMNS (FROM|IN) table_name [(FROM|IN) db_name];
1.9.4 Show Functions
-- Show functions beginning with 'a'
SHOW FUNCTIONS "a.*";
-- Show all functions
SHOW FUNCTIONS ".*";
1.9.5 Show Granted Roles and Privileges
Reference: Show Granted Roles and Privileges
1.9.6 Show Locks
SHOW LOCKS <table_name>;
SHOW LOCKS <table_name> EXTENDED;
SHOW LOCKS <table_name> PARTITION (<partition_spec>);
SHOW LOCKS <table_name> PARTITION (<partition_spec>) EXTENDED;
SHOW LOCKS (DATABASE|SCHEMA) database_name; -- (Note: Hive 0.13.0 and later; SCHEMA added in Hive 0.14.0)
Reference: [HIVE-6460](https://issues.apache.org/jira/browse/HIVE-6460)
1.9.7 Show Conf
SHOW CONF <configuration_name>;
-- Returns
default value    required value    description
Reference: [Hive Configuration Properties](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties)
1.9.8 Show Transactions
SHOW TRANSACTIONS;
-- Returns:
--   transaction ID
--   transaction state
--   user who started the transaction
--   machine where the transaction was started
1.9.9 Show Compactions
SHOW COMPACTIONS;
1.10 Describe
1.10.1 Describe Database
DESCRIBE DATABASE [EXTENDED] db_name;
DESCRIBE SCHEMA [EXTENDED] db_name;     -- (Note: Hive 0.15.0 and later)

-- Example
hive> desc database w1;
OK
w1      hdfs://hsm01:9000/hive/warehouse/w1.db  zkpk    USER
Time taken: 0.032 seconds, Fetched: 1 row(s)
Note: EXTENDED also shows the database properties.
1.10.2 Describe Table/View/Column
-- Hive 1.x.x and 0.x.x only.
DESCRIBE [EXTENDED|FORMATTED]
table_name[.col_name ( [.field_name] | [.'$elem$'] | [.'$key$'] | [.'$value$'] )* ];
DESCRIBE [EXTENDED|FORMATTED]
[db_name.]table_name[ col_name ( [.field_name] | [.'$elem$'] | [.'$key$'] | [.'$value$'] )* ];
-- Examples
hive> desc province.province;
OK
province string from deserializer
hive> desc w1.province province;
OK
province string from deserializer
1.10.3 Show Column Statistics
-- (Note: Hive 0.14.0 and later)
DESCRIBE FORMATTED [db_name.]table_name column_name;
-- (Note: Hive 0.14.0 to 1.x.x)
DESCRIBE FORMATTED [db_name.]table_name column_name PARTITION (partition_spec);
1.10.4 Describe Partition
-- (Hive 1.x.x and 0.x.x only)
DESCRIBE [EXTENDED|FORMATTED] table_name[.column_name] PARTITION partition_spec;
DESCRIBE [EXTENDED|FORMATTED] [db_name.]table_name [column_name] PARTITION partition_spec;
1.10.5 Hive 2.0+
DESCRIBE [EXTENDED | FORMATTED]
  [db_name.]table_name [PARTITION partition_spec] [col_name ( [.field_name] | [.'$elem$'] | [.'$key$'] | [.'$value$'] )* ];
1.11 Abort
Aborts the specified transactions.
ABORT TRANSACTIONS transactionID [, ...];
2 Statistics
Reference: StatsDev
ANALYZE
ANALYZE TABLE [db_name.]tablename [PARTITION(partcol1[=val1], partcol2[=val2], ...)]  -- (Note: Fully support qualified table name since Hive 1.2.0, see HIVE-10007.)
  COMPUTE STATISTICS
  [FOR COLUMNS]        -- (Note: Hive 0.10.0 and later.)
  [CACHE METADATA]     -- (Note: Hive 2.1.0 and later.)
  [NOSCAN];

-- Examples
ANALYZE TABLE Table1 COMPUTE STATISTICS;
ANALYZE TABLE Table1 PARTITION(ds='2008-04-09', hr) COMPUTE STATISTICS NOSCAN;

-- View the statistics
DESCRIBE EXTENDED TABLE1;
DESCRIBE EXTENDED TABLE1 PARTITION(ds='2008-04-09', hr=11);
3 Indexes
Examples:
Create/build, show, and drop index:
CREATE INDEX table01_index ON TABLE table01 (column2) AS 'COMPACT';
SHOW INDEX ON table01;
DROP INDEX table01_index ON table01;
Create then build, show formatted (with column names), and drop index:
CREATE INDEX table02_index ON TABLE table02 (column3) AS 'COMPACT' WITH DEFERRED REBUILD;
ALTER INDEX table02_index ON table02 REBUILD;
SHOW FORMATTED INDEX ON table02;
DROP INDEX table02_index ON table02;
Create bitmap index, build, show, and drop:
CREATE INDEX table03_index ON TABLE table03 (column4) AS 'BITMAP' WITH DEFERRED REBUILD;
ALTER INDEX table03_index ON table03 REBUILD;
SHOW FORMATTED INDEX ON table03;
DROP INDEX table03_index ON table03;
Create index in a new table:
CREATE INDEX table04_index ON TABLE table04 (column5) AS 'COMPACT' WITH DEFERRED REBUILD IN TABLE table04_index_table;
Create index stored as RCFile:
CREATE INDEX table05_index ON TABLE table05 (column6) AS 'COMPACT' STORED AS RCFILE;
Create index stored as text file:
CREATE INDEX table06_index ON TABLE table06 (column7) AS 'COMPACT' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE;
Create index with index properties:
CREATE INDEX table07_index ON TABLE table07 (column8) AS 'COMPACT' IDXPROPERTIES ("prop1"="value1", "prop2"="value2");
Create index with table properties:
CREATE INDEX table08_index ON TABLE table08 (column9) AS 'COMPACT' TBLPROPERTIES ("prop3"="value3", "prop4"="value4");
Drop index if exists:
DROP INDEX IF EXISTS table09_index ON table09;
Rebuild index on a partition:
ALTER INDEX table10_index ON table10 PARTITION (columnX='valueQ', columnY='valueR') REBUILD;
4 Archiving
Hadoop Archives can bundle many files into a single archive file while keeping each file transparently accessible, which reduces the number of files per partition.
Enabling archiving requires three settings:
hive> set hive.archive.enabled=true;
hive> set hive.archive.har.parentdir.settable=true;
hive> set har.partfile.size=1099511627776;
Syntax:
ALTER TABLE table_name ARCHIVE|UNARCHIVE PARTITION (partition_col = partition_col_value, partition_col = partition_col_value, ...)
Examples:
-- Archive
ALTER TABLE srcpart ARCHIVE PARTITION(ds='2008-04-08', hr='12')
-- Unarchive
ALTER TABLE srcpart UNARCHIVE PARTITION(ds='2008-04-08', hr='12')
5 DML
5.1 Loading Files into Tables
5.1.1 Syntax
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
5.1.2 Examples
-- Load local data into the table, appending to the existing data
LOAD DATA LOCAL INPATH '/tmp/pv_2013-06-08_us.txt'
  INTO TABLE c02 PARTITION(date='2013-06-08', country='US')
-- Load data from HDFS, overwriting the existing partition data
LOAD DATA INPATH '/user/admin/SqlldrDat/CnClickstat/20131101/18/clickstat_gp_fatdt0/0'
  OVERWRITE INTO TABLE c02_clickstat_fatdt1 PARTITION (dt='20131201');
5.2 Inserting Data into Hive Tables from Queries
5.2.1 Standard Insert
Syntax:
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1 FROM from_statement;
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement;
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] (z,y) select_statement1 FROM from_statement;
Example:
FROM B INSERT INTO TABLE A SELECT 1, 'abc' LIMIT 1;
5.2.2 Hive Extension (multiple inserts)
Syntax:
FROM from_statement
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2]
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2] ...;

FROM from_statement
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2]
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2] ...;
5.2.3 Hive Extension (dynamic partition inserts)
5.2.3.1 Syntax
INSERT OVERWRITE TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
5.2.3.2 Notes
Disabled by default. Configuration properties for dynamic partition inserts:
Property | Default | Description |
---|---|---|
hive.exec.dynamic.partition | false | Whether dynamic partition inserts are enabled |
hive.exec.dynamic.partition.mode | strict | In strict mode, at least one static partition must be specified in the clause |
hive.exec.max.dynamic.partitions.pernode | 100 | Maximum number of dynamic partitions each mapper/reducer node may create |
hive.exec.max.dynamic.partitions | 1000 | Maximum total number of dynamic partitions that may be created |
hive.exec.max.created.files | 100000 | Maximum number of HDFS files all mappers/reducers in a MapReduce job may create |
hive.error.on.empty.partition | false | Whether to throw an exception if a dynamic partition insert produces an empty result set |
5.2.3.3 Example
FROM page_view_stg pvs
INSERT OVERWRITE TABLE page_view PARTITION(dt='2008-06-08', country)
  SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip, pvs.cnt
5.3 Writing Data into the Filesystem
5.3.1 Syntax
-- Standard syntax
INSERT OVERWRITE [LOCAL] DIRECTORY directory1
  [ROW FORMAT row_format]
  [STORED AS file_format]  -- (Note: Only available starting with Hive 0.11.0)
SELECT ... FROM ...

-- Hive extension (multiple inserts):
FROM from_statement
INSERT OVERWRITE [LOCAL] DIRECTORY directory1 select_statement1
[INSERT OVERWRITE [LOCAL] DIRECTORY directory2 select_statement2] ...

row_format
  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]]
    [COLLECTION ITEMS TERMINATED BY char]
    [MAP KEYS TERMINATED BY char]
    [LINES TERMINATED BY char]
    [NULL DEFINED AS char]  -- (Note: Only available starting with Hive 0.13)
5.3.2 Examples
-- Export to a local directory:
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/local_out' SELECT a.* FROM pokes a;

-- Export to HDFS:
INSERT OVERWRITE DIRECTORY '/user/admin/SqlldrDat/CnClickstat/20131101/19/clickstat_gp_fatdt0/0'
  SELECT a.* FROM c02_clickstat_fatdt1 a WHERE dt='20131201';

-- One source can feed multiple target tables or files; a multi-target insert does it in one statement:
FROM src
  INSERT OVERWRITE TABLE dest1 SELECT src.* WHERE src.key < 100
  INSERT OVERWRITE TABLE dest2 SELECT src.key, src.value WHERE src.key >= 100 and src.key < 200
  INSERT OVERWRITE TABLE dest3 PARTITION(ds='2013-04-08', hr='12') SELECT src.key WHERE src.key >= 200 and src.key < 300
  INSERT OVERWRITE LOCAL DIRECTORY '/tmp/dest4.out' SELECT src.value WHERE src.key >= 300;
5.4 Inserting Values into Tables
5.4.1 Syntax
INSERT INTO TABLE tablename [PARTITION (partcol1[=val1], partcol2[=val2] ...)]
  VALUES values_row [, values_row ...]

Where values_row is:
  ( value [, value ...] )
where a value is either null or any valid SQL literal
5.4.2 Example
INSERT INTO TABLE students VALUES ('fred flintstone', 35, 1.28), ('barney rubble', 32, 2.32);
5.5 Update
Syntax:
UPDATE tablename SET column = value [, column = value ...] [WHERE expression]
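A minimal sketch (the table `students` and its columns are hypothetical; note that UPDATE only works on transactional/ACID tables, which in Hive require ORC storage and `transactional=true`):

```sql
-- Hypothetical example: UPDATE requires an ACID (transactional) table.
UPDATE students
SET age = 36
WHERE name = 'fred flintstone';
```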
5.6 Delete
DELETE FROM tablename [WHERE expression]
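A minimal sketch (same hypothetical table and ACID requirements as UPDATE above):

```sql
-- Hypothetical example: DELETE also requires an ACID (transactional) table.
DELETE FROM students
WHERE age > 100;
```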
6 Import/Export
6.1 Syntax
-- EXPORT
EXPORT TABLE tablename [PARTITION (part_column="value"[, ...])]
  TO 'export_target_path' [ FOR replication('eventid') ]

-- IMPORT
IMPORT [[EXTERNAL] TABLE new_or_original_tablename [PARTITION (part_column="value"[, ...])]]
  FROM 'source_path'
  [LOCATION 'import_target_path']
6.2 Examples
Simple export and import:
export table department to 'hdfs_exports_location/department';
import from 'hdfs_exports_location/department';
Rename a table on import:
export table department to 'hdfs_exports_location/department';
import table imported_dept from 'hdfs_exports_location/department';
Export and import a partition:
export table employee partition (emp_country="in", emp_state="ka") to 'hdfs_exports_location/employee';
import from 'hdfs_exports_location/employee';
Export a table, import into a partition:
export table employee to 'hdfs_exports_location/employee';
import table employee partition (emp_country="us", emp_state="tn") from 'hdfs_exports_location/employee';
Specify the import location:
export table department to 'hdfs_exports_location/department';
import table department from 'hdfs_exports_location/department'
location 'import_target_location/department';
Import as an external table:
export table department to 'hdfs_exports_location/department';
import external table department from 'hdfs_exports_location/department';
7 Explain Plan
7.1 Syntax
EXPLAIN [EXTENDED|DEPENDENCY|AUTHORIZATION] query
Example:
hive> explain select * from student;
OK
STAGE DEPENDENCIES:
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-0
    Fetch Operator
      limit: -1
      Processor Tree:
        TableScan
          alias: student
          Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: NONE
          Select Operator
            expressions: classno (type: string), stuno (type: string), score (type: float)
            outputColumnNames: _col0, _col1, _col2
            Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: NONE
            ListSink
Time taken: 0.106 seconds, Fetched: 17 row(s)
7.2 DEPENDENCY
Additional information about the query's inputs is shown.
7.3 AUTHORIZATION
Authorization-related information is shown.
With hive.explain.user=true (default is false), users see more execution information.
8 Select
8.1 Syntax
[WITH CommonTableExpression (, CommonTableExpression)*] (Note: Only available starting with Hive 0.13.0)
SELECT [ALL | DISTINCT] select_expr, select_expr, ...
FROM table_reference
[WHERE where_condition]
[GROUP BY col_list]
[ORDER BY col_list]
[CLUSTER BY col_list
  | [DISTRIBUTE BY col_list] [SORT BY col_list]
]
[LIMIT number]
8.2 REGEX Columns
When hive.support.quoted.identifiers (default: column) is set to none, regular expressions can be used for column names; the regex rules are the same as Java's.
-- Return all columns except ds and hr
SELECT `(ds|hr)?+.+` FROM sales
8.3 Group by
8.3.1 Syntax
groupByClause: GROUP BY groupByExpression (, groupByExpression)*
groupByExpression: expression
groupByQuery: SELECT expression (, expression)* FROM src groupByClause?
Example (Multi-Group-By Inserts)
FROM pv_users
  INSERT OVERWRITE TABLE pv_gender_sum
    SELECT pv_users.gender, count(DISTINCT pv_users.userid)
    GROUP BY pv_users.gender
  INSERT OVERWRITE DIRECTORY '/user/facebook/tmp/pv_age_sum'
    SELECT pv_users.age, count(DISTINCT pv_users.userid)
    GROUP BY pv_users.age;
8.3.2 Enhanced Aggregation
8.3.2.1 GROUPING SETS
GROUPING SETS is equivalent to several GROUP BY queries connected by UNION.
Example
GROUPING SETS example | Equivalent GROUP BY |
---|---|
SELECT a, b, SUM(c) FROM tab1 GROUP BY a, b GROUPING SETS ((a, b), a, b, ()) | SELECT a, b, SUM(c) FROM tab1 GROUP BY a, b UNION SELECT a, null, SUM(c) FROM tab1 GROUP BY a, null UNION SELECT null, b, SUM(c) FROM tab1 GROUP BY null, b UNION SELECT null, null, SUM(c) FROM tab1 |
8.3.2.2 The GROUPING__ID Function
Indicates which grouping set a result row belongs to.
SELECT month, day, COUNT(DISTINCT cookieid) AS uv, GROUPING__ID
FROM lxw1234
GROUP BY month, day
GROUPING SETS (month, day, (month, day))
ORDER BY GROUPING__ID;

month    day         uv  GROUPING__ID
------------------------------------------------
2015-03  NULL        5   1
2015-04  NULL        6   1
NULL     2015-03-10  4   2
NULL     2015-03-12  1   2
NULL     2015-04-12  2   2
NULL     2015-04-13  3   2
NULL     2015-04-15  2   2
NULL     2015-04-16  2   2
2015-03  2015-03-10  4   3
2015-03  2015-03-12  1   3
2015-04  2015-04-12  2   3
2015-04  2015-04-13  3   3
2015-04  2015-04-15  2   3
2015-04  2015-04-16  2   3
8.3.2.3 CUBE
Performs GROUP BY over all possible combinations of the given columns.
GROUP BY a, b, c WITH CUBE
-- is equivalent to
GROUP BY a, b, c GROUPING SETS ( (a, b, c), (a, b), (b, c), (a, c), (a), (b), (c), ())
8.3.2.4 ROLLUP
A special case of GROUPING SETS, used for hierarchical aggregation along one dimension.
GROUP BY a, b, c WITH ROLLUP
-- is equivalent to
GROUP BY a, b, c GROUPING SETS ( (a, b, c), (a, b), (a), ())
8.3.2.5 hive.new.job.grouping.set.cardinality
When the amount of data per grouping set exceeds this value, an additional map-reduce job is added to reduce the data volume on the map side.
8.4 Order, Sort, Cluster, Distribute By, and Transform
8.4.1 Order By
Syntax:
colOrder: ( ASC | DESC )
colNullOrder: (NULLS FIRST | NULLS LAST)  -- (Note: Available in Hive 2.1.0 and later)
orderBy: ORDER BY colName colOrder? colNullOrder? (',' colName colOrder? colNullOrder?)*
query: SELECT expression (',' expression)* FROM src orderBy
Note: when hive.mapred.mode=strict, ORDER BY must be followed by a LIMIT clause. The reason is that the final ORDER BY can only be done by a single reducer; with a large data set and no LIMIT, it would take a very long time.
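For example, a query that passes the strict-mode check (using the `student` table from the EXPLAIN example; the column choice is illustrative):

```sql
-- In strict mode, ORDER BY must be paired with LIMIT:
SELECT classNo, stuNo, score
FROM student
ORDER BY score DESC
LIMIT 10;
```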
8.4.2 Sort By
Syntax:
colOrder: ( ASC | DESC )
sortBy: SORT BY colName colOrder? (',' colName colOrder?)*
query: SELECT expression (',' expression)* FROM src sortBy
Note: rows are sorted by the given columns before being fed to each reducer; with multiple reducers, each reducer's input is sorted independently.
8.4.3 Distribute By
Hive uses Distribute By on columns to spread rows across reducers: all rows with the same Distribute By column values go to the same reducer.
8.4.4 Cluster By
Cluster By combines Distribute By and Sort By; however, the sort order is fixed and cannot be specified as ASC or DESC.
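The equivalence can be sketched as follows (reusing the `page_view` table from earlier sections; the column choice is illustrative):

```sql
-- The two queries below distribute and sort rows the same way:
SELECT * FROM page_view CLUSTER BY userid;
SELECT * FROM page_view DISTRIBUTE BY userid SORT BY userid;
```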
8.4.5 Transform
Invokes a user-defined Map/Reduce script from Hive.
Given a table student(classNo, stuNo, score):
#!/usr/bin/env python
import sys
for line in sys.stdin:
    (classNo, stuNo, score) = line.strip().split('\t')
    if int(score) >= 60:
        print("%s\t%s\t%s" % (classNo, stuNo, score))
add file /home/user/score_pass.py;
select transform(classNo, stuNo, score) using 'score_pass.py' as classNo, stuNo, score from student;
8.4.6 References
- [Hive's Transform feature](http://www.tuicool.com/articles/fuimmmQ)
- [Hive queries](http://blog.csdn.net/zythy/article/details/18814781)
8.5 Join
8.5.1 Syntax
join_table:
    table_reference JOIN table_factor [join_condition]
  | table_reference {LEFT|RIGHT|FULL} [OUTER] JOIN table_reference join_condition
  | table_reference LEFT SEMI JOIN table_reference join_condition
  | table_reference CROSS JOIN table_reference [join_condition] (as of Hive 0.10)

table_reference:
    table_factor
  | join_table

table_factor:
    tbl_name [alias]
  | table_subquery alias
  | ( table_references )

join_condition:
    ON equality_expression ( AND equality_expression )*

equality_expression:
    expression = expression
Note: Hive does not support non-equi joins; every join condition must be an equality.
8.5.2 Examples
If every join in the query uses the same join key, the joins are compiled into a single map/reduce job:
SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key1)
This join is compiled into two map/reduce jobs, because b.key1 is used in the first join condition and b.key2 in the second:
SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key2)
Note: the reducer buffers the rows of every table in the join sequence except the last one, then streams the last table through to serialize the result to the file system. This reduces memory usage on the reduce side, so in practice the largest table should go last.
The streamed table can be changed with a hint:
SELECT /*+ STREAMTABLE(a) */ a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key1)
8.5.3 LEFT, RIGHT, and FULL OUTER
SELECT a.val, b.val FROM a LEFT OUTER JOIN b ON (a.key=b.key)
LEFT keeps every row of a; RIGHT keeps every row of b; FULL keeps every row of both a and b.
8.5.4 LEFT SEMI JOIN
A more efficient implementation of an IN/EXISTS subquery. The table on the right side of the JOIN may only be filtered in the ON clause.
SELECT a.key, a.value FROM a WHERE a.key IN (SELECT b.key FROM b);
can be rewritten as:
SELECT a.key, a.val FROM a LEFT SEMI JOIN b ON (a.key = b.key)
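The LEFT SEMI JOIN semantics can be sketched in plain Python. This is a simplified model of the operator's behavior, not Hive's execution plan; `left_semi_join` and the sample rows are illustrative assumptions:

```python
# LEFT SEMI JOIN returns each left row at most once when a match exists on
# the right -- equivalent to IN/EXISTS, unlike an inner join, which emits
# one output row per matching right row.
def left_semi_join(left, right, key):
    right_keys = {r[key] for r in right}   # only right keys are needed
    return [l for l in left if l[key] in right_keys]

a = [{"key": 1, "val": "x"}, {"key": 2, "val": "y"}]
b = [{"key": 1}, {"key": 1}, {"key": 3}]
print(left_semi_join(a, b, "key"))  # [{'key': 1, 'val': 'x'}] -- no duplicates
```

Note how the duplicate `key = 1` row on the right does not duplicate the left row, which is the semi-join's defining property.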
8.5.5 MapJoin
When hive.auto.convert.join is set to true, joins are automatically converted to map joins at execution time where possible.
Bucket mapjoin
set hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
set hive.optimize.bucketmapjoin = true;
set hive.optimize.bucketmapjoin.sortedmerge = true;
8.5.6 Optimization
8.6 UNION
8.6.1 Syntax
select_statement UNION [ALL | DISTINCT] select_statement UNION [ALL | DISTINCT] select_statement ...
Notes:
1. Before Hive 1.2.0, only UNION ALL is supported;
2. when UNION is mixed with UNION ALL, the DISTINCT union overrides any UNION ALL to its left;
3. the column aliases and types of every SELECT must match.
8.6.2 Examples
Using UNION inside a FROM subquery:
SELECT u.id, actions.date
FROM (
    SELECT av.uid AS uid
    FROM action_video av
    WHERE av.date = '2008-06-03'
    UNION ALL
    SELECT ac.uid AS uid
    FROM action_comment ac
    WHERE ac.date = '2008-06-03'
) actions JOIN users u ON (u.id = actions.uid)
UNION can also be used in DDL and INSERT statements.
To apply ORDER BY, SORT BY, CLUSTER BY, DISTRIBUTE BY, or LIMIT to an individual SELECT, wrap that SELECT in parentheses:
SELECT key FROM (SELECT key FROM src ORDER BY key LIMIT 10) subq1
UNION
SELECT key FROM (SELECT key FROM src1 ORDER BY key LIMIT 10) subq2
To apply ORDER BY, SORT BY, CLUSTER BY, DISTRIBUTE BY, or LIMIT to the whole UNION result, put the clause after the last SELECT:
SELECT key FROM src
UNION
SELECT key FROM src1
ORDER BY key LIMIT 10
8.7 TABLESAMPLE
Samples data from a Hive table according to a given rule.
8.7.1 Sampling bucketed tables
table_sample: TABLESAMPLE (BUCKET x OUT OF y [ON colname])

-- Randomly divide table lxw1 into 10 buckets and sample the first bucket:
SELECT COUNT(1) FROM lxw1 TABLESAMPLE (BUCKET 1 OUT OF 10 ON rand());
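The BUCKET x OUT OF y rule can be sketched as hash-based partitioning. This is a simplified model: Hive uses its own hash function over the bucketing column, not Python's `hash`; `table_sample` and the integer rows are illustrative assumptions:

```python
# Sketch of TABLESAMPLE (BUCKET x OUT OF y ON colname): every row lands in
# bucket hash(col) % y, and the sample keeps bucket x (1-based).
def table_sample(rows, x, y, col):
    return [r for r in rows if hash(r[col]) % y == x - 1]

rows = [{"id": i} for i in range(100)]
sample = table_sample(rows, 1, 10, "id")
# Roughly 1/y of the rows survive; the exact subset depends on the hash.
print(len(sample))
```

When the table is already bucketed on the sampled column, Hive can read just the matching bucket files instead of scanning everything; with `ON rand()` the bucketing is computed per row at query time.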
8.7.2 Block sampling
block_sample: TABLESAMPLE (n PERCENT)
SELECT * FROM source TABLESAMPLE(0.1 PERCENT) s;

block_sample: TABLESAMPLE (ByteLengthLiteral)
ByteLengthLiteral : (Digit)+ ('b' | 'B' | 'k' | 'K' | 'm' | 'M' | 'g' | 'G')
SELECT * FROM source TABLESAMPLE(100M) s;

block_sample: TABLESAMPLE (n ROWS)
SELECT * FROM source TABLESAMPLE(10 ROWS);
8.8 Subqueries
Subqueries may appear in the FROM and WHERE clauses.
8.9 Virtual columns
Virtual column | Description |
---|---|
INPUT__FILE__NAME | Input file name of the mapper task |
BLOCK__OFFSET__INSIDE__FILE | Current global file offset |
8.10 UDF
See chapter 9.
8.11 Lateral View
8.11.1 Syntax
lateralView: LATERAL VIEW udtf(expression) tableAlias AS columnAlias (',' columnAlias)*
fromClause: FROM baseTable (lateralView)*
A lateral view joins each input row to the rows produced by the UDTF, making it easy to use a UDTF's exploded (row-to-column) output together with the other columns of the row.
8.11.2 Example
Data:
Array col1 | Array col2 |
---|---|
[1, 2] | ["a", "b", "c"] |
[3, 4] | ["d", "e", "f"] |
SELECT myCol1, col2 FROM baseTable
LATERAL VIEW explode(col1) myTable1 AS myCol1;
int mycol1 | Array<string> col2
---------- | ------------------
1 | ["a", "b", "c"]
2 | ["a", "b", "c"]
3 | ["d", "e", "f"]
4 | ["d", "e", "f"]
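The explode-and-join behavior above can be sketched in Python. This is a simplified model of LATERAL VIEW explode, not Hive's implementation; `lateral_view_explode` and its arguments are illustrative assumptions:

```python
# LATERAL VIEW explode(col) emits one output row per array element,
# repeating the row's other columns alongside each element.
def lateral_view_explode(rows, array_col, alias):
    out = []
    for row in rows:
        for elem in row[array_col]:
            new_row = {k: v for k, v in row.items() if k != array_col}
            new_row[alias] = elem          # the exploded element
            out.append(new_row)
    return out

base = [{"col1": [1, 2], "col2": ["a", "b", "c"]},
        {"col1": [3, 4], "col2": ["d", "e", "f"]}]
result = lateral_view_explode(base, "col1", "myCol1")
# 4 rows: myCol1 = 1, 2, 3, 4, each paired with the original col2
```

Stacking a second `lateral_view_explode` over `result` would reproduce the cross-product behavior of multiple LATERAL VIEW clauses shown in the next subsection.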
8.11.3 Multiple LATERAL VIEWs
SELECT myCol1, myCol2 FROM baseTable
LATERAL VIEW explode(col1) myTable1 AS myCol1
LATERAL VIEW explode(col2) myTable2 AS myCol2;
8.11.4 Outer Lateral Views
Analogous to an outer join: with LATERAL VIEW OUTER, when the UDTF produces no rows, the input row is still emitted, with NULL for the UDTF columns.
SELECT * FROM src LATERAL VIEW explode(array()) C AS a LIMIT 1;
-- returns no rows

SELECT * FROM src LATERAL VIEW OUTER explode(array()) C AS a LIMIT 1;
-- returns: 河北省  NULL
8.12 Windowing and analytics
8.12.1 Window functions
Prepare the data:
CREATE EXTERNAL TABLE demo_win_fun (
    cookieid string,
    createtime string,    -- page visit time
    url STRING            -- visited page
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
stored as textfile location '/hw/hive/win/';

cookie1,2015-04-10 10:00:02,url2
cookie1,2015-04-10 10:00:00,url1
cookie1,2015-04-10 10:03:04,1url3
cookie1,2015-04-10 10:50:05,url6
cookie1,2015-04-10 11:00:00,url7
cookie1,2015-04-10 10:10:00,url4
cookie1,2015-04-10 10:50:01,url5
cookie2,2015-04-10 10:00:02,url22
cookie2,2015-04-10 10:00:00,url11
cookie2,2015-04-10 10:03:04,1url33
cookie2,2015-04-10 10:50:05,url66
cookie2,2015-04-10 11:00:00,url77
cookie2,2015-04-10 10:10:00,url44
cookie2,2015-04-10 10:50:01,url55
8.12.1.1 LEAD
Syntax:
LEAD(col, n, DEFAULT): returns the value of col from the row n rows after the current row within the window.
- First argument: the column name.
- Second argument: the offset n (optional, default 1).
- Third argument: the default value used when the row n rows ahead does not exist (optional; NULL if unspecified).
Example
SELECT cookieid, createtime, url,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
    LEAD(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS next_1_time,
    LEAD(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS next_2_time
FROM demo_win_fun;

cookieid createtime url rn next_1_time next_2_time
-------------------------------------------------------------------------------------------
cookie1 2015-04-10 10:00:00 url1 1 2015-04-10 10:00:02 2015-04-10 10:03:04
cookie1 2015-04-10 10:00:02 url2 2 2015-04-10 10:03:04 2015-04-10 10:10:00
cookie1 2015-04-10 10:03:04 1url3 3 2015-04-10 10:10:00 2015-04-10 10:50:01
cookie1 2015-04-10 10:10:00 url4 4 2015-04-10 10:50:01 2015-04-10 10:50:05
cookie1 2015-04-10 10:50:01 url5 5 2015-04-10 10:50:05 2015-04-10 11:00:00
cookie1 2015-04-10 10:50:05 url6 6 2015-04-10 11:00:00 NULL
cookie1 2015-04-10 11:00:00 url7 7 1970-01-01 00:00:00 NULL
cookie2 2015-04-10 10:00:00 url11 1 2015-04-10 10:00:02 2015-04-10 10:03:04
cookie2 2015-04-10 10:00:02 url22 2 2015-04-10 10:03:04 2015-04-10 10:10:00
cookie2 2015-04-10 10:03:04 1url33 3 2015-04-10 10:10:00 2015-04-10 10:50:01
cookie2 2015-04-10 10:10:00 url44 4 2015-04-10 10:50:01 2015-04-10 10:50:05
cookie2 2015-04-10 10:50:01 url55 5 2015-04-10 10:50:05 2015-04-10 11:00:00
cookie2 2015-04-10 10:50:05 url66 6 2015-04-10 11:00:00 NULL
cookie2 2015-04-10 11:00:00 url77 7 1970-01-01 00:00:00 NULL
8.12.1.2 LAG
Syntax
LAG(col, n, DEFAULT): returns the value of col from the row n rows before the current row within the window.
- First argument: the column name.
- Second argument: the offset n (optional, default 1).
- Third argument: the default value used when the row n rows back does not exist (optional; NULL if unspecified).
Example
SELECT cookieid, createtime, url,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
    LAG(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS last_1_time,
    LAG(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS last_2_time
FROM demo_win_fun;

cookieid createtime url rn last_1_time last_2_time
-------------------------------------------------------------------------------------------
cookie1 2015-04-10 10:00:00 url1 1 1970-01-01 00:00:00 NULL
cookie1 2015-04-10 10:00:02 url2 2 2015-04-10 10:00:00 NULL
cookie1 2015-04-10 10:03:04 1url3 3 2015-04-10 10:00:02 2015-04-10 10:00:00
cookie1 2015-04-10 10:10:00 url4 4 2015-04-10 10:03:04 2015-04-10 10:00:02
cookie1 2015-04-10 10:50:01 url5 5 2015-04-10 10:10:00 2015-04-10 10:03:04
cookie1 2015-04-10 10:50:05 url6 6 2015-04-10 10:50:01 2015-04-10 10:10:00
cookie1 2015-04-10 11:00:00 url7 7 2015-04-10 10:50:05 2015-04-10 10:50:01
cookie2 2015-04-10 10:00:00 url11 1 1970-01-01 00:00:00 NULL
cookie2 2015-04-10 10:00:02 url22 2 2015-04-10 10:00:00 NULL
cookie2 2015-04-10 10:03:04 1url33 3 2015-04-10 10:00:02 2015-04-10 10:00:00
cookie2 2015-04-10 10:10:00 url44 4 2015-04-10 10:03:04 2015-04-10 10:00:02
cookie2 2015-04-10 10:50:01 url55 5 2015-04-10 10:10:00 2015-04-10 10:03:04
cookie2 2015-04-10 10:50:05 url66 6 2015-04-10 10:50:01 2015-04-10 10:10:00
cookie2 2015-04-10 11:00:00 url77 7 2015-04-10 10:50:05 2015-04-10 10:50:01
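The LEAD/LAG semantics over one ordered partition can be sketched in Python. A simplified model, not Hive internals; `lead`, `lag`, and the sample list are illustrative assumptions:

```python
# LEAD(col, n, default) reads n rows ahead within the ordered partition;
# LAG(col, n, default) reads n rows back. Out-of-range reads return default.
def lead(values, n=1, default=None):
    return [values[i + n] if i + n < len(values) else default
            for i in range(len(values))]

def lag(values, n=1, default=None):
    return [values[i - n] if i - n >= 0 else default
            for i in range(len(values))]

times = ["10:00:00", "10:00:02", "10:03:04"]
print(lead(times, 1, "1970-01-01"))  # ['10:00:02', '10:03:04', '1970-01-01']
print(lag(times, 1, "1970-01-01"))   # ['1970-01-01', '10:00:00', '10:00:02']
```

This mirrors the result tables above: the last row's LEAD and the first row's LAG fall back to the supplied default, or NULL when none is given.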
8.12.1.3 FIRST_VALUE
Syntax
FIRST_VALUE(col): the first value of col in the partition's sort order, up to the current row.
Example
SELECT cookieid, createtime, url,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
    FIRST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS first1
FROM demo_win_fun;

cookieid createtime url rn first1
---------------------------------------------------------
cookie1 2015-04-10 10:00:00 url1 1 url1
cookie1 2015-04-10 10:00:02 url2 2 url1
cookie1 2015-04-10 10:03:04 1url3 3 url1
cookie1 2015-04-10 10:10:00 url4 4 url1
cookie1 2015-04-10 10:50:01 url5 5 url1
cookie1 2015-04-10 10:50:05 url6 6 url1
cookie1 2015-04-10 11:00:00 url7 7 url1
cookie2 2015-04-10 10:00:00 url11 1 url11
cookie2 2015-04-10 10:00:02 url22 2 url11
cookie2 2015-04-10 10:03:04 1url33 3 url11
cookie2 2015-04-10 10:10:00 url44 4 url11
cookie2 2015-04-10 10:50:01 url55 5 url11
cookie2 2015-04-10 10:50:05 url66 6 url11
cookie2 2015-04-10 11:00:00 url77 7 url11
8.12.1.4 LAST_VALUE
Syntax
LAST_VALUE(col): the last value of col in the partition's sort order, up to the current row (by default, the current row's own value).
Example
SELECT cookieid, createtime, url,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
    LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS last1
FROM demo_win_fun;

cookieid createtime url rn last1
-----------------------------------------------------------------
cookie1 2015-04-10 10:00:00 url1 1 url1
cookie1 2015-04-10 10:00:02 url2 2 url2
cookie1 2015-04-10 10:03:04 1url3 3 1url3
cookie1 2015-04-10 10:10:00 url4 4 url4
cookie1 2015-04-10 10:50:01 url5 5 url5
cookie1 2015-04-10 10:50:05 url6 6 url6
cookie1 2015-04-10 11:00:00 url7 7 url7
cookie2 2015-04-10 10:00:00 url11 1 url11
cookie2 2015-04-10 10:00:02 url22 2 url22
cookie2 2015-04-10 10:03:04 1url33 3 1url33
cookie2 2015-04-10 10:10:00 url44 4 url44
cookie2 2015-04-10 10:50:01 url55 5 url55
cookie2 2015-04-10 10:50:05 url66 6 url66
cookie2 2015-04-10 11:00:00 url77 7 url77
8.12.2 The OVER clause
Prepare the data:
CREATE EXTERNAL TABLE demo_over (
    cookieid string,
    createtime string,   -- day
    pv INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
stored as textfile location '/hw/hive/over/';

cookie1,2015-04-10,1
cookie1,2015-04-11,5
cookie1,2015-04-12,7
cookie1,2015-04-13,3
cookie1,2015-04-14,2
cookie1,2015-04-15,4
cookie1,2015-04-16,4
If ROWS BETWEEN is not specified, the window defaults to the partition start through the current row; if ORDER BY is not specified, the aggregate covers every row in the partition. The key is the ROWS BETWEEN (window) clause:
- PRECEDING: rows before the current row
- FOLLOWING: rows after the current row
- CURRENT ROW: the current row
- UNBOUNDED: the partition boundary; UNBOUNDED PRECEDING means from the partition start, UNBOUNDED FOLLOWING means through the partition end
8.12.2.1 SUM
Syntax
SUM(col): the result depends on the ORDER BY clause; the default sort order is ascending.
Example
SELECT cookieid, createtime, pv,
    SUM(pv) OVER(PARTITION BY cookieid ORDER BY createtime) AS pv1, -- default: partition start through current row
    SUM(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS pv2, -- partition start through current row; same as pv1
    SUM(pv) OVER(PARTITION BY cookieid) AS pv3, -- all rows in the partition
    SUM(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS pv4, -- current row + 3 preceding
    SUM(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING) AS pv5, -- current row + 3 preceding + 1 following
    SUM(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS pv6 -- current row + all following rows
FROM demo_over
order by createtime;

cookieid createtime pv pv1 pv2 pv3 pv4 pv5 pv6
-----------------------------------------------------------------------------
cookie1 2015-04-10 1 1 1 26 1 6 26
cookie1 2015-04-11 5 6 6 26 6 13 25
cookie1 2015-04-12 7 13 13 26 13 16 20
cookie1 2015-04-13 3 16 16 26 16 18 13
cookie1 2015-04-14 2 18 18 26 17 21 10
cookie1 2015-04-15 4 22 22 26 16 20 8
cookie1 2015-04-16 4 26 26 26 13 13 4
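The ROWS BETWEEN frames can be sketched as slicing one ordered partition. A simplified model of the frame arithmetic, not Hive internals; `sum_over` and its parameters are illustrative assumptions:

```python
# Each output value aggregates the slice [i - preceding, i + following]
# around row i, clipped to the partition boundaries.
def sum_over(pv, preceding, following):
    n = len(pv)
    return [sum(pv[max(0, i - preceding):min(n, i + following + 1)])
            for i in range(n)]

pv = [1, 5, 7, 3, 2, 4, 4]  # cookie1's pv by day, from the example above

# UNBOUNDED PRECEDING .. CURRENT ROW -> running total (the pv1/pv2 columns)
print(sum_over(pv, preceding=len(pv), following=0))  # [1, 6, 13, 16, 18, 22, 26]
# 3 PRECEDING .. CURRENT ROW -> the pv4 column
print(sum_over(pv, preceding=3, following=0))        # [1, 6, 13, 16, 17, 16, 13]
# 3 PRECEDING .. 1 FOLLOWING -> the pv5 column
print(sum_over(pv, preceding=3, following=1))        # [6, 13, 16, 18, 21, 20, 13]
```

Swapping `sum` for `len`, `min`, or `max` reproduces the COUNT, MIN, and MAX columns in the following subsections with the same frame logic.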
8.12.2.2 COUNT
Example
SELECT cookieid, createtime, pv,
    COUNT(pv) OVER(PARTITION BY cookieid ORDER BY createtime) AS pv1, -- default: partition start through current row
    COUNT(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS pv2, -- partition start through current row; same as pv1
    COUNT(pv) OVER(PARTITION BY cookieid) AS pv3, -- all rows in the partition
    COUNT(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS pv4, -- current row + 3 preceding
    COUNT(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING) AS pv5, -- current row + 3 preceding + 1 following
    COUNT(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS pv6 -- current row + all following rows
FROM demo_over
order by createtime;

cookieid createtime pv pv1 pv2 pv3 pv4 pv5 pv6
-----------------------------------------------------------------------------
cookie1 2015-04-10 1 1 1 7 1 2 7
cookie1 2015-04-11 5 2 2 7 2 3 6
cookie1 2015-04-12 7 3 3 7 3 4 5
cookie1 2015-04-13 3 4 4 7 4 5 4
cookie1 2015-04-14 2 5 5 7 4 5 3
cookie1 2015-04-15 4 6 6 7 4 5 2
cookie1 2015-04-16 4 7 7 7 4 4 1
8.12.2.3 AVG
Example
SELECT cookieid, createtime, pv,
    AVG(pv) OVER(PARTITION BY cookieid ORDER BY createtime) AS pv1, -- default: partition start through current row
    AVG(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS pv2, -- partition start through current row; same as pv1
    AVG(pv) OVER(PARTITION BY cookieid) AS pv3, -- all rows in the partition
    AVG(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS pv4, -- current row + 3 preceding
    AVG(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING) AS pv5, -- current row + 3 preceding + 1 following
    AVG(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS pv6 -- current row + all following rows
FROM demo_over
order by createtime;

cookieid createtime pv pv1 pv2 pv3 pv4 pv5 pv6
-----------------------------------------------------------------------------
cookie1 2015-04-10 1 1.0 1.0 3.7142857142857144 1.0 3.0 3.7142857142857144
cookie1 2015-04-11 5 3.0 3.0 3.7142857142857144 3.0 4.333333333333333 4.166666666666667
cookie1 2015-04-12 7 4.333333333333333 4.333333333333333 3.7142857142857144 4.333333333333333 4.0 4.0
cookie1 2015-04-13 3 4.0 4.0 3.7142857142857144 4.0 3.6 3.25
cookie1 2015-04-14 2 3.6 3.6 3.7142857142857144 4.25 4.2 3.3333333333333335
cookie1 2015-04-15 4 3.6666666666666665 3.6666666666666665 3.7142857142857144 4.0 4.0 4.0
cookie1 2015-04-16 4 3.7142857142857144 3.7142857142857144 3.7142857142857144 3.25 3.25 4.0
8.12.2.4 MIN
Example
SELECT cookieid, createtime, pv,
    MIN(pv) OVER(PARTITION BY cookieid ORDER BY createtime) AS pv1, -- default: partition start through current row
    MIN(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS pv2, -- partition start through current row; same as pv1
    MIN(pv) OVER(PARTITION BY cookieid) AS pv3, -- all rows in the partition
    MIN(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS pv4, -- current row + 3 preceding
    MIN(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING) AS pv5, -- current row + 3 preceding + 1 following
    MIN(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS pv6 -- current row + all following rows
FROM demo_over
order by createtime;

cookieid createtime pv pv1 pv2 pv3 pv4 pv5 pv6
-----------------------------------------------------------------------------
cookie1 2015-04-10 1 1 1 1 1 1 1
cookie1 2015-04-11 5 1 1 1 1 1 2
cookie1 2015-04-12 7 1 1 1 1 1 2
cookie1 2015-04-13 3 1 1 1 1 1 2
cookie1 2015-04-14 2 1 1 1 2 2 2
cookie1 2015-04-15 4 1 1 1 2 2 4
cookie1 2015-04-16 4 1 1 1 2 2 4
8.12.2.5 MAX
Example
SELECT cookieid, createtime, pv,
    MAX(pv) OVER(PARTITION BY cookieid ORDER BY createtime) AS pv1, -- default: partition start through current row
    MAX(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS pv2, -- partition start through current row; same as pv1
    MAX(pv) OVER(PARTITION BY cookieid) AS pv3, -- all rows in the partition
    MAX(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS pv4, -- current row + 3 preceding
    MAX(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING) AS pv5, -- current row + 3 preceding + 1 following
    MAX(pv) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS pv6 -- current row + all following rows
FROM demo_over
order by createtime;

cookieid createtime pv pv1 pv2 pv3 pv4 pv5 pv6
-----------------------------------------------------------------------------
cookie1 2015-04-10 1 1 1 7 1 5 7
cookie1 2015-04-11 5 5 5 7 5 7 7
cookie1 2015-04-12 7 7 7 7 7 7 7
cookie1 2015-04-13 3 7 7 7 7 7 4
cookie1 2015-04-14 2 7 7 7 7 7 4
cookie1 2015-04-15 4 7 7 7 7 7 4
cookie1 2015-04-16 4 7 7 7 4 4 4
8.12.3 Analytics functions
Prepare the data:
d1,user1,1000
d1,user2,2000
d1,user3,3000
d2,user4,4000
d2,user5,5000
CREATE EXTERNAL TABLE analytics_1 (
    dept STRING,
    userid string,
    sal INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
stored as textfile location '/hw/hive/analytics/1';
cookie1,2015-04-10,1
cookie1,2015-04-11,5
cookie1,2015-04-12,7
cookie1,2015-04-13,3
cookie1,2015-04-14,2
cookie1,2015-04-15,4
cookie1,2015-04-16,4
cookie2,2015-04-10,2
cookie2,2015-04-11,3
cookie2,2015-04-12,5
cookie2,2015-04-13,6
cookie2,2015-04-14,3
cookie2,2015-04-15,9
cookie2,2015-04-16,7
CREATE EXTERNAL TABLE analytics_2 (
    cookieid string,
    createtime string,   -- day
    pv INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
stored as textfile location '/hw/hive/analytics/2';
8.12.3.1 RANK
Syntax
RANK(): ranks rows within the partition; tied values receive the same rank, and a gap is left in the sequence after ties.
8.12.3.2 DENSE_RANK
Syntax
DENSE_RANK(): ranks rows within the partition; tied values receive the same rank, with no gap after ties.
Example
SELECT cookieid, createtime, pv,
    RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn1,
    DENSE_RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn2,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn3
FROM analytics_2
WHERE cookieid = 'cookie1';

cookieid day pv rn1 rn2 rn3
--------------------------------------------------
cookie1 2015-04-12 7 1 1 1
cookie1 2015-04-11 5 2 2 2
cookie1 2015-04-15 4 3 3 3
cookie1 2015-04-16 4 3 3 4
cookie1 2015-04-13 3 5 4 5
cookie1 2015-04-14 2 6 5 6
cookie1 2015-04-10 1 7 6 7

rn1: 04-15 and 04-16 tie for 3rd; 04-13 is ranked 5th.
rn2: 04-15 and 04-16 tie for 3rd; 04-13 is ranked 4th.
rn3: ties still receive distinct, sequential numbers; the order among fully tied rows is not deterministic.
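The difference between the three ranking functions can be sketched over one pre-sorted partition. A simplified model, not Hive internals; `rankings` and the pv list are illustrative assumptions:

```python
# Given values already sorted (here pv DESC), compute RANK (ties share a
# rank, gap follows), DENSE_RANK (ties share a rank, no gap), and
# ROW_NUMBER (always unique and sequential).
def rankings(values):
    rank, dense, row_number = [], [], []
    for i, v in enumerate(values):
        if i > 0 and v == values[i - 1]:
            rank.append(rank[-1])            # tie: repeat rank, leave a gap
            dense.append(dense[-1])          # tie: repeat rank, no gap
        else:
            rank.append(i + 1)
            dense.append(dense[-1] + 1 if dense else 1)
        row_number.append(i + 1)
    return rank, dense, row_number

pv = [7, 5, 4, 4, 3, 2, 1]  # cookie1, ORDER BY pv DESC, from the example above
print(rankings(pv))
# ([1, 2, 3, 3, 5, 6, 7], [1, 2, 3, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7])
```

The three output lists reproduce the rn1, rn2, and rn3 columns of the result table above.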
8.12.3.3 ROW_NUMBER
Syntax
ROW_NUMBER(): assigns a sequence number within the partition, starting from 1, following the sort order. For example, ranking each cookie's days by pv in descending order. ROW_NUMBER() has many uses, such as fetching the top-ranked row in each group, or the first referrer in a session.
Example
SELECT cookieid, createtime, pv,
    ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn
FROM analytics_2;

cookieid day pv rn
-------------------------------------------
cookie1 2015-04-12 7 1
cookie1 2015-04-11 5 2
cookie1 2015-04-15 4 3
cookie1 2015-04-16 4 4
cookie1 2015-04-13 3 5
cookie1 2015-04-14 2 6
cookie1 2015-04-10 1 7
cookie2 2015-04-15 9 1
cookie2 2015-04-16 7 2
cookie2 2015-04-13 6 3
cookie2 2015-04-12 5 4
cookie2 2015-04-14 3 5
cookie2 2015-04-11 3 6
cookie2 2015-04-10 2 7
8.12.3.4 CUME_DIST
Syntax
CUME_DIST(): (number of rows less than or equal to the current value) / (total rows in the partition).
For example: the fraction of all people whose salary is less than or equal to the current row's salary.
Example
SELECT dept, userid, sal,
    CUME_DIST() OVER(ORDER BY sal) AS rn1,
    CUME_DIST() OVER(PARTITION BY dept ORDER BY sal) AS rn2
FROM analytics_1;

dept userid sal rn1 rn2
-------------------------------------------
d1 user1 1000 0.2 0.3333333333333333
d1 user2 2000 0.4 0.6666666666666666
d1 user3 3000 0.6 1.0
d2 user4 4000 0.8 0.5
d2 user5 5000 1.0 1.0

rn1: no PARTITION BY, so all 5 rows form one group.
  Row 1: 1 row has sal <= 1000, so 1/5 = 0.2.
  Row 3: 3 rows have sal <= 3000, so 3/5 = 0.6.
rn2: partitioned by dept; dept=d1 has 3 rows.
  Row 2: 2 rows have sal <= 2000, so 2/3 = 0.6666666666666666.
8.12.3.5 PERCENT_RANK
Syntax
PERCENT_RANK(): (rank of the current row within the partition - 1) / (total rows in the partition - 1).
Example
SELECT dept, userid, sal,
    PERCENT_RANK() OVER(ORDER BY sal) AS rn1,              -- over the whole set
    RANK() OVER(ORDER BY sal) AS rn11,                     -- the RANK value
    SUM(1) OVER(PARTITION BY NULL) AS rn12,                -- total row count
    PERCENT_RANK() OVER(PARTITION BY dept ORDER BY sal) AS rn2
FROM analytics_1;

dept userid sal rn1 rn11 rn12 rn2
---------------------------------------------------
d1 user1 1000 0.0 1 5 0.0
d1 user2 2000 0.25 2 5 0.5
d1 user3 3000 0.5 3 5 1.0
d2 user4 4000 0.75 4 5 0.0
d2 user5 5000 1.0 5 5 1.0

rn1: rn1 = (rn11 - 1) / (rn12 - 1)
  Row 1: (1-1)/(5-1) = 0/4 = 0
  Row 2: (2-1)/(5-1) = 1/4 = 0.25
  Row 4: (4-1)/(5-1) = 3/4 = 0.75
rn2: partitioned by dept; dept=d1 has 3 rows.
  Row 1: (1-1)/(3-1) = 0
  Row 3: (3-1)/(3-1) = 1
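The CUME_DIST and PERCENT_RANK formulas above can be checked with a short Python sketch. A simplified model, not Hive internals; `cume_dist`, `percent_rank`, and the salary list are illustrative assumptions:

```python
# CUME_DIST  = (rows with value <= current) / (total rows)
# PERCENT_RANK = (rank - 1) / (total rows - 1), where rank leaves gaps
def cume_dist(values):
    n = len(values)
    return [sum(1 for v in values if v <= x) / n for x in values]

def percent_rank(values):
    n = len(values)
    ranks = [sorted(values).index(x) + 1 for x in values]  # RANK with gaps
    return [(r - 1) / (n - 1) for r in ranks]

sal = [1000, 2000, 3000, 4000, 5000]    # analytics_1, no partitioning
print(cume_dist(sal))      # [0.2, 0.4, 0.6, 0.8, 1.0]  -- the rn1 column
print(percent_rank(sal))   # [0.0, 0.25, 0.5, 0.75, 1.0] -- the rn1 column
```

Both outputs match the unpartitioned rn1 columns in the two example tables above; partitioning would simply apply the same formulas per group.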
8.12.3.6 NTILE
Syntax
NTILE(n): splits the ordered rows of the partition into n slices and returns the slice number of the current row.
NTILE does not support ROWS BETWEEN.
If the rows do not divide evenly, the leading slices each receive one extra row.
Example
SELECT cookieid, createtime, pv,
    NTILE(2) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn1, -- split the partition into 2 slices
    NTILE(3) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn2, -- split the partition into 3 slices
    NTILE(4) OVER(ORDER BY createtime) AS rn3                        -- split all rows into 4 slices
FROM analytics_2
ORDER BY cookieid,createtime;

cookieid day pv rn1 rn2 rn3
-------------------------------------------------
cookie1 2015-04-10 1 1 1 1
cookie1 2015-04-11 5 1 1 1
cookie1 2015-04-12 7 1 1 2
cookie1 2015-04-13 3 1 2 2
cookie1 2015-04-14 2 2 2 3
cookie1 2015-04-15 4 2 3 3
cookie1 2015-04-16 4 2 3 4
cookie2 2015-04-10 2 1 1 1
cookie2 2015-04-11 3 1 1 1
cookie2 2015-04-12 5 1 1 2
cookie2 2015-04-13 6 1 2 2
cookie2 2015-04-14 3 2 2 3
cookie2 2015-04-15 9 2 3 4
cookie2 2015-04-16 7 2 3 4
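The uneven-slice rule can be sketched as integer division. A simplified model of NTILE's assignment, not Hive internals; `ntile` is an illustrative assumption:

```python
# NTILE(k) over n ordered rows: each slice gets n // k rows, and the first
# n % k slices each get one extra row.
def ntile(n_rows, k):
    base, extra = divmod(n_rows, k)
    tiles = []
    for slice_no in range(1, k + 1):
        size = base + (1 if slice_no <= extra else 0)
        tiles.extend([slice_no] * size)
    return tiles

print(ntile(7, 2))   # [1, 1, 1, 1, 2, 2, 2]        -- the rn1 column
print(ntile(7, 3))   # [1, 1, 1, 2, 2, 3, 3]        -- the rn2 column
print(ntile(14, 4))  # slices of 4, 4, 3, 3 rows    -- the rn3 column
```

The outputs line up with the rn1/rn2 columns of each cookie's 7-row partition and the rn3 column over all 14 rows in the table above.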
8.12.4 DISTINCT (Hive 2.1.0 and later)
COUNT(DISTINCT a) OVER (PARTITION BY c)
8.12.5 Aggregate functions within the OVER clause (Hive 2.1.0 and later)
SELECT rank() OVER (ORDER BY sum(b)) FROM T GROUP BY a;
8.12.6 References
Hive windowing and analytics functions (2): NTILE, ROW_NUMBER, RANK, DENSE_RANK
Hive windowing and analytics functions (3): CUME_DIST, PERCENT_RANK
Hive windowing and analytics functions (4): LAG, LEAD, FIRST_VALUE, LAST_VALUE
8.13 WITH clause
A WITH clause defines a temporary result set (a common table expression) produced by a query; it can be used in SELECT, INSERT, CREATE TABLE AS SELECT, and CREATE VIEW AS SELECT statements.
1. WITH is not supported inside subqueries; 2. recursive queries are not supported.
8.13.1 Syntax
withClause: cteClause (, cteClause)*
cteClause: cte_name AS (select statement)
8.13.2 Examples
with q1 as ( select key from src where key = '5')
select *
from q1;
-- from style
with q1 as (select * from src where key= '5')
from q1
select *;
-- chaining CTEs
with q1 as ( select key from q2 where key = '5'),
q2 as ( select key from src where key = '5')
select * from (select key from q1) a;
-- union example
with q1 as (select * from src where key= '5'),
q2 as (select * from src s2 where key = '4')
select * from q1 union all select * from q2;
-- insert example
create table s1 like src;
with q1 as ( select key, value from src where key = '5')
from q1
insert overwrite table s1
select *;
-- ctas example
create table s2 as
with q1 as ( select key from src where key = '4')
select * from q1;
-- view example
create view v1 as
with q1 as ( select key from src where key = '5')
select * from q1;
select * from v1;
-- view example, name collision
create view v1 as
with q1 as ( select key from src where key = '5')
select * from q1;
with q1 as ( select key from src where key = '4')
select * from v1;
8.14 References
- [Official: LanguageManual Select](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select)