Overview
As long as the correct file format and compression type are configured (for example TextFile + Gzip or SequenceFile + Snappy), Hive can read and parse the data as expected and expose it through SQL.
The SequenceFile structure itself is designed with compression of its contents in mind. Compressing a SequenceFile therefore does not mean first writing the SequenceFile and then compressing the whole file; instead, the record contents are compressed while the SequenceFile is being written. After compression, the result is still, externally, a SequenceFile.
RCFile, ORCFile, Parquet and Avro handle compression the same way as SequenceFile.
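Because the compression happens inside the container, SequenceFile also lets you choose the compression granularity. The following is a minimal sketch, not part of the original examples: the table name student_seq_block is made up for illustration, while mapred.output.compression.type is the standard Hadoop setting (NONE, RECORD or BLOCK); BLOCK usually compresses better because many records are compressed together.

-- Sketch: control where compression is applied inside a SequenceFile.
-- BLOCK compresses groups of records together; RECORD compresses each value on its own.
CREATE TABLE student_seq_block (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE;

SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.type=BLOCK;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

INSERT OVERWRITE TABLE student_seq_block SELECT * FROM student;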
File formats
Textfile
SequenceFile
RCFile
ORCFile
Parquet
Avro
Compression codecs
| No. | Compression format | Algorithm | Multiple files | Splittable | Tool | Extension after compression |
|---|---|---|---|---|---|---|
| 1 | DEFLATE | DEFLATE | No | No | None | .deflate |
| 2 | Gzip | DEFLATE | No | No | gzip | .gz |
| 3 | bzip2 | bzip2 | No | Yes | bzip2 | .bz2 |
| 4 | LZO | LZO | No | No | lzop | .lzo |
| 5 | LZ4 | ?? | ?? | ?? | ?? | ?? |
| 6 | Snappy | ?? | ?? | ?? | ?? | ?? |
| 7 | ZLIB | ?? | ?? | ?? | ?? | ?? |
| 8 | ZIP | DEFLATE | Yes | Yes, at file boundaries | zip | .zip |
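For convenience, the codec class names used with mapred.output.compression.codec in the examples below are collected here. This list is assembled from those examples only; the LZO codec is not part of standard Hadoop and ships with the separately installed hadoop-lzo package.

-- Codec classes referenced later in this article (pick one per session):
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;  -- DEFLATE
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;     -- Gzip
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec;    -- bzip2
SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec;        -- LZO (hadoop-lzo)
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.Lz4Codec;      -- LZ4
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;   -- Snappy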
I. TEXTFILE
1. Text file, no compression
Text file format
The text file (TEXTFILE) format is Hive's default storage format: if no storage format is specified when a Hive table is created, TEXTFILE is used. Each line of a text file is one record, fields can be separated by any delimiter, and the data is easy to share with, view in and edit with other tools such as sed and awk. Compared with binary formats, however, text files take up more storage space.
TEXTFILE data is stored without compression, so disk usage and parsing overhead are both high. It can be combined with Gzip, Bzip2, Snappy and similar codecs (Hive detects the compression automatically and decompresses the files when a query runs; a short sketch of this follows the first example below), but with these codecs Hive does not split the files, so the data cannot be processed in parallel.
-- Create a table stored as a text file:
CREATE EXTERNAL TABLE student_text (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Load data into the table (this launches an MR job):
INSERT OVERWRITE TABLE student_text SELECT * FROM student;
The generated data file can be seen to be an uncompressed text file:
hdfs dfs -cat /user/hive/warehouse/student_text/000000_0
1001810081,cheyo
1001810082,pku
1001810083,rocky
1001810084,stephen
2002820081,sql
2002820082,hello
2002820083,hijj
3001810081,hhhhhhh
3001810082,abbbbbb
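As noted above, Hive recognizes compressed text files by their extension and decompresses them at query time, so a pre-compressed file can be loaded into a plain TEXTFILE table as-is. A minimal sketch, assuming a local gzip-compressed copy of the data at the hypothetical path /tmp/student.csv.gz:

-- Load a gzip-compressed text file directly into the TEXTFILE table created above;
-- Hive recognizes the .gz suffix and decompresses it transparently when queried.
LOAD DATA LOCAL INPATH '/tmp/student.csv.gz' INTO TABLE student_text;
SELECT COUNT(*) FROM student_text;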
2. Text file, DEFLATE compression
-- Create a table stored as a text file:
CREATE TABLE student_text_def (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set the compression type to DEFLATE:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;

-- Load data:
INSERT OVERWRITE TABLE student_text_def SELECT * FROM student;

-- Query the data:
SELECT * FROM student_text_def;
Looking at the data files, there are several .deflate files:
hdfs dfs -ls /user/hive/warehouse/student_text_def/
-rw-r--r--   2015-09-16 12:48 /user/hive/warehouse/student_text_def/000000_0.deflate
-rw-r--r--   2015-09-16 12:48 /user/hive/warehouse/student_text_def/000001_0.deflate
-rw-r--r--   2015-09-16 12:48 /user/hive/warehouse/student_text_def/000002_0.deflate
3. Text file, Gzip compression
-- Create a table stored as a text file:
CREATE TABLE student_text_gzip (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set the compression type to Gzip:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Load data:
INSERT OVERWRITE TABLE student_text_gzip SELECT * FROM student;

-- Query the data:
SELECT * FROM student_text_gzip;
Looking at the data files, there are several .gz files. Unpacking a .gz file reveals plain text:
hdfs dfs -ls /user/hive/warehouse/student_text_gzip/
-rw-r--r--   2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000000_0.gz
-rw-r--r--   2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000001_0.gz
-rw-r--r--   2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000002_0.gz
4. Text file, Bzip2 compression
-- Create a table stored as a text file:
CREATE TABLE student_text_bzip2 (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set the compression type to Bzip2:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec;

-- Load data:
INSERT OVERWRITE TABLE student_text_bzip2 SELECT * FROM student;

-- Query the data:
SELECT * FROM student_text_bzip2;
Looking at the data files, there are several .bz2 files. Unpacking a .bz2 file reveals plain text:
hdfs dfs -ls /user/hive/warehouse/student_text_bzip2
-rw-r--r--   2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000000_0.bz2
-rw-r--r--   2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000001_0.bz2
-rw-r--r--   2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000002_0.bz2
5. Text file, LZO compression
-- Create the table:
CREATE TABLE student_text_lzo (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set LZO compression:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec;

-- Load data:
INSERT OVERWRITE TABLE student_text_lzo SELECT * FROM student;

-- Query the data:
SELECT * FROM student_text_lzo;
-- Example: an external, date-partitioned table over existing LZO-compressed log files,
-- read through the LZO text input format:
CREATE EXTERNAL TABLE `cp_api_log` (`line` STRING)
PARTITIONED BY (`date` STRING)
STORED AS
  INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://BigdataCluster/user/hive/warehouse/sm_data.db/ods/ods_cp_api';
Looking at the data files (for student_text_lzo above), there are several .lzo files. Unpacking an .lzo file reveals plain text.
(Not tested here; the lzop library must be installed first.)
6. Text file, LZ4 compression
-- Create the table:
CREATE TABLE student_text_lz4 (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set LZ4 compression:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.Lz4Codec;

-- Load data:
INSERT OVERWRITE TABLE student_text_lz4 SELECT * FROM student;
Looking at the data files, there are several .lz4 files; they contain the compressed text:
hdfs dfs -ls /user/hive/warehouse/student_text_lz4
-rw-r--r--   2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000000_0.lz4
-rw-r--r--   2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000001_0.lz4
-rw-r--r--   2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000002_0.lz4
7. Text file, Snappy compression
-- Create the table:
CREATE TABLE student_text_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

-- Set Snappy compression:
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;

-- Load data:
INSERT OVERWRITE TABLE student_text_snappy SELECT * FROM student;

-- Query the data:
SELECT * FROM student_text_snappy;
Looking at the data files, there are several .snappy files; they contain the compressed text:
hdfs dfs -ls /user/hive/warehouse/student_text_snappy
Found 3 items
-rw-r--r--   2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000000_0.snappy
-rw-r--r--   2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000001_0.snappy
-rw-r--r--   2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000002_0.snappy
II. SEQUENCEFILE
1. SequenceFile, DEFLATE compression
-- Create a table stored as a SequenceFile:
CREATE TABLE student_seq_def (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE;

-- Set the compression type to DEFLATE:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;

-- Load data:
INSERT OVERWRITE TABLE student_seq_def SELECT * FROM student;

-- Query the data:
SELECT * FROM student_seq_def;
Looking at the data file, it is a single binary (compressed) file:
hdfs dfs -ls /user/hive/warehouse/student_seq_def/
-rw-r--r--   /user/hive/warehouse/student_seq_def/000000_0
2. SequenceFile, Gzip compression
-- Create a table stored as a SequenceFile:
CREATE TABLE student_seq_gzip (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE;

-- Set the compression type to Gzip:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Load data:
INSERT OVERWRITE TABLE student_seq_gzip SELECT * FROM student;

-- Query the data:
SELECT * FROM student_seq_gzip;
Looking at the data file, it is a binary file that cannot be decompressed with gzip:
hdfs dfs -ls /user/hive/warehouse/student_seq_gzip/
-rw-r--r--   /user/hive/warehouse/student_seq_gzip/000000_0
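Because the compression lives inside the SequenceFile, inspecting the raw file requires a SequenceFile-aware tool rather than gzip. One hedged option is the -text command of the Hadoop FsShell, which reads the SequenceFile header and decodes the records with the codec recorded there; it can be invoked from the Hive CLI through its dfs command:

-- Decode the compressed SequenceFile listed above and print its records as text
-- (run from the Hive CLI; equivalent to `hdfs dfs -text ...` from a shell).
dfs -text /user/hive/warehouse/student_seq_gzip/000000_0;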
III. RCFILE
1. RCFile, Gzip compression
-- Create a table stored as RCFile:
CREATE TABLE student_rcfile_gzip (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS RCFILE;

-- Set the compression type to Gzip:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Load data:
INSERT OVERWRITE TABLE student_rcfile_gzip SELECT id, name FROM student;

-- Query the data:
SELECT * FROM student_rcfile_gzip;
IV. ORCFile
ORCFile has its own table properties for choosing the compression format; the Hive codec parameters shown above are generally not used for ORC.
Reference: LanguageManual ORC (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC)
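Besides orc.compress, the LanguageManual ORC page linked above documents several other table properties that control the file layout. The following is a hedged sketch: the property names come from that page, the table name is made up for illustration, and the values shown are the commonly documented defaults, which may differ between Hive versions.

-- ORC layout and compression are configured through TBLPROPERTIES, not Hive codec settings.
CREATE TABLE student_orcfile_tuned (id STRING, name STRING)
STORED AS ORC
TBLPROPERTIES (
  "orc.compress"="ZLIB",           -- NONE, ZLIB or SNAPPY
  "orc.compress.size"="262144",    -- compression chunk size in bytes
  "orc.stripe.size"="67108864",    -- bytes per stripe
  "orc.row.index.stride"="10000",  -- rows between row-index entries
  "orc.create.index"="true"        -- write row indexes
);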
1. ORCFile, ZLIB compression
-- Create the table:
CREATE TABLE student_orcfile_zlib (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS ORC
TBLPROPERTIES ("orc.compress"="ZLIB");

-- Load data:
INSERT OVERWRITE TABLE student_orcfile_zlib SELECT id, name FROM student;

-- Query the data:
SELECT * FROM student_orcfile_zlib;
2. ORCFile, Snappy compression
-- Create the table:
CREATE TABLE student_orcfile_snappy2 (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS ORC
TBLPROPERTIES ("orc.compress"="SNAPPY");

-- Load data:
INSERT OVERWRITE TABLE student_orcfile_snappy2 SELECT id, name FROM student;

-- Query the data:
SELECT * FROM student_orcfile_snappy2;
-- Example: an external, date-partitioned ORC table with Snappy compression:
CREATE EXTERNAL TABLE `cp_api_log` (`line` STRING)
PARTITIONED BY (`date` STRING)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
STORED AS ORC
LOCATION 'hdfs://BigdataCluster/user/hive/warehouse/sm_data.db/ods/ods_cp_api'
TBLPROPERTIES ("orc.compress"="SNAPPY");
The following approach is generally not used. Its output differs from the SNAPPY-compressed result above even though the same codec is named; the exact reason needs further investigation.
-- Create the table:
CREATE TABLE student_orcfile_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS ORC;

-- Set compression:
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;

-- Load data:
INSERT OVERWRITE TABLE student_orcfile_snappy SELECT id, name FROM student;

-- Query the data:
SELECT * FROM student_orcfile_snappy;
V. Parquet
1. Parquet, Snappy compression
-- Create the table:
CREATE TABLE student_parquet_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS PARQUET;

-- Set compression:
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;

-- Load data:
INSERT OVERWRITE TABLE student_parquet_snappy SELECT id, name FROM student;

-- Query the data:
SELECT * FROM student_parquet_snappy;
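Similar to ORC, Parquet has its own compression switch, parquet.compression, which can be used instead of the generic codec parameters. Whether it is honored as a table property depends on the Hive version, so the following is a hedged sketch; the table name student_parquet_snappy2 is made up for illustration.

-- Sketch: request Snappy-compressed Parquet output via parquet.compression.
CREATE TABLE student_parquet_snappy2 (id STRING, name STRING)
STORED AS PARQUET
TBLPROPERTIES ("parquet.compression"="SNAPPY");

-- Alternatively, set it for the session before inserting:
SET parquet.compression=SNAPPY;
INSERT OVERWRITE TABLE student_parquet_snappy2 SELECT id, name FROM student;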
VI. Avro
1. Avro, Snappy compression
-- Create the table:
CREATE TABLE student_avro_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS AVRO;

-- Set compression:
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;

-- Load data:
INSERT OVERWRITE TABLE student_avro_snappy SELECT id, name FROM student;

-- Query the data:
SELECT * FROM student_avro_snappy;
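Avro likewise has its own codec setting inside the Avro container format. A hedged sketch, assuming the avro.output.codec property honored by Hive's Avro output format (typical values are deflate and snappy), reusing the student_avro_snappy table created above:

-- Sketch: compress Avro output with Snappy via the Avro container codec setting.
SET hive.exec.compress.output=true;
SET avro.output.codec=snappy;
INSERT OVERWRITE TABLE student_avro_snappy SELECT id, name FROM student;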