Configuring Hadoop to support the LZO compression format with splittable files


【Introduction】

1. LZO itself is not splittable, but once we build an index for an LZO-compressed file, it becomes splittable.

2. Linux does not ship with LZO support, so we need to download and install three packages: lzo, lzop, and hadoop-gpl-packaging.

3. The main job of hadoop-gpl-packaging is to create an index for an LZO-compressed file; without an index LZO is not splittable, so no matter how large the file is, it gets only one map task.

 

【Note】None of my data files exceeds 128 MB after compression, so to demonstrate that an un-indexed LZO file larger than one block still runs with a single map task, I lowered the block size to 10 MB.

[hadoop@hadoop001 hadoop]$ vi hdfs-site.xml 

<property>
<name>dfs.blocksize</name>
<value>10485760</value>
</property>
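
The change takes effect after HDFS is restarted and only applies to files written afterwards. An optional sanity check (this simply reads back the configured value):

[hadoop@hadoop001 hadoop]$ hdfs getconf -confKey dfs.blocksize
10485760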

【Install the dependencies】

Before installing anything, first run:

[hadoop@hadoop001 ~]$ which lzop

/usr/bin/lzop

【Note】This output means lzop is already installed. If nothing is found, run the commands below.

# If lzop is missing, run the following install commands (run them as root, otherwise you will hit permission errors):
[root@hadoop001 ~]# yum install -y svn ncurses-devel

[root@hadoop001 ~]# yum install -y gcc gcc-c++ make cmake

[root@hadoop001 ~]# yum install -y openssl openssl-devel svn ncurses-devel zlib-devel libtool

[root@hadoop001 ~]# yum install -y lzo lzo-devel lzop autoconf automake cmake 

[root@hadoop001 ~]# yum -y install lzo-devel zlib-devel gcc autoconf automake libtool
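
After installation you can confirm the LZO bits actually landed (a quick, optional check):

[root@hadoop001 ~]# which lzop
[root@hadoop001 ~]# rpm -q lzo lzo-devel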

 

【Compress the test data file with lzop】

LZO compress:   lzop -v filename
LZO decompress: lzop -dv filename

[hadoop@hadoop001 data]$ ll

-rw-r--r-- 1 hadoop hadoop 68051224 Apr 17 17:37 part-r-00000

[hadoop@hadoop001 data]$ lzop -v part-r-00000
compressing part-r-00000 into part-r-00000.lzo

[hadoop@hadoop001 data]$ ll

-rw-r--r-- 1 hadoop hadoop 68051224 Apr 17 17:37 part-r-00000
-rw-r--r-- 1 hadoop hadoop 29975501 Apr 17 17:37 part-r-00000.lzo    ## the compressed test data

[hadoop@hadoop001 data]$ pwd
/home/hadoop/data
[hadoop@hadoop001 data]$ du -sh /home/hadoop/data/*
65M /home/hadoop/data/part-r-00000
29M /home/hadoop/data/part-r-00000.lzo

 

【Build and install hadoop-lzo】

[hadoop@hadoop001 app]$ wget https://github.com/twitter/hadoop-lzo/archive/master.zip
--2019-04-18 14:02:32-- https://github.com/twitter/hadoop-lzo/archive/master.zip
Resolving github.com... 13.250.177.223, 52.74.223.119, 13.229.188.59
Connecting to github.com|13.250.177.223|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/twitter/hadoop-lzo/zip/master [following]
--2019-04-18 14:02:33-- https://codeload.github.com/twitter/hadoop-lzo/zip/master
Resolving codeload.github.com... 13.229.189.0, 13.250.162.133, 54.251.140.56
Connecting to codeload.github.com|13.229.189.0|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/zip]
Saving to: “master.zip”

[ <=> ] 1,040,269 86.7K/s in 11s

2019-04-18 14:02:44 (95.4 KB/s) - “master.zip” saved [1040269]

[hadoop@hadoop001 app]$ ll

 

-rw-rw-r-- 1 hadoop hadoop 1040269 Apr 18 14:02 master.zip

[hadoop@hadoop001 app]$ unzip master.zip

-rw-rw-r-- 1 hadoop hadoop 1040269 Apr 18 14:02 master.zip

drwxrwxr-x  5 hadoop hadoop      4096 Apr 17 13:42 hadoop-lzo-master  # the unzipped source tree; the master-branch zip extracts to hadoop-lzo-master, hence the name change

 

 

[hadoop@hadoop001 hadoop-lzo-master]$ pwd
/home/hadoop/app/hadoop-lzo-master

[hadoop@hadoop001 hadoop-lzo-master]$ vi pom.xml 

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<hadoop.current.version>2.6.0</hadoop.current.version>
<hadoop.old.version>1.0.4</hadoop.old.version>
</properties>
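
The hadoop.current.version property should match the Hadoop you actually run; here 2.6.0 lines up with hadoop-2.6.0-cdh5.7.0. An optional check before building:

[hadoop@hadoop001 ~]$ hadoop version   # should report 2.6.0-cdh5.7.0 in this setup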

 

### These four exports force a 64-bit build (CFLAGS/CXXFLAGS) and point the native build at a local LZO install (C_INCLUDE_PATH/LIBRARY_PATH). Another write-up I referenced skips them entirely, so they are likely optional when LZO was installed system-wide via yum.

[hadoop@hadoop001 hadoop-lzo-master]$ export CFLAGS=-m64

[hadoop@hadoop001 hadoop-lzo-master]$ export CXXFLAGS=-m64

[hadoop@hadoop001 hadoop-lzo-master]$ export C_INCLUDE_PATH=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/lzo/include

[hadoop@hadoop001 hadoop-lzo-master]$ export LIBRARY_PATH=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/lzo/lib

###

【Build with Maven】

[hadoop@hadoop001 hadoop-lzo-master]$ mvn clean package -Dmaven.test.skip=true

# Wait for BUILD SUCCESS, which indicates the build worked; it takes a few minutes.

【Troubleshooting】The very first time I ran this, it failed immediately with:

Could not create local repository at /root/maven_repo/repo -> [Help 1]

This error smelled like a user mismatch. When I originally compiled Hadoop itself, I did it as root to avoid permission problems, so the Maven local repository also ended up under root. I never considered that this build would need the same repository, so hitting this error was disheartening.

Solution attempts:

Attempt 1: opened the permissions on maven_repo and everything under it all the way up — failed.

Attempt 2: changed the owner and group of maven_repo to hadoop — failed.

Attempt 3: I had thought of this from the start but was afraid it would cause problems; with nothing else left, I gave it a try:

move the maven_repo directory into the hadoop user's app directory,

then edit the settings file under the Maven installation directory so its local repository path matches maven_repo's new location.

[hadoop@hadoop001 repo]$ pwd
/home/hadoop/app/maven_repo/repo
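
For reference, this is the element to change (a minimal sketch; the settings.xml path depends on where your Maven is installed):

[hadoop@hadoop001 ~]$ vi $MAVEN_HOME/conf/settings.xml

<localRepository>/home/hadoop/app/maven_repo/repo</localRepository>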

Rerun the build command:

[hadoop@hadoop001 hadoop-lzo-master]$ mvn clean package -Dmaven.test.skip=true

BUILD SUCCESS

Success at last!

 

【Check the build output; hadoop-lzo-0.4.21-SNAPSHOT.jar is the jar we need】

[hadoop@hadoop001 target]$ pwd
/home/hadoop/app/hadoop-lzo-master/target

[hadoop@hadoop001 target]$ ll
total 448
drwxrwxr-x 2 hadoop hadoop 4096 Apr 17 13:43 antrun
drwxrwxr-x 5 hadoop hadoop 4096 Apr 17 13:43 apidocs
drwxrwxr-x 5 hadoop hadoop 4096 Apr 17 13:42 classes
drwxrwxr-x 3 hadoop hadoop 4096 Apr 17 13:42 generated-sources
-rw-rw-r-- 1 hadoop hadoop 180807 Apr 17 13:43 hadoop-lzo-0.4.21-SNAPSHOT.jar
-rw-rw-r-- 1 hadoop hadoop 184553 Apr 17 13:43 hadoop-lzo-0.4.21-SNAPSHOT-javadoc.jar
-rw-rw-r-- 1 hadoop hadoop 52024 Apr 17 13:43 hadoop-lzo-0.4.21-SNAPSHOT-sources.jar
drwxrwxr-x 2 hadoop hadoop 4096 Apr 17 13:43 javadoc-bundle-options
drwxrwxr-x 2 hadoop hadoop 4096 Apr 17 13:43 maven-archiver
drwxrwxr-x 3 hadoop hadoop 4096 Apr 17 13:42 native
drwxrwxr-x 3 hadoop hadoop 4096 Apr 17 13:43 test-classes

【Deploy the jar】

Copy hadoop-lzo-0.4.21-SNAPSHOT.jar into Hadoop's $HADOOP_HOME/share/hadoop/common/ directory so Hadoop can pick it up.

[hadoop@hadoop001 hadoop-lzo-master]$ cp target/hadoop-lzo-0.4.21-SNAPSHOT.jar ~/app/hadoop-2.6.0-cdh5.7.0/share/hadoop/common/

[hadoop@hadoop001 hadoop-lzo-master]$ ll ~/app/hadoop-2.6.0-cdh5.7.0/share/hadoop/common/hadoop-lzo*
-rw-rw-r-- 1 hadoop hadoop 180807 Apr 17 13:52 /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/share/hadoop/common/hadoop-lzo-0.4.21-SNAPSHOT.jar
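
The build also produces a native GPL compression library under target/native/. Depending on your environment, Hadoop may need it on its native library path as well; a hedged sketch (Linux-amd64-64 is the directory a 64-bit Linux build typically produces, so check what your build actually emitted):

[hadoop@hadoop001 hadoop-lzo-master]$ ls target/native/
[hadoop@hadoop001 hadoop-lzo-master]$ cp target/native/Linux-amd64-64/lib/libgplcompression.* ~/app/hadoop-2.6.0-cdh5.7.0/lib/native/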

 

【Configure core-site.xml and mapred-site.xml】

【Note】Shut the cluster down before changing the configuration, or you may hit a trap: after configuring, the cluster stops but then fails to start again, complaining that the port is already in use, even though jps shows nothing — in other words, the process is stuck in a hung state.

[hadoop@hadoop001 ~]$ vim ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/core-site.xml

<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.GzipCodec,
org.apache.hadoop.io.compress.DefaultCodec,
org.apache.hadoop.io.compress.BZip2Codec,
org.apache.hadoop.io.compress.SnappyCodec,
com.hadoop.compression.lzo.LzoCodec,
com.hadoop.compression.lzo.LzopCodec
</value>
</property>
<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
</property>

【Explanation】This registers the com.hadoop.compression.lzo.LzoCodec and com.hadoop.compression.lzo.LzopCodec compression classes.
io.compression.codec.lzo.class must be set to LzoCodec, not LzopCodec; otherwise the compressed output will not be splittable.

vim ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/mapred-site.xml

<property>
<name>mapreduce.map.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.map.output.compress.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
<name>mapreduce.output.fileoutputformat.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.output.fileoutputformat.compress.codec</name>
<value>org.apache.hadoop.io.compress.BZip2Codec</value>
</property>

【Start Hadoop】

[hadoop@hadoop001 sbin]$ pwd
/home/hadoop/app/hadoop/sbin

[hadoop@hadoop001 sbin]$ start-all.sh

【Start Hive and test splitting ## everything below runs in Hive's default database】

【Create the table】

CREATE EXTERNAL TABLE g6_access_lzo (
cdn string,
region string,
level string,
time string,
ip string,
domain string,
url string,
traffic bigint)
PARTITIONED BY (
day string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
LOCATION '/g6/hadoop/access/compress';

【Note】# This table is for testing LZO-compressed files. If the hadoop-lzo jar is not under Hadoop's common directory, you will get a class-not-found exception for DeprecatedLzoTextInputFormat.

【Add the partition】

alter table g6_access_lzo add if not exists partition(day='20190418');

【Load the local LZO-compressed data into table g6_access_lzo】

LOAD DATA LOCAL INPATH '/home/hadoop/data/part-r-00000.lzo' OVERWRITE INTO TABLE g6_access_lzo partition (day="20190418");

[hadoop@hadoop001 sbin]$ hadoop fs -du -s -h /g6/hadoop/access/compress/day=20190418/*
28.6 M 28.6 M /g6/hadoop/access/compress/day=20190418/part-r-00000.lzo
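
At this point, a quick way to confirm the LzopCodec is actually being picked up is to let Hadoop decompress the file itself; hadoop fs -text resolves the codec from io.compression.codecs (output omitted here):

[hadoop@hadoop001 sbin]$ hadoop fs -text /g6/hadoop/access/compress/day=20190418/part-r-00000.lzo | head -3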

【Query test】

hive (default)> select count(1) from g6_access_lzo;

## From the job log ## Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.69 sec HDFS Read: 29982759 HDFS Write: 57 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 690 msec

【Conclusion 1】The block size is set to 10 MB and part-r-00000.lzo is far larger than that, yet only one map task ran. Clearly, an LZO file without an index is not splittable.

 

【Test making LZO splittable】

# Enable output compression. The output codec must be set to LzopCodec; LzoCodec writes files with the .lzo_deflate suffix, which cannot be indexed.
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec;

【Create the split-test table】

create table g6_access_lzo_split
STORED AS INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
as select * from g6_access;

hive (default)> desc formatted g6_access_lzo_split;  # find where the table data lives

Location:           hdfs://hadoop001:9000/user/hive/warehouse/g6_access_lzo_split

[hadoop@hadoop001 sbin]$ hadoop fs -du -s -h /user/hive/warehouse/g6_access_lzo_split/*
28.9 M 28.9 M /user/hive/warehouse/g6_access_lzo_split/000000_0.lzo

# Build the LZO file index, using the indexer class shipped in the jar we just built

[hadoop@hadoop001 ~]$ hadoop jar ~/app/hadoop-2.6.0-cdh5.7.0/share/hadoop/common/hadoop-lzo-0.4.21-SNAPSHOT.jar \
com.hadoop.compression.lzo.LzoIndexer /user/hive/warehouse/g6_access_lzo_split

[hadoop@hadoop001 sbin]$ hadoop fs -du -s -h /user/hive/warehouse/g6_access_lzo_split/*
28.9 M 28.9 M /user/hive/warehouse/g6_access_lzo_split/000000_0.lzo
2.3 K 2.3 K /user/hive/warehouse/g6_access_lzo_split/000000_0.lzo.index
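
For a directory with many large files, the same jar also ships com.hadoop.compression.lzo.DistributedLzoIndexer, which builds the indexes as a MapReduce job instead of locally; a sketch:

[hadoop@hadoop001 ~]$ hadoop jar ~/app/hadoop-2.6.0-cdh5.7.0/share/hadoop/common/hadoop-lzo-0.4.21-SNAPSHOT.jar \
com.hadoop.compression.lzo.DistributedLzoIndexer /user/hive/warehouse/g6_access_lzo_split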

 

【Query test】

hive (default)> select count(1) from g6_access_lzo_split;

Stage-Stage-1: Map: 3 Reduce: 1 Cumulative CPU: 5.85 sec HDFS Read: 30504786 HDFS Write: 57 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 850 msec

【Conclusion 2】Here we can see 3 map tasks; in other words, once an LZO file has an index built for it, it is splittable.

 

 

【Summary】

Among the compression formats commonly used in big data, only bzip2 is natively splittable; LZO only becomes splittable after an index is built for the file.
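
For quick reference, a rough comparison of the common codecs (general Hadoop behavior, not something measured in this post):

Codec    Suffix      Splittable
gzip     .gz         no
snappy   .snappy     no
lzo      .lzo        only with an index
bzip2    .bz2        yes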

 

【References】

https://my.oschina.net/u/4005872/blog/3036700

https://blog.csdn.net/qq_32641659/article/details/89339471

 

