The complete process of adding LZO support to Hadoop


Introduction

  Enabling LZO

    Enabling LZO compression is useful even for small clusters: it typically reduces raw logs to about 1/3 of their original size, and decompression is also fairly fast.

  Installing LZO

    LZO is not natively supported by Linux, so the software packages must be downloaded and installed. At least three packages are needed: lzo, lzop, and hadoop-gpl-packaging.

  Adding an index

    The main purpose of gpl-packaging is to create an index for compressed LZO files. Without an index, an LZO file is not splittable, so no matter how much larger the compressed file is than the HDFS block size, it will be processed by a single map task.
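To make the effect of the index concrete, here is a small arithmetic sketch (pure illustration, not Hadoop code): an unindexed .lzo file goes to one map task, while an indexed one splits roughly along HDFS block boundaries.

```python
# Illustrative sketch: estimate how many map tasks an LZO file gets
# with and without an index file.
import math

def estimated_map_tasks(file_size_bytes, block_size_bytes, indexed):
    """An unindexed .lzo file is not splittable, so the whole file goes
    to a single map task; an indexed one splits at block boundaries."""
    if not indexed:
        return 1
    return math.ceil(file_size_bytes / block_size_bytes)

# A 1 GiB compressed log on a cluster with 128 MiB HDFS blocks:
GiB, MiB = 1 << 30, 1 << 20
print(estimated_map_tasks(1 * GiB, 128 * MiB, indexed=False))  # 1
print(estimated_map_tasks(1 * GiB, 128 * MiB, indexed=True))   # 8
```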

 

Install the lzop native library

[root@localhost ~]# wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
[root@localhost ~]# tar -zxvf lzo-2.06.tar.gz
[root@localhost ~]# cd lzo-2.06
[root@localhost ~]# export CFLAGS=-m64
[root@localhost ~]# ./configure --enable-shared --prefix=/usr/local/hadoop/lzo/
[root@localhost ~]# make && sudo make install

After building the lzo package, archive everything generated under /usr/local/hadoop/lzo and sync it to every machine in the cluster.

Building the lzo package requires some development tools; the build environment can be set up with the following command:

 

[root@localhost ~]# yum -y install lzo-devel zlib-devel gcc autoconf automake libtool

Install hadoop-lzo


Download Twitter's hadoop-lzo; the unpacked directory is named hadoop-lzo-master.

[root@localhost ~]#  wget https://github.com/twitter/hadoop-lzo/archive/master.zip
[root@localhost ~]#  unzip master
Alternatively, you can fetch it via git with the following command:

[root@localhost ~]#  git clone https://github.com/twitter/hadoop-lzo.git
Change the Hadoop dependency in hadoop-lzo's pom.xml to Hadoop 2.9.2:
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hadoop.current.version>2.9.2</hadoop.current.version>
    <hadoop.old.version>1.0.4</hadoop.old.version>
</properties>
Then enter the hadoop-lzo-master directory and run the following commands in order:
[root@localhost ~]# export CFLAGS=-m64
[root@localhost ~]# export CXXFLAGS=-m64
[root@localhost ~]# export C_INCLUDE_PATH=/usr/local/hadoop/lzo/include
[root@localhost ~]# export LIBRARY_PATH=/usr/local/hadoop/lzo/lib
[root@localhost ~]# mvn clean package -Dmaven.test.skip=true
[root@localhost ~]# cp target/hadoop-lzo-0.4.18-SNAPSHOT.jar $HADOOP_HOME/share/hadoop/common/
[root@localhost ~]# cd target/native/Linux-amd64-64
[root@localhost ~]# tar -cBf - -C lib . | tar -xBvf - -C ~
[root@localhost ~]# cp ~/libgplcompression* $HADOOP_HOME/lib/native/

After the tar -cBf - -C lib . | tar -xBvf - -C ~ command, the following files are generated under ~.
libgplcompression.so and libgplcompression.so.0 are symlinks pointing to libgplcompression.so.0.0.0.
[root@localhost ~]# ls -l
-rw-r--r-- 1 libgplcompression.a
-rw-r--r-- 1 libgplcompression.la
lrwxrwxrwx 1 libgplcompression.so -> libgplcompression.so.0.0.0
lrwxrwxrwx 1 libgplcompression.so.0 -> libgplcompression.so.0.0.0
-rwxr-xr-x 1 libgplcompression.so.0.0.0

Sync the newly generated libgplcompression* files and target/hadoop-lzo-0.4.18-SNAPSHOT.jar to the corresponding directories on every machine in the cluster.

 

scp -r hadoop-lzo-0.4.18-SNAPSHOT.jar root@node02:$HADOOP_HOME/share/hadoop/common/
scp -r libgplcompression* root@node03:$HADOOP_HOME/lib/native/
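With more than a couple of workers, the scp invocations above are easier to manage as a loop. A minimal sketch (the node list is a placeholder for your cluster; the scp commands are echoed as a dry run so they can be reviewed before running, with `\$HADOOP_HOME` left for the remote shell to expand):

```shell
#!/bin/sh
# Dry-run sketch: print the scp commands needed to sync the LZO artifacts
# to every worker node. Remove the leading 'echo' to actually copy.
NODES="node02 node03"   # placeholder node list; replace with your own
for node in $NODES; do
  echo scp "hadoop-lzo-0.4.18-SNAPSHOT.jar" "root@$node:\$HADOOP_HOME/share/hadoop/common/"
  echo scp -r "libgplcompression*" "root@$node:\$HADOOP_HOME/lib/native/"
done
```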

 

Configure the Hadoop environment

1. Add the following line to $HADOOP_HOME/etc/hadoop/hadoop-env.sh:

export LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib

 

2. Add the following to $HADOOP_HOME/etc/hadoop/core-site.xml:

<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
<property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
3. Add the following to $HADOOP_HOME/etc/hadoop/mapred-site.xml:
<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
</property>

<property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>

<property>
    <name>mapred.child.env</name>
    <value>LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib</value>
</property>

4. Sync the modified configuration files to every machine in the cluster and restart the Hadoop cluster. LZO can now be used in Hadoop.
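As a quick sanity check before restarting, the codec list in core-site.xml can be parsed and inspected. A minimal sketch (the XML snippet is embedded inline here rather than read from the real file):

```python
# Sketch: parse the io.compression.codecs property from a core-site.xml
# snippet and check that the LZO codecs are registered.
import xml.etree.ElementTree as ET

CORE_SITE = """
<configuration>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
  </property>
</configuration>
"""

root = ET.fromstring(CORE_SITE)
codecs = {}
for prop in root.iter("property"):
    codecs[prop.findtext("name")] = prop.findtext("value").split(",")

registered = codecs["io.compression.codecs"]
print("com.hadoop.compression.lzo.LzopCodec" in registered)  # True
```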

 

Verify LZO (via a Hive test)

Create an LZO table

CREATE TABLE lzo (
  ip STRING,
  user STRING,
  time STRING,
  request STRING,
  status STRING,
  size STRING,
  rt STRING,
  referer STRING,
  agent STRING,
  forwarded STRING
)
PARTITIONED BY (
  date STRING,
  host STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";

Load the data

 

LOAD DATA Local INPATH '/home/hadoop/data/access_20151230_25.log.lzo' INTO TABLE lzo PARTITION(date=20151229,host=25);

The file /home/hadoop/data/access_20151219.log has the following format:

 

xxx.xxx.xx.xxx  -       [23/Dec/2015:23:22:38 +0800]    "GET /ClientGetResourceDetail.action?id=318880&token=Ocm HTTP/1.1"   200     199     0.008   "xxx.com"        "Android4.1.2/LENOVO/Lenovo A706/ch_lenovo/80"   
"-"

Simply running lzop /home/hadoop/data/access_20151219.log produces the LZO-compressed file /home/hadoop/data/access_20151219.log.lzo.

 

Index the LZO files

1. Index LZO files in batch (as a distributed MapReduce job)

$HADOOP_HOME/bin/hadoop jar /home/hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar com.hadoop.compression.lzo.DistributedLzoIndexer /user/hive/warehouse/lzo

2. Index a single LZO file (locally)

$HADOOP_HOME/bin/hadoop jar /home/hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar com.hadoop.compression.lzo.LzoIndexer /user/hive/warehouse/lzo/20151228/lzo_test_20151228.lzo
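The .index file the indexer writes next to each .lzo file is, to the best of my understanding, just a flat list of big-endian 64-bit byte offsets, one per compressed LZO block, which is what lets DeprecatedLzoTextInputFormat align splits to block boundaries. A small sketch of that layout (the file name and offsets are made up for illustration):

```python
# Sketch of the .lzo.index layout: a flat sequence of big-endian
# 64-bit offsets of compressed LZO blocks within the .lzo file.
import struct

def write_index(path, block_offsets):
    with open(path, "wb") as f:
        for off in block_offsets:
            f.write(struct.pack(">q", off))  # big-endian signed 64-bit

def read_index(path):
    with open(path, "rb") as f:
        data = f.read()
    return [off for (off,) in struct.iter_unpack(">q", data)]

offsets = [0, 262144, 524288, 786432]          # hypothetical block starts
write_index("access.log.lzo.index", offsets)
print(read_index("access.log.lzo.index") == offsets)  # True
```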
Run an MR job via Hive
If the Hive console prints output like the following, everything is working correctly:

hive> set hive.exec.reducers.max=10;
hive> set mapred.reduce.tasks=10;
hive> select ip,rt from lzo limit 10;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1388065803340_0009, Tracking URL = http://mycluster:8088/proxy/application_1388065803340_0009/
Kill Command = /home/hadoop/hadoop-2.2.0/bin/hadoop job -kill job_1388065803340_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-12-27 09:13:39,163 Stage-1 map = 0%, reduce = 0%
2013-12-27 09:13:45,343 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.22 sec
2013-12-27 09:13:46,369 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.22 sec
MapReduce Total cumulative CPU time: 1 seconds 220 msec
Ended Job = job_1388065803340_0009
MapReduce Jobs Launched:
Job 0: Map: 1  Cumulative CPU: 1.22 sec  HDFS Read: 63570  HDFS Write: 315  SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 220 msec
OK
xxx.xxx.xx.xxx "XXX.com"
Time taken: 17.498 seconds, Fetched: 10 row(s)

Change the input/output format of an existing Hive table

ALTER TABLE lzo SET FILEFORMAT
  INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
  SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe';

