This post walks through Flume (Spooling Directory Source) + HDFS. For details on the several kinds of Flume sources, see http://www.cnblogs.com/cnmenglang/p/6544081.html
1. Materials: apache-flume-1.7.0-bin.tar.gz
2. Configuration steps:
a. Upload the tarball to the resources directory of your user (here, user mfz).
b. Extract it:
tar -xzvf apache-flume-1.7.0-bin.tar.gz
c. Rename the template files under conf:
mv flume-conf.properties.template flume-conf.properties
mv flume-env.sh.template flume-env.sh
d. Edit the environment variables in flume-env.sh, adding the following:
export JAVA_HOME=/usr/java/jdk1.8.0_102
FLUME_CLASSPATH="/home/mfz/hadoop-2.7.3/share/hadoop/hdfs/*"
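The FLUME_CLASSPATH entry is what lets the HDFS sink load the Hadoop client classes; if the glob matches nothing, the agent will typically die at startup with a class-loading error. A quick sanity check, assuming the same hadoop-2.7.3 path as above:

# The HDFS sink needs the Hadoop jars on the classpath;
# confirm the wildcard actually matches some jar files.
ls /home/mfz/hadoop-2.7.3/share/hadoop/hdfs/*.jar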
e. Create a new file, hdfs.properties:
LogAgent.sources = apache
LogAgent.channels = fileChannel
LogAgent.sinks = HDFS

# sources config
# spooldir watches the configured directory for new files; as soon as a new
# file appears it is parsed and written to the channel, and the finished file
# is renamed with the suffix .COMPLETED
LogAgent.sources.apache.type = spooldir
LogAgent.sources.apache.spoolDir = /tmp/logs
LogAgent.sources.apache.channels = fileChannel
LogAgent.sources.apache.fileHeader = false

# sinks config
LogAgent.sinks.HDFS.channel = fileChannel
LogAgent.sinks.HDFS.type = hdfs
LogAgent.sinks.HDFS.hdfs.path = hdfs://master:9000/data/logs/%Y-%m-%d/%H
LogAgent.sinks.HDFS.hdfs.fileType = DataStream
LogAgent.sinks.HDFS.hdfs.writeFormat = TEXT
LogAgent.sinks.HDFS.hdfs.filePrefix = flumeHdfs
LogAgent.sinks.HDFS.hdfs.batchSize = 1000
LogAgent.sinks.HDFS.hdfs.rollSize = 10240
LogAgent.sinks.HDFS.hdfs.rollCount = 0
LogAgent.sinks.HDFS.hdfs.rollInterval = 1
LogAgent.sinks.HDFS.hdfs.useLocalTimeStamp = true

# channels config
LogAgent.channels.fileChannel.type = memory
LogAgent.channels.fileChannel.capacity = 10000
LogAgent.channels.fileChannel.transactionCapacity = 100
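Two details of this config are worth calling out. First, rollInterval = 1 rolls the HDFS file every second, which is why the .tmp file gets renamed so quickly in the verification step below. Second, despite its name, fileChannel is declared with type memory, so any events still buffered in the channel are lost if the agent crashes. If durability matters, Flume's real file channel is a drop-in replacement; a minimal sketch, assuming a writable directory such as /home/mfz/flume-data:

# Durable alternative to the memory channel above (paths are hypothetical):
LogAgent.channels.fileChannel.type = file
LogAgent.channels.fileChannel.checkpointDir = /home/mfz/flume-data/checkpoint
LogAgent.channels.fileChannel.dataDirs = /home/mfz/flume-data/data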
3. Startup:
In the apache-flume directory, run:
bin/flume-ng agent --conf-file conf/hdfs.properties -c conf/ --name LogAgent -Dflume.root.logger=DEBUG,console
Startup fails because the monitored directory does not exist yet. Press Ctrl+C to exit, then create /tmp/logs:
mkdir -p /tmp/logs
Restart the agent:
Startup succeeds!
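The command above logs to the console, which is handy for debugging. Once the agent works, you might prefer to run it in the background; a sketch, assuming the stock log4j.properties that ships with Flume (which defines a LOGFILE appender writing to flume.log):

nohup bin/flume-ng agent --conf-file conf/hdfs.properties -c conf/ --name LogAgent \
    -Dflume.root.logger=INFO,LOGFILE > /dev/null 2>&1 &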
4. Verification:
a. Open a second terminal;
b. In the monitored directory /tmp/logs, create a file named test.log:
vi test.log
# file contents:
test hello world
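Editing a file in place inside the spool directory works for a quick test, but the spooling directory source expects each file to be complete and immutable once it appears, and will raise an error if a file keeps changing after it has been picked up. A safer pattern is to write the file elsewhere and move it in:

# Write outside the watched directory, then move the finished file in,
# so Flume never sees a half-written file.
echo "test hello world" > /tmp/test.log
mv /tmp/test.log /tmp/logs/test.log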
c. After saving the file, check the output in the first terminal.
From the output we can see:
1. test.log has been parsed and delivered, and renamed to test.log.COMPLETED;
2. A file was created in HDFS at: hdfs://master:9000/data/logs/2017-03-13/18/flumeHdfs.1489399757638.tmp
3. The file flumeHdfs.1489399757638.tmp was then renamed to flumeHdfs.1489399757638 (the .tmp suffix is dropped once the file is rolled).
Next, log in to the master host and check the result in the HDFS WebUI,
or open a terminal on master and run the following from the hadoop installation directory:
bin/hadoop fs -ls -R /data/logs/2017-03-13/18
To view the file contents, run:
bin/hadoop fs -cat /data/logs/2017-03-13/18/flumeHdfs.1489399757638
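To double-check that the content arrived intact, you can also copy the file back out of HDFS and compare it against the completed source file (paths taken from this walkthrough):

# Fetch the rolled file and diff it against the original.
bin/hadoop fs -get /data/logs/2017-03-13/18/flumeHdfs.1489399757638 /tmp/flume-check.txt
diff /tmp/flume-check.txt /tmp/logs/test.log.COMPLETED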
OK, done!