Log Collection Framework Flume and Flume Installation and Deployment (a Distributed, Reliable, and Highly Available System for Massive Log Collection, Aggregation, and Transport)


Flume supports a large number of source and sink types; for details and the full list of source and sink components, see the official user guide (the getting-started guide on the Flume website):

http://flume.apache.org/FlumeUserGuide.html


 1: Overview and Introduction of Flume:

(1): Flume is a distributed, reliable, and highly available system for massive log collection, aggregation, and transport.
(2): Flume can collect source data in many forms, such as files and socket packets, and can write the collected data to many external storage systems, including HDFS, HBase, Hive, and Kafka.
(3): Typical collection requirements can be met with a simple Flume configuration.
(4): Flume also has good custom-extension capabilities for special scenarios, so it is suitable for most day-to-day data collection scenarios.

2: How Flume Works:

(1): The core role in a distributed Flume system is the agent; a Flume collection system is formed by connecting individual agents together.

(2): Each agent acts as a data courier and contains three components:
    a): Source: the collection source, which connects to the data source to obtain data. Its main job is to receive data sent by clients and pass it on to a channel. The relationship between sources and channels is many-to-many, although the usual setup is one source feeding multiple channels. Different sources are distinguished by name. Commonly used Flume sources include: Avro Source, Thrift Source, Exec Source, Kafka Source, and Netcat Source.
    b): Sink: the destination for collected data, used to pass data on to the next-level agent or to the final storage system. Its main job is to define how data is written out; normally a sink takes data from a channel and writes it to a file, to HDFS, or over the network. The relationship between channels and sinks is one-to-many, and sinks are distinguished by name. Commonly used Flume sinks include: hdfs sink, hive sink, file sink, hbase sink, avro sink, thrift sink, logger sink, and so on.
    c): Channel: the data transfer conduit inside an agent, which carries data from the source to the sink. Its main job is to provide a transfer channel offering both data transport and buffering/storage. The source puts data into the channel and the sink takes data out of it. Channels are distinguished by name. Commonly used Flume channels include: memory channel, jdbc channel, kafka channel, file channel, and so on.

Note: data moves from Source to Channel to Sink in the form of Events; an Event is the unit of the data flow.
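To make the Event idea concrete, here is roughly what the logger sink used later in this post prints for one line of text sent over telnet: a headers map plus a byte-array body, shown as hex and as printable characters. The exact formatting is an illustration from my own run and may vary slightly between Flume versions:

    Event: { headers:{} body: 68 65 6C 6C 6F 20 66 6C 75 6D 65 0D    hello flume. }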

 Below is a diagram of data collection with a single Flume agent:

 Diagram of multiple agents chained in series:

 3: Flume Installation and Deployment:

(1): Installing Flume is very simple; you only need to unpack the archive, provided a Hadoop environment is already in place:
  a): Upload the installation package to the node where the data source lives (upload process omitted).
  b): Then unpack it: tar -zxvf apache-flume-1.6.0-bin.tar.gz;

    [root@master package]# tar -zxvf apache-flume-1.6.0-bin.tar.gz -C /home/hadoop/
  c): Then go into the Flume directory and edit flume-env.sh under conf, setting JAVA_HOME in it. (Since the conf directory only ships flume-env.sh.template, I copied it to flume-env.sh and then edited JAVA_HOME.)

    [root@master conf]# cp flume-env.sh.template flume-env.sh

    [root@master conf]# vim flume-env.sh

    Then remove the leading # and set your own JAVA_HOME: export JAVA_HOME=/home/hadoop/jdk1.7.0_65
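At this point you can also sanity-check the unpacked installation; bin/flume-ng version prints the Flume build information (a quick hedged check, run from the Flume home directory):

    [root@master apache-flume-1.6.0-bin]# bin/flume-ng version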
(2): Write a collection plan matching your data collection requirements and describe it in a configuration file (the file name can be anything you like);
(3): Start a Flume agent on the appropriate node, pointing it at that collection-plan configuration file;

(4): You can first run the simplest possible example to verify that the environment works (create a new file under Flume's conf directory);

4: With installation and deployment done, you can configure a collection plan. Here is the simplest possible one: receive data from a network port and sink it to the logger. Create a collection configuration file named netcat-logger.conf with the following contents:

# example.conf: A single-node Flume configuration

# Name the components on this agent
# Give the three component types (sources, sinks, channels) names;
# these are logical labels.
# a1 is the name of the agent.
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source component: r1
# type is the concrete source implementation. Here it is netcat, which
# receives data from a network port (the netcat program on Linux is nc,
# worth learning). By contrast, type = spooldir watches a directory and
# collects whatever lands in it (used later in this post).
# bind binds to localhost on this machine; port 44444 is the port number.

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink component: k1
# type is the sink type; logger prints the data to the screen.
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory: describe and configure
# the channel component, here using an in-memory buffer.
# The channel type is memory.
# Data is delivered in batches, one event at a time. Channel parameters:
# capacity: the maximum number of events the channel can hold
#           (1000 means 1000 records).
# transactionCapacity: the maximum number of events taken from the source
#           or given to the sink per transaction.
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel: describe how source, channel,
# and sink are connected.
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Now edit this netcat-logger.conf file under Flume's conf directory:

[root@master conf]# vim netcat-logger.conf

 Start the agent to begin collecting data; the startup command is as follows:

bin/flume-ng agent -c conf -f conf/netcat-logger.conf -n a1  -Dflume.root.logger=INFO,console

  -c conf   specifies the directory containing Flume's own configuration files

  -f conf/netcat-logger.conf  specifies the collection plan we wrote

  -n a1  specifies the name of this agent

Startup command:
# Tell flume to start an agent.
# --conf conf specifies the configuration directory.
# conf/netcat-logger.conf specifies the collection-plan file (name it yourself).
# --name a1: the agent's name, i.e. this agent is named a1.
# -Dflume.root.logger=INFO,console is a parameter passed to log4j.
$ bin/flume-ng agent --conf conf --conf-file conf/netcat-logger.conf --name a1 -Dflume.root.logger=INFO,console

 A demo is shown below:

 The startup output looks like the following; the agent can be run in the foreground or in the background:

[root@master apache-flume-1.6.0-bin]# bin/flume-ng agent --conf conf --conf-file conf/netcat-logger.conf --name a1 -Dflume.root.logger=INFO,console
Info: Sourcing environment configuration script /home/hadoop/apache-flume-1.6.0-bin/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/hadoop/hadoop-2.4.1/bin/hadoop) for HDFS access
Info: Excluding /home/hadoop/hadoop-2.4.1/share/hadoop/common/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /home/hadoop/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Including Hive libraries found via () for Hive access
+ exec /home/hadoop/jdk1.7.0_65/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/home/hadoop/apache-flume-1.6.0-bin/conf:/home/hadoop/apache-flume-1.6.0-bin/lib/*:/home/hadoop/hadoop-2.4.1/etc/hadoop:...(the long list of Hadoop common/hdfs/yarn/mapreduce jars is trimmed here for readability)...:/home/hadoop/hadoop-2.4.1/contrib/capacity-scheduler/*.jar:/lib/*' -Djava.library.path=:/home/hadoop/hadoop-2.4.1/lib/native org.apache.flume.node.Application --conf-file conf/netcat-logger.conf --name a1
2017-12-12 19:59:37,108 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2017-12-12 19:59:37,130 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:conf/netcat-logger.conf
2017-12-12 19:59:37,142 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:931)] Added sinks: k1 Agent: a1
2017-12-12 19:59:37,143 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:k1
2017-12-12 19:59:37,143 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:k1
2017-12-12 19:59:37,157 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:141)] Post-validation flume configuration contains configuration for agents: [a1]
2017-12-12 19:59:37,158 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:145)] Creating channels
2017-12-12 19:59:37,166 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel c1 type memory
2017-12-12 19:59:37,172 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:200)] Created channel c1
2017-12-12 19:59:37,174 (conf-file-poller-0) [INFO - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:41)] Creating instance of source r1, type netcat
2017-12-12 19:59:37,189 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: k1, type: logger
2017-12-12 19:59:37,192 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:114)] Channel c1 connected to [r1, k1]
2017-12-12 19:59:37,200 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@1ce79b8 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
2017-12-12 19:59:37,210 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel c1
2017-12-12 19:59:37,371 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
2017-12-12 19:59:37,372 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: CHANNEL, name: c1 started
2017-12-12 19:59:37,376 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink k1
2017-12-12 19:59:37,376 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source r1
2017-12-12 19:59:37,377 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:150)] Source starting
2017-12-12 19:59:37,513 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:164)] Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:44444]
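The run above stays in the foreground, which is why the log scrolls in the terminal. To run the same agent in the background instead, one hedged option is nohup with output redirected to a file; the log file name here is my own choice:

    nohup bin/flume-ng agent --conf conf --conf-file conf/netcat-logger.conf --name a1 > flume-a1.log 2>&1 &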

 Now you can send data to this port and it will be printed, because the output here goes to the console:

 This acts as the data-producing source: [root@master hadoop]# telnet localhost 44444

[root@master hadoop]# telnet localhost 44444
bash: telnet: command not found

My machine does not have telnet installed, so install telnet first, as follows:

Step 1: Check whether the telnet-server rpm package is installed:
  [root@localhost ~]# rpm -qa telnet-server
  If nothing is printed, it is not installed. For security reasons telnet-server.rpm is not installed by default, whereas the telnet client is standard, i.e. the package below is installed by default.

Step 2: If it is not installed, install telnet-server:

   [root@localhost ~]# yum install telnet-server

Step 3: Check whether the telnet rpm package is installed:
  [root@localhost ~]# rpm -qa telnet
  telnet-0.17-47.el6_3.1.x86_64

Step 4: If it is not installed, install telnet:

  [root@localhost ~]# yum install telnet

Step 5: Restart the xinetd daemon:

  Since the telnet service is supervised by xinetd, after installing telnet-server you must restart xinetd for the telnet service to start:
  [root@localhost ~]# service xinetd restart

After completing the steps above you can run your command, as in my case:

  [root@master hadoop]# telnet localhost 44444
  Trying ::1...
  telnet: connect to address ::1: Connection refused
  Trying 127.0.0.1...
  Connected to localhost.
  Escape character is '^]'.

With the error above resolved, you can start testing: the telnet data source sends and Flume receives.

To test, first send data to the port the agent is listening on, so the agent has something to collect. On any machine that can reach the agent node, run: telnet localhost 44444

 

You can then see that Flume has received the data:

How do you exit telnet?

  First press Ctrl+] to drop to the telnet> prompt, then type quit at the telnet> prompt to exit. Remember not to add a semicolon after quit.
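Incidentally, if installing telnet is not possible, the nc program mentioned earlier (the Linux netcat client) can feed the netcat source just as well; a minimal sketch, assuming nc is installed:

  # send one test line; the netcat source replies "OK" and the logger
  # sink prints the event on the agent's console
  echo "hello flume" | nc localhost 44444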

 5: Example: Flume monitoring a directory:

Monitoring a directory


Step 1:
First, in Flume's conf directory, create a file named spool-logger.conf (e.g. vim spool-logger.conf)
and copy the following content into it:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
# Watch a directory: spoolDir specifies the directory; fileHeader controls
# whether to add a header recording the file each event came from
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /home/hadoop/flumespool
a1.sources.r1.fileHeader = true

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Step 2: Create the directory matching a1.sources.r1.spoolDir = /home/hadoop/flumespool. The directory must exist before startup, otherwise the agent fails with: java.lang.IllegalStateException: Directory does not exist: /home/hadoop/flumespool
[root@master conf]# mkdir /home/hadoop/flumespool

Step 3: Startup command:
bin/flume-ng agent -c ./conf -f ./conf/spool-logger.conf -n a1 -Dflume.root.logger=INFO,console
[root@master apache-flume-1.6.0-bin]# bin/flume-ng agent --conf conf --conf-file conf/spool-logger.conf --name a1 -Dflume.root.logger=INFO,console
Step 4: Test (see the sketch below):
Move files into /home/hadoop/flumespool (e.g. mv ./xxxFile /home/hadoop/flumespool), but do not create or write files in place inside it.
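A hedged way to run that test from the shell (the sample file name is made up; the key point is that the spooling directory source expects complete, immutable files to be moved in, and it renames each file it has processed with a .COMPLETED suffix):

  echo "a test log line" > /tmp/sample.log
  mv /tmp/sample.log /home/hadoop/flumespool/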

6: Example: collecting a directory into HDFS:

(1) Collection requirement: new files keep appearing under a particular directory on some server, and every new file must be collected into HDFS.
(2) Based on that requirement, define the following three elements:
  a): the collection source, i.e. the source: a monitored file directory, spooldir
  b): the sink target, i.e. the sink: the HDFS file system, hdfs sink
  c): the transfer conduit between source and sink, the channel: either a file channel or a memory channel
(3):
Channel parameters:

  capacity: the maximum number of events the channel can hold;

  transactionCapacity: the maximum number of events taken from the source or given to the sink per transaction;

  keep-alive: the time allowed for adding an event to, or removing it from, the channel;
Configuration file:

# Name the three components
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1

# Configure the source component
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /home/hadoop/logs/
agent1.sources.source1.fileHeader = false

# Configure an interceptor
agent1.sources.source1.interceptors = i1
agent1.sources.source1.interceptors.i1.type = host
agent1.sources.source1.interceptors.i1.hostHeader = hostname

# Configure the sink component
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://master:9000/weblog/flume-collection/%y-%m-%d/%H-%M
agent1.sinks.sink1.hdfs.filePrefix = access_log
agent1.sinks.sink1.hdfs.maxOpenFiles = 5000
agent1.sinks.sink1.hdfs.batchSize = 100
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text
agent1.sinks.sink1.hdfs.rollSize = 102400
agent1.sinks.sink1.hdfs.rollCount = 1000000
agent1.sinks.sink1.hdfs.rollInterval = 60
#agent1.sinks.sink1.hdfs.round = true
#agent1.sinks.sink1.hdfs.roundValue = 10
#agent1.sinks.sink1.hdfs.roundUnit = minute
agent1.sinks.sink1.hdfs.useLocalTimeStamp = true
# Use a channel which buffers events in memory
agent1.channels.channel1.type = memory
agent1.channels.channel1.keep-alive = 120
agent1.channels.channel1.capacity = 500000
agent1.channels.channel1.transactionCapacity = 600

# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
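To run this example, save the configuration and start the agent with the agent name defined in it (agent1). The file name spooldir-hdfs.conf below is my own choice, since the post does not name the file, and the hdfs dfs listing is just a quick hedged check that files are landing:

  bin/flume-ng agent -c conf -f conf/spooldir-hdfs.conf -n agent1 -Dflume.root.logger=INFO,console

  # in another shell, verify output is arriving in HDFS
  hdfs dfs -ls /weblog/flume-collection/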

 7: Example: collecting a file into HDFS:

(1): Collection requirement: for example, a business system writes its logs with log4j and the log content keeps growing; the data appended to the log file must be collected into HDFS in real time.

(2): Based on that requirement, define the following three elements:

  the collection source, i.e. the source: monitor file content updates with exec and 'tail -F file'

  the sink target, i.e. the sink: the HDFS file system, hdfs sink

  the transfer conduit between source and sink, the channel: either a file channel or a memory channel

Configuration file:

agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1

# Describe/configure tail -F source1
agent1.sources.source1.type = exec
agent1.sources.source1.command = tail -F /home/hadoop/logs/access_log
agent1.sources.source1.channels = channel1

# Configure host interceptor for the source
agent1.sources.source1.interceptors = i1
agent1.sources.source1.interceptors.i1.type = host
agent1.sources.source1.interceptors.i1.hostHeader = hostname

# Describe sink1
agent1.sinks.sink1.type = hdfs
#a1.sinks.k1.channel = c1
agent1.sinks.sink1.hdfs.path = hdfs://master:9000/weblog/flume-collection/%y-%m-%d/%H-%M
agent1.sinks.sink1.hdfs.filePrefix = access_log
agent1.sinks.sink1.hdfs.maxOpenFiles = 5000
agent1.sinks.sink1.hdfs.batchSize = 100
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text
agent1.sinks.sink1.hdfs.rollSize = 102400
agent1.sinks.sink1.hdfs.rollCount = 1000000
agent1.sinks.sink1.hdfs.rollInterval = 60
agent1.sinks.sink1.hdfs.round = true
agent1.sinks.sink1.hdfs.roundValue = 10
agent1.sinks.sink1.hdfs.roundUnit = minute
agent1.sinks.sink1.hdfs.useLocalTimeStamp = true

# Use a channel which buffers events in memory
agent1.channels.channel1.type = memory
agent1.channels.channel1.keep-alive = 120
agent1.channels.channel1.capacity = 500000
agent1.channels.channel1.transactionCapacity = 600

# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
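To exercise this example end to end, a hedged sketch follows; the append loop is only a stand-in for a real log4j-writing application, and tail-hdfs.conf is a file name I made up:

  # keep appending lines so that tail -F has fresh data to pick up
  while true; do echo "access $(date)" >> /home/hadoop/logs/access_log; sleep 1; done &

  bin/flume-ng agent -c conf -f conf/tail-hdfs.conf -n agent1 -Dflume.root.logger=INFO,console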

 To be continued......

