Flume: A Log Collection Framework
Introduction to Flume

Overview
Flume is a distributed, reliable, and highly available system for collecting, aggregating, and transporting massive volumes of log data.
Flume can ingest source data in many forms (files, socket packets, directories, Kafka, and more) and can deliver (sink) the collected data to many external storage systems, such as HDFS, HBase, Hive, and Kafka.
How It Works
The core role in a distributed Flume deployment is the agent; a Flume collection system is formed by connecting individual agents together.
Each agent acts as a data courier and contains three components:
- Source: the collection component; connects to the data source and acquires data
- Sink: the delivery component; passes data to the next agent or to the final storage system
- Channel: the transport component; buffers data on its way from source to sink
Collection System Structure
- Simple structure: a single agent
- Complex structure: multiple agents chained in series
Flume in Practice
Installation and Deployment

Step 1: Download, extract, and edit the configuration file
Installing Flume is simple: just extract the archive. The only prerequisite is an existing Hadoop environment.
# Upload the package to the node where the data source lives; here we install on the third machine
# Package: flume-ng-1.6.0-cdh5.14.0.tar.gz
tar -zxvf flume-ng-1.6.0-cdh5.14.0.tar.gz -C ../servers/
cd ../servers/apache-flume-1.6.0-cdh5.14.0-bin/conf/
cp flume-env.sh.template flume-env.sh
vim flume-env.sh
# Only the Java environment needs to be added
export JAVA_HOME=/export/servers/jdk1.8.0_141
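As a quick sanity check (an optional step, not part of the original walkthrough), the bundled launcher can report its own version; if JAVA_HOME is set correctly this prints the 1.6.0-cdh5.14.0 build info:

cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng version   # verifies the install and the JAVA_HOME setting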
Step 2: Develop the collection configuration
# Describe the collection plan in a configuration file, according to the collection requirements (the file name is arbitrary)
# Configuration for our network collection
# Create a new configuration file (the collection plan) under flume's conf directory
vim /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf/netcat-logger.conf

# Name the components of this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe and configure the source component r1
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.52.120
a1.sources.r1.port = 44444

# Describe and configure the sink component k1
a1.sinks.k1.type = logger

# Describe and configure the channel; a memory channel is used here
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Start the agent
Start the flume agent on the appropriate node, pointing it at the collection-plan configuration file.
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -c conf -f conf/netcat-logger.conf -n a1 -Dflume.root.logger=INFO,console
# -c conf                     directory holding flume's own configuration files
# -f conf/netcat-logger.conf  the collection plan to run
# -n a1                       the name of this agent
Install telnet to prepare for testing
Install the telnet client on node02 to simulate sending data.
yum -y install telnet
telnet node03 44444   # use telnet to simulate sending data
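If telnet is unavailable, netcat can play the same role (an alternative the original does not use; assumes the nc package is installed):

echo "hello flume" | nc node03 44444   # each line sent becomes one flume event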
Collection Case Studies

Collecting a Directory into HDFS
A particular directory on a server keeps producing new files; whenever a new file appears, it must be collected into HDFS.
According to the requirements, first define the following three key elements:
- Source component: monitors a directory (spooldir). Spooldir characteristics:
  - It watches a directory; whenever a new file appears there, the file's contents are collected
  - Once a file has been fully collected, the agent renames it with the suffix COMPLETED
  - Files with duplicate names must never appear in the watched directory
- Sink component: the HDFS file system (hdfs sink)
- Channel component: either a file channel or a memory channel
Develop the flume configuration
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
mkdir -p /export/servers/dirfile
vim spooldir.conf

# Name the agent's components
a1.sources=sr1
a1.sinks=sk1
a1.channels=scn1

# Configure the source
a1.sources.sr1.type=spooldir
a1.sources.sr1.spoolDir=/export/servers/dirfile
a1.sources.sr1.fileHeader=true

# Configure the sink
a1.sinks.sk1.type=hdfs
a1.sinks.sk1.channel=scn1
# HDFS directory path
a1.sinks.sk1.hdfs.path=hdfs://node01:8020/spooldir/files/%y-%m-%d/%H%M/
# Prefix for the files written to HDFS; flume's date escapes and %{host} may be used
a1.sinks.sk1.hdfs.filePrefix=events-
# Whether to roll over to a new directory when the trigger time arrives (true = yes)
a1.sinks.sk1.hdfs.round=true
# Roll the directory every <roundValue> units (0-24)
a1.sinks.sk1.hdfs.roundValue=10
# Time unit used when rolling directories
a1.sinks.sk1.hdfs.roundUnit=minute
# Seconds before an open HDFS file is closed; default 30; 0 disables time-based rolling
a1.sinks.sk1.hdfs.rollInterval=3
# Close the file once it exceeds this size in bytes; default 1024; 0 disables size-based rolling; 134217728 is 128 MB
a1.sinks.sk1.hdfs.rollSize=134217728
# Close the file after this many events are written; default 10; 0 disables count-based rolling
a1.sinks.sk1.hdfs.rollCount=0
# Batch size: how many events the HDFS sink takes from the channel at a time; default 100
a1.sinks.sk1.hdfs.batchSize=100
# Use the local timestamp
a1.sinks.sk1.hdfs.useLocalTimeStamp=true
# Output file type; the default is SequenceFile, while DataStream produces plain text
a1.sinks.sk1.hdfs.fileType=DataStream

# Configure the channel
a1.channels.scn1.type=memory
a1.channels.scn1.capacity=1000
a1.channels.scn1.transactionCapacity=100

# Run flume (from the flume home directory, so the relative paths resolve)
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -c ./conf/ -f ./conf/spooldir.conf -n a1 -Dflume.root.logger=INFO,console
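To exercise the flow, drop a file into the watched directory and look for it in HDFS (a smoke-test sketch, not in the original; the file name is arbitrary and the paths match the config above):

echo "spooldir test line" > /tmp/test.txt
mv /tmp/test.txt /export/servers/dirfile/   # once collected, the file is renamed test.txt.COMPLETED
hdfs dfs -ls /spooldir/files/               # a dated subdirectory containing events-* files should appear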
Collecting a File into HDFS
For example, a business system writes its logs with Log4j; the log file grows continuously, and the appended data must be collected into HDFS as it arrives.

According to the requirements, first define the following three key elements:
- Source: monitor file appends with exec 'tail -F file'
- Sink: the HDFS file system (hdfs sink)
- Channel between source and sink: either a file channel or a memory channel
Define the flume configuration file
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim tail-file.conf

agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1

# Describe/configure tail -F source1
agent1.sources.source1.type = exec
agent1.sources.source1.command = tail -F /export/servers/taillogs/access_log
agent1.sources.source1.channels = channel1
# Configure a host interceptor for the source (optional)
#agent1.sources.source1.interceptors = i1
#agent1.sources.source1.interceptors.i1.type = host
#agent1.sources.source1.interceptors.i1.hostHeader = hostname

# Describe sink1
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://node01:8020/weblog/flume-collection/%y-%m-%d/%H-%M
agent1.sinks.sink1.hdfs.filePrefix = access_log
agent1.sinks.sink1.hdfs.maxOpenFiles = 5000
agent1.sinks.sink1.hdfs.batchSize = 100
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text
agent1.sinks.sink1.hdfs.rollSize = 102400
agent1.sinks.sink1.hdfs.rollCount = 1000000
agent1.sinks.sink1.hdfs.rollInterval = 60
agent1.sinks.sink1.hdfs.round = true
agent1.sinks.sink1.hdfs.roundValue = 10
agent1.sinks.sink1.hdfs.roundUnit = minute
agent1.sinks.sink1.hdfs.useLocalTimeStamp = true

# Use a channel which buffers events in memory
agent1.channels.channel1.type = memory
agent1.channels.channel1.keep-alive = 120
agent1.channels.channel1.capacity = 500000
agent1.channels.channel1.transactionCapacity = 600

# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1

# Start Flume (from the flume home directory)
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -c conf -f conf/tail-file.conf -n agent1 -Dflume.root.logger=INFO,console

# Develop a shell script that periodically appends to the file
mkdir -p /export/servers/shells/
cd /export/servers/shells/
vim tail-file.sh

#!/bin/bash
while true
do
  date >> /export/servers/taillogs/access_log;
  sleep 0.5;
done

# Make sure the log directory exists, then run the script
mkdir -p /export/servers/taillogs
sh tail-file.sh
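After the script has been appending for a minute or so, the result can be verified against the hdfs.path configured above (a quick check, not in the original):

hdfs dfs -ls /weblog/flume-collection/   # dated subdirectories holding access_log.* files should appear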
Cascading Two Agents
The first agent collects data from a file and sends it over the network to a second agent; the second agent receives the data and saves it to HDFS.
Step 1: Install flume on node02
# On node03, copy the flume installation over to node02
cd /export/servers
scp -r apache-flume-1.6.0-cdh5.14.0-bin/ node02:$PWD
Step 2: Configure flume on node02
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim tail-avro-avro-logger.conf
##################
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /export/servers/taillogs/access_log
a1.sources.r1.channels = c1
# Describe the sink
## The avro sink acts as the data sender
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = 192.168.52.120
a1.sinks.k1.port = 4141
a1.sinks.k1.batch-size = 10
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Step 3: On node02, set up the script that writes data into the file
# Simply copy the script and log directory from node03 to node02
cd /export/servers
scp -r shells/ taillogs/ node02:$PWD
Step 4: Develop the flume configuration on node03
# Develop the flume configuration file on the node03 machine
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim avro-hdfs.conf   # configuration as follows
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
## The avro source acts as the receiving service
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 192.168.52.120
a1.sources.r1.port = 4141
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://node01:8020/avro/hdfs/%y-%m-%d/%H%M/
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.rollInterval = 3
a1.sinks.k1.hdfs.rollSize = 20
a1.sinks.k1.hdfs.rollCount = 5
a1.sinks.k1.hdfs.batchSize = 1
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Output file type; the default is SequenceFile, while DataStream produces plain text
a1.sinks.k1.hdfs.fileType = DataStream
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Step 5: Start in order
# Start the flume process on node03
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -c conf -f conf/avro-hdfs.conf -n a1 -Dflume.root.logger=INFO,console
# Start the flume process on node02
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/
bin/flume-ng agent -c conf -f conf/tail-avro-avro-logger.conf -n a1 -Dflume.root.logger=INFO,console
# Run the file-generating shell script on node02
cd /export/servers/shells
sh tail-file.sh
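With both agents and the script running, the cascade can be verified end to end against the hdfs.path configured on node03 (an optional check, not in the original):

hdfs dfs -ls /avro/hdfs/   # dated subdirectories with events-* files should appear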
More source and sink components
See: http://archive.cloudera.com/cdh5/cdh/5/flume-ng-1.6.0-cdh5.14.0/FlumeUserGuide.html
Highly Available Flume-NG Configuration: Failover

Role Assignment

Name        HOST     Role
Agent1      node01   Web Server
Collector1  node02   AgentMstr1
Collector2  node03   AgentMstr2
Install and configure flume on node01
# Run the following on node03
cd /export/servers
scp -r apache-flume-1.6.0-cdh5.14.0-bin/ node01:$PWD
scp -r shells/ taillogs/ node01:$PWD

# Configure the agent on node01
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim agent.conf   # configuration as follows

#agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

##set group
agent1.sinkgroups = g1

##set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /export/servers/taillogs/access_log

agent1.sources.r1.interceptors = i1 i2
agent1.sources.r1.interceptors.i1.type = static
agent1.sources.r1.interceptors.i1.key = Type
agent1.sources.r1.interceptors.i1.value = LOGIN
agent1.sources.r1.interceptors.i2.type = timestamp

## set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = node02
agent1.sinks.k1.port = 52020

## set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = node03
agent1.sinks.k2.port = 52020

##set sink group
agent1.sinkgroups.g1.sinks = k1 k2

##set failover
agent1.sinkgroups.g1.processor.type = failover
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
agent1.sinkgroups.g1.processor.maxpenalty = 10000
Configure the flume collectors on node02 and node03
# Modify the configuration file on node02
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim collector.conf

#set Agent name
a1.sources = r1
a1.channels = c1
a1.sinks = k1

##set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

## other node, nna to nns
a1.sources.r1.type = avro
a1.sources.r1.bind = node02
a1.sources.r1.port = 52020
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = node02
a1.sources.r1.channels = c1

##set sink to hdfs
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path= hdfs://node01:8020/flume/failover/
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=TEXT
a1.sinks.k1.hdfs.rollInterval=10
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.filePrefix=%Y-%m-%d

# Modify the configuration file on node03 (identical except for the bind host and the Collector value)
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim collector.conf

#set Agent name
a1.sources = r1
a1.channels = c1
a1.sinks = k1

##set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

## other node, nna to nns
a1.sources.r1.type = avro
a1.sources.r1.bind = node03
a1.sources.r1.port = 52020
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = node03
a1.sources.r1.channels = c1

##set sink to hdfs
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path= hdfs://node01:8020/flume/failover/
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=TEXT
a1.sinks.k1.hdfs.rollInterval=10
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.filePrefix=%Y-%m-%d
Start-up commands, in order
# Start flume on node03
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -n a1 -c conf -f conf/collector.conf -Dflume.root.logger=DEBUG,console

# Start flume on node02
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -n a1 -c conf -f conf/collector.conf -Dflume.root.logger=DEBUG,console

# Start flume on node01
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -n agent1 -c conf -f conf/agent.conf -Dflume.root.logger=DEBUG,console

# Start the file-generating script on node01
cd /export/servers/shells
sh tail-file.sh
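After starting the two collectors, it can help to confirm that each avro source is actually listening on its port before bringing up the upstream agent (an optional check; assumes netstat from net-tools is available):

netstat -nltp | grep 52020   # run on node02 and node03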
Failover Test
- When Collector1 goes down, Collector2 takes over as the active uploader
- When the Collector1 service is restarted, Collector1 regains upload priority (its failover priority, 10, is higher than Collector2's, 1)
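One way to drive the test is to kill the collector's flume process on node02 and confirm that uploads continue through node03 (a sketch; the grep pattern matches the command line used above, and <pid> is a placeholder to read from the ps output):

# On node02: find and kill the collector process
ps -ef | grep flume | grep collector.conf
kill -9 <pid>
# From any node: new files keep landing in HDFS, now written via node03
hdfs dfs -ls /flume/failover/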
Flume Load Balancing (load_balance)
Load balancing is the technique of spreading requests that one machine (or process) cannot handle alone. The Load balancing Sink Processor implements this in flume: Agent1 acts as a routing node that balances the events buffered in its Channel across multiple Sink components, each of which connects to an independent downstream Agent. An example configuration follows.
Here we simulate flume load balancing with three machines, planned as follows:
node01: collects the data and sends it to node02 and node03
node02: receives part of node01's data
node03: receives part of node01's data
Step 1: Develop the flume configuration on node01
# node01 configuration:
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim load_banlancer_client.conf

#agent name
a1.channels = c1
a1.sources = r1
a1.sinks = k1 k2

#set group
a1.sinkgroups = g1

#set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /export/servers/taillogs/access_log

# set sink1
a1.sinks.k1.channel = c1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = node02
a1.sinks.k1.port = 52020

# set sink2
a1.sinks.k2.channel = c1
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = node03
a1.sinks.k2.port = 52020

#set sink group
a1.sinkgroups.g1.sinks = k1 k2

#set load balancing
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin
a1.sinkgroups.g1.processor.selector.maxTimeOut = 10000
Step 2: Develop the flume configuration on node02
# node02 configuration:
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim load_banlancer_server.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = node02
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Step 3: Develop the flume configuration on node03
# node03 configuration:
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim load_banlancer_server.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = node03
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Step 4: Start the flume services
# Start the flume service on node03
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -n a1 -c conf -f conf/load_banlancer_server.conf -Dflume.root.logger=DEBUG,console

# Start the flume service on node02
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -n a1 -c conf -f conf/load_banlancer_server.conf -Dflume.root.logger=DEBUG,console

# Start the flume service on node01
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -n a1 -c conf -f conf/load_banlancer_client.conf -Dflume.root.logger=DEBUG,console

# Run the data-generating script on node01
cd /export/servers/shells
sh tail-file.sh
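With all three agents running, lines appended on node01 should surface alternately on node02's and node03's consoles, because both downstream sinks are logger sinks and the selector is round_robin. Appending a recognizable line makes this easy to watch (an illustrative command, not in the original):

echo "lb-test-$(date +%s)" >> /export/servers/taillogs/access_log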
Flume Case Study One
Collect access.log, nginx.log, and web.log from machines A and B, aggregate them on machine C, and then ship everything to HDFS. The required HDFS directory layout is:
/source/logs/access/20180101/**
/source/logs/nginx/20180101/**
/source/logs/web/20180101/**

Collection-side configuration
# Develop the flume configuration on node01 and node02
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim exec_source_avro_sink.conf

# Name the components on this agent
a1.sources = r1 r2 r3
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /export/servers/taillogs/access.log
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
## The static interceptor inserts a user-defined key-value pair into the header of each collected event
a1.sources.r1.interceptors.i1.key = type
a1.sources.r1.interceptors.i1.value = access

a1.sources.r2.type = exec
a1.sources.r2.command = tail -F /export/servers/taillogs/nginx.log
a1.sources.r2.interceptors = i2
a1.sources.r2.interceptors.i2.type = static
a1.sources.r2.interceptors.i2.key = type
a1.sources.r2.interceptors.i2.value = nginx

a1.sources.r3.type = exec
a1.sources.r3.command = tail -F /export/servers/taillogs/web.log
a1.sources.r3.interceptors = i3
a1.sources.r3.interceptors.i3.type = static
a1.sources.r3.interceptors.i3.key = type
a1.sources.r3.interceptors.i3.value = web

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = node03
a1.sinks.k1.port = 41414

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 10000

# Bind the sources and sink to the channel
a1.sources.r1.channels = c1
a1.sources.r2.channels = c1
a1.sources.r3.channels = c1
a1.sinks.k1.channel = c1
Server-side configuration
# Develop the flume configuration on node03
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin/conf
vim avro_source_hdfs_sink.conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Define the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 192.168.52.120
a1.sources.r1.port = 41414
# Add a timestamp interceptor
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder

# Define the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 10000

# Define the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://192.168.52.100:8020/source/logs/%{type}/%Y%m%d
a1.sinks.k1.hdfs.filePrefix = events
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
# Use the local timestamp
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Do not roll files by event count
a1.sinks.k1.hdfs.rollCount = 0
# Roll files by time (seconds)
a1.sinks.k1.hdfs.rollInterval = 30
# Roll files by size (bytes)
a1.sinks.k1.hdfs.rollSize = 10485760
# Number of events written to HDFS per batch
a1.sinks.k1.hdfs.batchSize = 10000
# Number of threads flume uses for HDFS operations (creating, writing, etc.)
a1.sinks.k1.hdfs.threadsPoolSize = 10
# Timeout for HDFS operations
a1.sinks.k1.hdfs.callTimeout = 30000

# Wire up source, channel, and sink
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Collection-side file-generation script
cd /export/servers/shells
vim server.sh

#!/bin/bash
while true
do
  date >> /export/servers/taillogs/access.log;
  date >> /export/servers/taillogs/web.log;
  date >> /export/servers/taillogs/nginx.log;
  sleep 0.5;
done
Start the services, in order
# Start flume on node03 to collect the data
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -c conf -f conf/avro_source_hdfs_sink.conf -n a1 -Dflume.root.logger=DEBUG,console

# Start flume on node01 and node02 to watch the log files
cd /export/servers/apache-flume-1.6.0-cdh5.14.0-bin
bin/flume-ng agent -c conf -f conf/exec_source_avro_sink.conf -n a1 -Dflume.root.logger=DEBUG,console

# Start the file-generation script on node01 and node02
cd /export/servers/shells
sh server.sh
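Once data is flowing, the required directory layout can be confirmed by listing the target tree (the date directory will reflect the current day rather than the 20180101 of the example):

hdfs dfs -ls -R /source/logs/   # expect access/, nginx/, and web/ subtrees with dated subdirectories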