1. Case Scenario
Two log servers, A and B, produce logs in real time; the main log files are access.log, nginx.log, and web.log.
Requirement:
Collect the access.log, nginx.log, and web.log files from machines A and B, aggregate them on machine C, and then load them into HDFS in a unified way.
The required directory layout in HDFS is:
/source/logs/access/20160101/**
/source/logs/nginx/20160101/**
/source/logs/web/20160101/**
2. Scenario Analysis
Servers A and B each run a Flume agent that tails the three local log files with exec sources and forwards the events to server C over Avro. A static interceptor on each source writes the log type (access, nginx or web) into the event header under the key "type".
3. Data Flow Analysis
Server C runs a single Flume agent whose avro source receives the events from A and B and whose HDFS sink writes them to HDFS. Because the sink path contains the escape sequences %{type} and %Y%m%d, every event is routed into a per-type, per-day directory, which produces exactly the layout required above.
4. Implementation
The IP address of server A is 192.168.200.102
The IP address of server B is 192.168.200.103
The IP address of server C is 192.168.200.101
① On servers A and B, create a configuration file named exec_source_avro_sink.conf under $FLUME_HOME/conf with the following content:

# Name the components on this agent
a1.sources = r1 r2 r3
a1.sinks = k1
a1.channels = c1

# Describe/configure the sources
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /root/data/access.log
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
## The static interceptor inserts a user-defined key-value pair into the header of every collected event
a1.sources.r1.interceptors.i1.key = type
a1.sources.r1.interceptors.i1.value = access

a1.sources.r2.type = exec
a1.sources.r2.command = tail -F /root/data/nginx.log
a1.sources.r2.interceptors = i2
a1.sources.r2.interceptors.i2.type = static
a1.sources.r2.interceptors.i2.key = type
a1.sources.r2.interceptors.i2.value = nginx

a1.sources.r3.type = exec
a1.sources.r3.command = tail -F /root/data/web.log
a1.sources.r3.interceptors = i3
a1.sources.r3.interceptors.i3.type = static
a1.sources.r3.interceptors.i3.key = type
a1.sources.r3.interceptors.i3.value = web

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.200.101
a1.sinks.k1.port = 41414

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 10000

# Bind the sources and the sink to the channel
a1.sources.r1.channels = c1
a1.sources.r2.channels = c1
a1.sources.r3.channels = c1
a1.sinks.k1.channel = c1
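Each exec source tails a single file, and its static interceptor stamps every event header with type=access, nginx or web; server C later uses this header to build the HDFS path. As a small, optional preparation sketch (an assumption of this write-up: on a fresh machine the tailed files may not exist yet; the paths are taken from the exec source commands above):

# Optional preparation on servers A and B: make sure the tailed files exist
mkdir -p /root/data
touch /root/data/access.log /root/data/nginx.log /root/data/web.log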
② On server C, create a configuration file named avro_source_hdfs_sink.conf under $FLUME_HOME/conf with the following content:

# Define the agent name and the names of the source, channel and sink
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Define the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414
# Add a timestamp interceptor
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder

# Define the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 10000

# Define the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://192.168.200.101:9000/source/logs/%{type}/%Y%m%d
a1.sinks.k1.hdfs.filePrefix = events
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
# Use the local time for the time-based escape sequences
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Do not roll files by event count
a1.sinks.k1.hdfs.rollCount = 0
# Roll files by time (seconds)
a1.sinks.k1.hdfs.rollInterval = 30
# Roll files by size (bytes)
a1.sinks.k1.hdfs.rollSize = 10485760
# Number of events written to HDFS per batch
a1.sinks.k1.hdfs.batchSize = 10000
# Number of threads Flume uses for HDFS operations (create, write, etc.)
a1.sinks.k1.hdfs.threadsPoolSize = 10
# Timeout (ms) for HDFS operations
a1.sinks.k1.hdfs.callTimeout = 30000

# Wire the source, channel and sink together
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
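Before starting the agents, it is worth confirming that server C can reach HDFS at the NameNode address used in hdfs.path, and that the Hadoop client libraries are available to Flume (the HDFS sink needs them on its classpath). A minimal pre-check sketch on server C, assuming the Hadoop client is installed and on PATH:

# Confirm the NameNode used in hdfs.path is reachable from server C
hdfs dfs -ls hdfs://192.168.200.101:9000/
# Optionally pre-create the base directory (the HDFS sink also creates missing directories on its own)
hdfs dfs -mkdir -p /source/logs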
③ 配置完成之后,在服務器A和B上的/root/data有數據文件access.log、nginx.log、web.log。先啟動服務器C上的flume,啟動命令
在flume安裝目錄下執行 :
bin/flume-ng agent -c conf -f conf/avro_source_hdfs_sink.conf -name a1 -Dflume.root.logger=DEBUG,console
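Before starting A and B, you can optionally confirm that the avro source on server C is listening on port 41414 (a quick sanity check, assuming netstat is available; ss -tlnp works as well):

# On server C: the avro source should be listening on 41414
netstat -tlnp | grep 41414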
Then start the Flume agents on servers A and B. Start command:
Run from the Flume installation directory:
bin/flume-ng agent -c conf -f conf/exec_source_avro_sink.conf -name a1 -Dflume.root.logger=DEBUG,console
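Once all three agents are running, a simple end-to-end check is to append a few test lines on server A or B and, after the 30-second roll interval configured above, look for the resulting files in the per-type, per-day directories. A hypothetical test sketch (run the hdfs commands on a machine with the Hadoop client):

# On server A or B: append one test line to each tailed file
echo "test access $(date)" >> /root/data/access.log
echo "test nginx $(date)" >> /root/data/nginx.log
echo "test web $(date)" >> /root/data/web.log

# After about 30 seconds, the events should appear in the per-type, per-day directories
hdfs dfs -ls /source/logs/access/$(date +%Y%m%d)
hdfs dfs -ls /source/logs/nginx/$(date +%Y%m%d)
hdfs dfs -ls /source/logs/web/$(date +%Y%m%d)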
5. Implementation Screenshots