Flume: A Distributed Log Collection Framework

1. Business Context Analysis

  • WebServer/ApplicationServer instances are scattered across many machines

  • We want to run statistical analysis on the Hadoop big-data platform

  • How do we get the logs onto the Hadoop platform?

  • Candidate solutions and their problems

  • How do we move our data from those servers onto Hadoop?

    1. Shell scripts: cp the files to a machine in the Hadoop cluster, then hdfs dfs -put .... (this leaves fault tolerance, load balancing, timeliness, and compression unsolved; see the sketch after this list)
    2. Flume: move the logs from A --> B
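
    For reference, a hand-rolled version of option 1 looks roughly like the sketch below ("hadoop-node" and all paths are hypothetical examples, not values from this document); every concern listed above must be re-implemented by hand:

    # naive-collect.sh: a sketch of the manual cp/put approach (illustrative only)
    # the hostname "hadoop-node" and the paths are made-up examples
    scp /var/log/app/access.log hadoop-node:/tmp/access.log   # copy the log onto a cluster machine
    ssh hadoop-node 'hdfs dfs -put /tmp/access.log /logs/'    # push it into HDFS
    # still unsolved: fault tolerance, load balancing, timeliness, compression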

2. Flume Overview

Flume, maintained by the Apache Software Foundation, is "a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data."

  • Flume design goals

    1. Reliability: must be highly reliable
    2. Scalability: modules can be extended
    3. Manageability: agents are easy to manage
  • Comparison with similar products

    1. Flume: Cloudera/Apache, written in Java.
    2. Logstash: part of the ELK stack (Elasticsearch, Logstash, Kibana).
    3. Scribe: Facebook, written in C/C++; weak load balancing, no longer maintained.
    4. Chukwa: Yahoo/Apache, written in Java; weak load balancing, no longer maintained.
    5. Fluentd: similar to Flume, written in Ruby.
  • Flume history

    1. Cloudera released version 0.9.2, known as Flume-OG
    2. In 2011, the FLUME-728 refactoring became a major milestone (Flume-NG) and the project was donated to the Apache community
    3. July 2012: version 1.0
    4. May 2015: version 1.6
    5. ~ version 1.7

3. Flume Architecture and Core Components

Flume has three core components (wired together as in the skeleton below):

  • Source: collection; defines where the data comes from (Avro, Thrift, Spooling Directory, Kafka, Exec)
  • Channel: aggregation; buffers the data in transit (Memory, File, and Kafka are the most commonly used)
  • Sink: delivery; writes the data to a destination (HDFS, Hive, Logger, Avro, Thrift, File, ES, HBase, Kafka, etc.)
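
An agent's components are declared in a properties file whose keys follow the pattern <agent>.sources|channels|sinks.<name>.<property>. A minimal skeleton of that shape (the angle-bracket names are placeholders, not real values; concrete examples follow in section 5):

# generic shape of an agent definition; <...> names are placeholders
<agent>.sources = <src>
<agent>.channels = <ch>
<agent>.sinks = <snk>

<agent>.sources.<src>.type = ...    # e.g. netcat, exec, avro
<agent>.channels.<ch>.type = ...    # e.g. memory, file
<agent>.sinks.<snk>.type = ...      # e.g. logger, hdfs, avro

# a source may feed several channels ("channels", plural);
# a sink drains exactly one ("channel", singular)
<agent>.sources.<src>.channels = <ch>
<agent>.sinks.<snk>.channel = <ch>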

4. Flume Environment Setup

  • Prerequisites
    • Java Runtime Environment - Java 1.8 or later
    • Memory - sufficient memory for the configured sources, channels, and sinks
    • Disk Space - sufficient disk space for the configured channels and sinks
    • Directory Permissions - read/write permissions on the directories used by the agent
  • 1. Install the JDK (download, extract, install, configure environment variables)
  • 2. Install Flume (download, extract, install, configure environment variables; verify with flume-ng version); see the sample session after this list
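
A typical install session looks like the following sketch (the version number, download URL, and install directory are assumptions; adjust to your environment):

    # download and unpack Flume (URL and version are examples)
    wget https://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz
    tar -zxvf apache-flume-1.6.0-bin.tar.gz -C ~/app

    # add environment variables, e.g. in ~/.bash_profile
    export FLUME_HOME=~/app/apache-flume-1.6.0-bin
    export PATH=$FLUME_HOME/bin:$PATH
    source ~/.bash_profile

    # verify the installation
    flume-ng version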

5. Flume in Practice

  • Requirement 1: collect data from a specified network port and print it to the console

    • flume-conf.properties
      • A) configure the source
      • B) configure the channel
      • C) configure the sink
      • D) wire the three components together
    # example.conf: A single-node Flume configuration
    
    # a1: the agent name
    # r1: the source name
    # k1: the sink name
    # c1: the channel name
    
    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1
    
    # Describe/configure the source
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444
    
    # Describe the sink
    a1.sinks.k1.type = logger
    
    # Use a channel which buffers events in memory
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100
    
    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
    
    • Start the agent
    # generic form
    flume-ng agent \
    --name $agent_name \
    --conf conf \
    --conf-file conf/flume-conf.properties \
    -Dflume.root.logger=INFO,console
    
    # concrete command for this example
    flume-ng agent \
    --name a1 \
    --conf $FLUME_HOME/conf \
    --conf-file $FLUME_HOME/conf/example.conf \
    -Dflume.root.logger=INFO,console
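
    Once the agent is running, open a second terminal and send test events to the netcat source; each line you type becomes one Flume event printed on the agent console:

    # in a second terminal (nc localhost 44444 also works)
    telnet localhost 44444
    # type a line such as "hello"; the source replies "OK"
    # and the agent console logs the event body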
    
  • Requirement 2: monitor a file and print newly appended data to the console in real time

    • 1. Agent selection: exec source + memory channel + logger sink
    • 2. Configuration file
    # exec-memory-logger.conf: A single-node Flume configuration
    
    # a1: the agent name
    # r1: the source name
    # k1: the sink name
    # c1: the channel name
    
    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1
    
    # Describe/configure the source
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /home/k.o/data/data.log
    a1.sources.r1.shell = /bin/sh -c
    
    # Describe the sink
    a1.sinks.k1.type = logger
    
    # Use a channel which buffers events in memory
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100
    
    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
    
    • Start the agent
    # generic form
    flume-ng agent \
    --name $agent_name \
    --conf conf \
    --conf-file conf/flume-conf.properties \
    -Dflume.root.logger=INFO,console
    
    # concrete command for this example
    flume-ng agent \
    --name a1 \
    --conf $FLUME_HOME/conf \
    --conf-file $FLUME_HOME/conf/exec-memory-logger.conf \
    -Dflume.root.logger=INFO,console
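
    To test, append lines to the monitored file from another terminal; tail -F picks them up and the agent console prints them:

    echo "hello flume" >> /home/k.o/data/data.log
    echo "hello exec source" >> /home/k.o/data/data.log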
    
  • Requirement 3: collect logs from server A to server B in real time

  • Technology selection (the two config files follow):
    1. machine A: exec source + memory channel + avro sink
    2. machine B: avro source + memory channel + logger sink

# exec-memory-avro.conf: A single-node Flume configuration

# exec-memory-avro: the agent name
# exec-source: the source name
# avro-sink: the sink name
# memory-channel: the channel name

# Name the components on this agent
exec-memory-avro.sources = exec-source
exec-memory-avro.sinks = avro-sink
exec-memory-avro.channels = memory-channel

# Describe/configure the source
exec-memory-avro.sources.exec-source.type = exec
exec-memory-avro.sources.exec-source.command = tail -F /home/k.o/data/data.log
exec-memory-avro.sources.exec-source.shell = /bin/sh -c

# Describe the sink
exec-memory-avro.sinks.avro-sink.type = avro
exec-memory-avro.sinks.avro-sink.hostname = localhost
exec-memory-avro.sinks.avro-sink.port = 44444

# Use a channel which buffers events in memory
exec-memory-avro.channels.memory-channel.type = memory
exec-memory-avro.channels.memory-channel.capacity = 1000
exec-memory-avro.channels.memory-channel.transactionCapacity = 100

# Bind the source and sink to the channel
exec-memory-avro.sources.exec-source.channels = memory-channel
exec-memory-avro.sinks.avro-sink.channel = memory-channel

# avro-memory-logger.conf: A single-node Flume configuration

# avro-memory-logger: the agent name
# avro-source: the source name
# logger-sink: the sink name
# memory-channel: the channel name

# Name the components on this agent
avro-memory-logger.sources = avro-source
avro-memory-logger.sinks = logger-sink
avro-memory-logger.channels = memory-channel

# Describe/configure the source
avro-memory-logger.sources.avro-source.type = avro
avro-memory-logger.sources.avro-source.bind = localhost
avro-memory-logger.sources.avro-source.port = 44444

# Describe the sink
avro-memory-logger.sinks.logger-sink.type = logger

# Use a channel which buffers events in memory
avro-memory-logger.channels.memory-channel.type = memory
avro-memory-logger.channels.memory-channel.capacity = 1000
avro-memory-logger.channels.memory-channel.transactionCapacity = 100

# Bind the source and sink to the channel
avro-memory-logger.sources.avro-source.channels = memory-channel
avro-memory-logger.sinks.logger-sink.channel = memory-channel

  • Start the agents
# start avro-memory-logger first, so the avro source is already listening when the sender starts
flume-ng agent \
--name avro-memory-logger \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/avro-memory-logger.conf \
-Dflume.root.logger=INFO,console

# then start exec-memory-avro
flume-ng agent \
--name exec-memory-avro \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/exec-memory-avro.conf \
-Dflume.root.logger=INFO,console
  • Log collection flow
    1. Machine A monitors a file: as users visit the site, their behaviour logs are appended to access.log
    2. The avro sink forwards each newly produced log line to the hostname and port that the avro source listens on
    3. The logger sink attached to the avro source prints the collected logs to the console (a quick end-to-end test follows)
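
To verify the chain end to end, append a line to the monitored file on machine A (both agents use localhost in the configs above, so they can run on one host); it should show up on machine B's console after a short delay:

echo "hello avro chain" >> /home/k.o/data/data.log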

