Using Filebeat Instead of Logstash to Collect Logs


1 Introduction

    Filebeat is a lightweight, single-purpose log shipper. It is intended for collecting logs on servers that do not have Java installed, and it can forward the collected logs to Logstash, Elasticsearch, Redis, and similar destinations for further processing.

    Pipeline: Filebeat collects logs and sends them to Logstash  ===> Logstash receives the logs and writes them to Redis or Kafka  ===> Logstash reads from Redis or Kafka and writes the logs into ELK (Elasticsearch)

2 Collecting Logs with Filebeat

2.1.1 Install Filebeat

Download: https://artifacts.elastic.co/downloads/beats/filebeat/

#Install the RPM package
yum -y install filebeat-5.6.5-x86_64.rpm
#Edit the configuration file
vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log                           #collect system logs
  paths:                                    #log collection paths
    - /var/log/*.log
    - /var/log/messages
  exclude_lines: ["^DBG"]                   #lines starting with this pattern are not collected
  #include_lines: ["^ERR", "^WARN"]         #only collect lines starting with these patterns
  exclude_files: [".gz$"]                   #files ending in .gz are not collected
  document_type: "system-log-dev-filebeat"  #add a type for this input

#Write the collected logs to a file (for testing only)
output.file:
  path: "/tmp"
  filename: "filebeat.txt"

- input_type: log               #collect Nginx logs
  paths:
    - /var/log/nginx/access_json.log   #log collection path
  exclude_lines: ["^DBG"]
  exclude_files: [".gz$"]
  document_type: "nginx-log-dev-filebeat"  #define the type


#Send the collected logs to Logstash
output.logstash:
  hosts: ["192.168.10.167:5400"]   #Logstash server address; multiple hosts can be listed
  enabled: true                    #enable output to Logstash (enabled by default)
  worker: 1                        #number of worker processes
  compression_level: 3             #compression level
  #loadbalance: true               #enable load balancing when several hosts are configured
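
Before restarting the service, the edited file can be validated with Filebeat's own config test; the flag below follows the Filebeat 5.x syntax (newer releases use "filebeat test config" instead):

#Validate filebeat.yml before restarting (Filebeat 5.x syntax; adjust for your version)
filebeat -configtest -e -c /etc/filebeat/filebeat.yml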

2.1.2 Restart and Verify

[root@localhost tmp]# systemctl restart filebeat.service 
[root@localhost tmp]# ls filebeat.txt
filebeat.txt
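
ls only confirms that the output file exists; to check that events are actually being written, the first record can be inspected (one JSON document per line; the exact field names depend on the Filebeat version):

#Look at the first event written by the file output
head -n 1 /tmp/filebeat.txt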
#Existing Kafka topics before the new pipeline is configured (the Filebeat topics are not there yet)
[root@DNS-Server tools]# /tools/kafka/bin/kafka-topics.sh --list  --zookeeper 192.168.10.10:2181,192.168.10.167:2181,192.168.10.171:2181
__consumer_offsets
nginx-access-kafkaceshi
nginx-accesslog-kafka-test

3 Write to Kafka and Verify

[root@DNS-Server ~]# cat /etc/logstash/conf.d/filebeat.conf 
input {
    beats {
      port => "5400"                               #port that Filebeat sends to
      codec => "json"
    }
}
output {
  if [type] == "system-log-dev-filebeat" {         #type defined in the Filebeat config
    kafka {
      bootstrap_servers => "192.168.10.10:9092"
      topic_id => "system-log-filebe-dev"          #Kafka topic to write to
      codec => "json"
    }
  }

  if [type] == "nginx-log-dev-filebeat" {
    kafka {
      bootstrap_servers => "192.168.10.10:9092"
      topic_id => "nginx-log-filebe-dev"
      codec => "json"
    }
  }
}

[root@DNS-Server ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK
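
The warning about logstash.yml can be avoided by pointing Logstash at its settings directory, as the warning message itself suggests:

#Run the config test with the settings directory specified
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/filebeat.conf -t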


[root@DNS-Server ~]# /tools/kafka/bin/kafka-topics.sh --list  --zookeeper 192.168.10.10:2181,192.168.10.167:2181,192.168.10.171:2181
__consumer_offsets
nginx-access-kafkaceshi
nginx-accesslog-kafka-test
nginx-log-filebe-dev
system-log-filebe-dev
[root@DNS-Server ~]# systemctl restart logstash.service
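
Listing the topics only proves that they were created; to confirm that events are flowing through them, a console consumer can be attached to one of the topics (the --bootstrap-server option below assumes a Kafka version that supports the new consumer):

#Read a few messages from the system-log topic as a sanity check
/tools/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.10.10:9092 --topic system-log-filebe-dev --from-beginning --max-messages 5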

4 Logstash Writes into ELK

4.1.1 Write the Configuration File, Verify, and Restart

[root@DNS-Server ~]# cat /etc/logstash/conf.d/filebeat_elk.conf 
input {
    kafka {
      bootstrap_servers => "192.168.10.10:9092"
      topics => "system-log-filebe-dev"                    #kafka的主題
      group_id => "system-log-filebeat"
      codec => "json"
      consumer_threads => 1
      decorate_events => true
    } 
    kafka {
      bootstrap_servers => "192.168.10.10:9092"
      topics => "nginx-log-filebe-dev"                    
      group_id => "nginx-log-filebeat"
      codec => "json"
      consumer_threads => 1
      decorate_events => true
    }

}

output {
  if [type] == "system-log-dev-filebeat"{
    elasticsearch {
      hosts => ["192.168.10.10:9200"]
      index=> "systemlog-filebeat-dev-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "nginx-log-dev-filebeat"{                     #filebeat定義的type類型
    elasticsearch {
      hosts => ["192.168.10.10:9200"]
      index=> "logstash-nginxlog-filebeatdev-%{+YYYY.MM.dd}"
    }
  }

}
[root@DNS-Server ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat_elk.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK
[root@DNS-Server ~]# systemctl restart logstash.service

4.1.2 Verify with elasticsearch-head

If the whole pipeline works, indices matching systemlog-filebeat-dev-%{+YYYY.MM.dd} and logstash-nginxlog-filebeatdev-%{+YYYY.MM.dd} should now appear in elasticsearch-head.
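
If elasticsearch-head is not available, the same check can be done from the command line with the _cat API:

#List the indices created by the pipeline
curl -s 'http://192.168.10.10:9200/_cat/indices?v' | grep filebeat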

