Filebeat layered architecture and configuration/deployment -- good references: Elasticsearch performance tuning + Filebeat configuration


1. Filebeat component architecture (shows the layered design)

2. Workflow: each harvester reads the new content of one log file and sends the new log data to the spooler (a background handler), which aggregates the events and sends the aggregated data to the output you have configured for Filebeat.

Reference: https://blog.csdn.net/gamer_gyt/article/details/52688636
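The harvester → spooler → output workflow above can be sketched roughly as follows. This is a toy Python illustration under my own assumptions, not Filebeat's actual implementation; names like `harvest` and `Spooler` are hypothetical:

```python
# Toy sketch of the Filebeat workflow (illustration only, not real Filebeat
# code): a "harvester" reads new lines appended to one log file, and a
# "spooler" aggregates events before flushing them to the configured output.

import os
import tempfile

def harvest(path, offset):
    """Read lines appended to `path` since `offset`; return (events, new_offset)."""
    with open(path, "r") as f:
        f.seek(offset)
        events = f.read().splitlines()
        return events, f.tell()

class Spooler:
    def __init__(self, flush_size, output):
        self.buf = []
        self.flush_size = flush_size
        self.output = output              # e.g. an Elasticsearch/Logstash sender

    def publish(self, events):
        self.buf.extend(events)
        if len(self.buf) >= self.flush_size:
            self.output(list(self.buf))   # ship the aggregated batch
            self.buf.clear()

sent = []
spool = Spooler(flush_size=2, output=sent.append)

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("line1\nline2\nline3\n")

events, offset = harvest(path, 0)   # harvester picks up the new lines
spool.publish(events)               # spooler batches and flushes them
os.remove(path)
print(sent)  # [['line1', 'line2', 'line3']]
```

On a second call, `harvest(path, offset)` would pick up only lines appended after `offset`, which mirrors how Filebeat tracks read positions per file.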

3. Installation and configuration

 tar xvf filebeat-6.4.2-linux-x86_64.tar.gz
 cp /usr/local/src/filebeat-6.4.2-linux-x86_64/filebeat.yml /usr/local/src/filebeat-6.4.2-linux-x86_64/filebeat.yml.default
 cd /usr/local/src/filebeat-6.4.2-linux-x86_64/
 [root@VM_0_6_centos filebeat-6.4.2-linux-x86_64]# cat filebeat.yml
filebeat.inputs:
- type: log

  enabled: true

  paths:
    - /tmp/messages
  fields_under_root: true

filebeat.config.modules:

  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.elasticsearch:
  hosts: ["10.0.0.92:9200"]

4. Troubleshooting

The Filebeat configuration check reports the error "setup.template.name and setup.template.pattern have to be set if index name is modified".

Solution: the error message is explicit: once you customize the index name, you must also set setup.template.name and setup.template.pattern. But you may configure both and still hit the same error. Here is the key point: these two settings must start at the top level (column 0) of filebeat.yml; they must NOT be indented to the same level as index under output.elasticsearch. This mistake is very easy to make, so watch out. The correct form is below.
(If the index name is left at its default, neither setting needs to be configured.)
Source: https://blog.csdn.net/yk20091201/article/details/90756738
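To see why the indentation matters, here is a small Python sketch (my illustration, not Filebeat code). In YAML, indenting setup.template.name under output.elasticsearch makes it the nested key output.elasticsearch.setup.template.name, which Filebeat never looks at; only the flush-left, top-level key is read:

```python
# Illustration of the YAML indentation pitfall (not Filebeat code):
# a key indented under output.elasticsearch is a NESTED key, not a
# top-level setting.

wrong = """\
output.elasticsearch:
  hosts: ["192.168.0.81:9200"]
  index: "filebeat-testindex-%{+yyyy.MM.dd}"
  setup.template.name: "filebeattest"
"""

right = """\
output.elasticsearch:
  hosts: ["192.168.0.81:9200"]
  index: "filebeat-testindex-%{+yyyy.MM.dd}"
setup.template.name: "filebeattest"
"""

def top_level_keys(yaml_text):
    """Return keys that start at column 0 (a rough, flat YAML scan)."""
    keys = []
    for line in yaml_text.splitlines():
        if line and not line[0].isspace() and ":" in line:
            keys.append(line.split(":", 1)[0])
    return keys

print(top_level_keys(wrong))  # ['output.elasticsearch']
print(top_level_keys(right))  # ['output.elasticsearch', 'setup.template.name']
```

In the "wrong" version, Filebeat sees no top-level setup.template.name at all, which is exactly why the error persists even though the setting appears in the file.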

Someone else's configuration file:

filebeat.inputs:
- type: log
 
  enabled: true
  paths:
    - /usr/local/analyzer/test.log
  json.keys_under_root: true
  json.add_error_key: true
  json.overwrite_keys: true
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.0.81:9200"]
  index: "filebeat-testindex-%{+yyyy.MM.dd}"
setup.template.name: "filebeattest"
setup.template.pattern: "filebeattest-*"
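As an aside on the json.* options in this config, the following Python sketch (my illustration, not Filebeat source code) shows the effect of json.keys_under_root: with it off, the decoded fields sit under a "json" key; with it on, they are merged into the root of the event (and overwrite_keys: true lets them replace Filebeat's own fields of the same name):

```python
# Illustration (assumption, not Filebeat source) of json.keys_under_root.

import json

line = '{"level": "INFO", "msg": "coupon issued"}'
decoded = json.loads(line)

event_nested = {"message": line, "json": decoded}   # keys_under_root: false
event_root = {"message": line, **decoded}           # keys_under_root: true

print(sorted(event_root))  # ['level', 'message', 'msg']
```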



 #############################################

A good reference document:

https://www.cnblogs.com/cjsblog/p/9517060.html

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  # Paths of the files to harvest
  paths:
    - /data/logs/oh-coupon/info.log
    - /data/logs/oh-coupon/error.log
  # Add extra fields
  fields:
    log_source: oh-coupon
  fields_under_root: true
  # Multiline handling
  # Merge lines that do not start with a "yyyy-MM-dd" date into the previous line
  multiline.pattern: ^\d{4}-\d{1,2}-\d{1,2}
  multiline.negate: true
  multiline.match: after

  # Scan every 5 seconds for file updates
  scan_frequency: 5s
  # Close the file handle if the file has not been updated for 1 hour
  close_inactive: 1h  
  # Ignore files last modified more than 24 hours ago
  #ignore_older: 24h


- type: log
  enabled: true
  paths:
    - /data/logs/oh-promotion/info.log
    - /data/logs/oh-promotion/error.log
  fields:
    log_source: oh-promotion
  fields_under_root: true
  multiline.pattern: ^\d{4}-\d{1,2}-\d{1,2}
  multiline.negate: true
  multiline.match: after
  scan_frequency: 5s
  close_inactive: 1h  
  ignore_older: 24h

#================================ Outputs =====================================

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
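The multiline settings used in both inputs above (pattern + negate: true + match: after) mean that any line NOT starting with a yyyy-MM-dd date is appended to the previous event, which keeps, for example, Java stack traces attached to their log line. A rough re-implementation of that grouping logic (my sketch, not Filebeat's code):

```python
# Rough sketch of multiline grouping with negate: true, match: after:
# a line that does NOT match the date pattern is appended to the
# previous event instead of starting a new one.

import re

pattern = re.compile(r"^\d{4}-\d{1,2}-\d{1,2}")

lines = [
    "2018-08-23 10:00:01 ERROR boom",
    "java.lang.NullPointerException",
    "    at com.example.Foo.bar(Foo.java:42)",
    "2018-08-23 10:00:02 INFO ok",
]

events = []
current = []
for line in lines:
    if pattern.match(line) and current:
        events.append("\n".join(current))  # a dated line starts a new event
        current = []
    current.append(line)
if current:
    events.append("\n".join(current))

print(len(events))  # 2: the stack trace stays inside the first event
```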

 

