一. Introduction to the ELK Distributed Logging Setup
This walkthrough is based on ELK 5.5.2; the components are installed, deployed, and configured in the distributed layout shown below.
To monitor application logs with ELK, the Filebeat shipper is installed on every server that hosts an application. Filebeat tails the configured local log files and publishes the log messages to a Kafka cluster.
A dedicated log-processing server runs Logstash, which consumes the log messages from Kafka and pushes the log data into Elasticsearch; Kibana then visualizes the data.
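In short, the deployment boils down to this pipeline:

Filebeat (on each app server) → Kafka cluster → Logstash → Elasticsearch → Kibana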
二. Elasticsearch Configuration
1. For Elasticsearch and Kibana installation and configuration, see my earlier post:
https://www.cnblogs.com/woodylau/p/9474848.html
2. Before Logstash can write its indices, enable automatic index creation (with Elasticsearch and Kibana from step 1 running, open Kibana's Dev Tools menu and execute the snippet below):
PUT /_cluster/settings
{
  "persistent": {
    "action": {
      "auto_create_index": "true"
    }
  }
}
三. Filebeat Installation and Configuration
1. Download Filebeat 5.5.2:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.2-linux-x86_64.tar.gz
2. Extract filebeat-5.5.2-linux-x86_64.tar.gz into /tools/elk/:
tar -zxvf filebeat-5.5.2-linux-x86_64.tar.gz -C /tools/elk/
cd /tools/elk/
mv filebeat-5.5.2-linux-x86_64 filebeat-5.5.2
3. Open filebeat.yml for editing:
cd /tools/elk/filebeat-5.5.2
vi filebeat.yml
4. Replace the existing contents of filebeat.yml with the following:
filebeat.prospectors:
- input_type: log
  paths:
    # application info log
    - /data/applog/app.info.log
  encoding: utf-8
  document_type: app-info
  # extra field, used by logstash to build per-type index names
  fields:
    type: app-info
  # must be true so logstash can read the extra field at the top level
  fields_under_root: true
  scan_frequency: 10s
  harvester_buffer_size: 16384
  max_bytes: 10485760
  tail_files: true

- input_type: log
  paths:
    # application error log
    - /data/applog/app.error.log
  encoding: utf-8
  document_type: app-error
  fields:
    type: app-error
  fields_under_root: true
  scan_frequency: 10s
  harvester_buffer_size: 16384
  max_bytes: 10485760
  tail_files: true

# filebeat ships the collected log data to the kafka cluster
output.kafka:
  enabled: true
  hosts: ["192.168.20.21:9092","192.168.20.22:9092","192.168.20.23:9092"]
  topic: elk-%{[type]}
  worker: 2
  max_retries: 3
  bulk_max_size: 2048
  timeout: 30s
  broker_timeout: 10s
  channel_buffer_size: 256
  keep_alive: 60
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
  client_id: beats
  partition.hash:
    reachable_only: true

logging.to_files: true
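Before starting the shipper, you can ask Filebeat to validate this file; -configtest is the config-validation switch in the 5.x series:

cd /tools/elk/filebeat-5.5.2
./filebeat -configtest -c ./filebeat.yml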
5. Start Filebeat:
cd /tools/elk/filebeat-5.5.2
# start filebeat in the background, pointing at the config edited in step 3
nohup ./filebeat -c ./filebeat.yml &
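To verify end to end that log lines reach Kafka, append a test line to a watched file and watch the topic with Kafka's console consumer. A sketch; the /tools/kafka path and the sample log line (including its traceId field, which the Logstash filter below relies on) are assumptions to adapt to your environment:

echo "2018-08-15 12:00:00 INFO traceId=abc123 order created" >> /data/applog/app.info.log
cd /tools/kafka/bin
./kafka-console-consumer.sh --bootstrap-server 192.168.20.21:9092 --topic elk-app-info --from-beginning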
四. Logstash Installation and Configuration
1. Download Logstash 5.5.2:
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.5.2.tar.gz
2. Extract logstash-5.5.2.tar.gz into /tools/elk/ (the tarball already unpacks to a logstash-5.5.2 directory):
tar -zxvf logstash-5.5.2.tar.gz -C /tools/elk/
cd /tools/elk/
3. Install the x-pack monitoring plugin (optional; however, if Elasticsearch has x-pack installed, Logstash must install it too):
cd /tools/elk/logstash-5.5.2/bin
./logstash-plugin install x-pack
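To verify the installation, list the installed plugins:

./logstash-plugin list | grep x-pack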
4. Create the logstash_kafka.conf file:
cd /tools/elk/logstash-5.5.2/config
vi logstash_kafka.conf
5. Configure logstash_kafka.conf as follows:
input {
kafka {
codec => "json"
topics_pattern => "elk-.*"
bootstrap_servers => "192.168.20.21:9092,192.168.20.22:9092,192.168.20.23:9092"
auto_offset_reset => "latest"
group_id => "logstash-g1"
}
}
filter {
  # non-business lines carry no real traceId; drop messages where traceId is null
if ([message] =~ "traceId=null") {
drop {}
}
}
output {
elasticsearch {
    # logstash pushes the output to elasticsearch
hosts => ["192.168.20.21:9200","192.168.20.22:9200","192.168.20.23:9200"]
    # type comes from the extra field set in filebeat
index => "logstash-%{type}-%{+YYYY.MM.dd}"
document_type => "%{type}"
flush_size => 20000
idle_flush_time => 10
sniffing => true
template_overwrite => false
    # if elasticsearch has the x-pack plugin installed, a username and password are required
user => "elastic"
password => "elastic"
}
}
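Before launching, it is worth validating the pipeline syntax; Logstash 5.x can parse the config and exit without starting:

cd /tools/elk/logstash-5.5.2
bin/logstash -f config/logstash_kafka.conf --config.test_and_exit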
6. Start Logstash:
cd /tools/elk/logstash-5.5.2
# start logstash in the background
nohup /tools/elk/logstash-5.5.2/bin/logstash -f /tools/elk/logstash-5.5.2/config/logstash_kafka.conf &
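Once Logstash is consuming, the daily indices defined in the output section should appear in Elasticsearch. A quick check, using the credentials configured above:

curl -u elastic:elastic 'http://192.168.20.21:9200/_cat/indices/logstash-*?v'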
五. Verifying the Completed ELK Setup
1. Open Kibana at http://localhost:5601, go to the Discover menu, and configure the index pattern: enter logstash-* and click the Create button. You can then browse the application logs collected by Logstash.
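If Discover shows no documents, you can query Elasticsearch directly to rule out an index-pattern problem; a minimal search against the info-log index created above:

curl -u elastic:elastic 'http://192.168.20.21:9200/logstash-app-info-*/_search?pretty&size=1'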