ELK + Filebeat Log Analysis System
Architecture Diagram
(diagram from the original post not reproduced here)

Environment
OS: CentOS 7.4
Filebeat: 6.3.2
Logstash: 6.3.2
Elasticsearch: 6.3.2
Kibana: 6.3.2
Filebeat Installation and Configuration
Installation
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.2-x86_64.rpm
yum localinstall filebeat-6.3.2-x86_64.rpm
Configuration
This walkthrough uses nginx logs as the example.
Config file: /etc/filebeat/filebeat.yml
filebeat.inputs:                   # "filebeat.prospectors"/"input_type" are the deprecated 5.x names
- type: log                        # input type is log
  paths:                           # log paths
    - /usr/local/nginx/logs/*.access.log
  fields:                          # "document_type" was removed in 6.x; set a custom field instead
    type: ngx-access-log           # log type, matched later by the Logstash filter
  fields_under_root: true          # put "type" at the top level of the event
- type: log
  paths:
    - /usr/local/nginx/logs/*.error.log
  fields:
    type: ngx-error-log
  fields_under_root: true
output.logstash:                   # send to Logstash (other outputs, such as elasticsearch, also work)
  hosts: ["10.1.4.171:1007"]
Start
systemctl enable filebeat
systemctl start filebeat
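Filebeat ships built-in self-test subcommands, which make a quick sanity check possible before relying on the service (paths assume the default rpm layout used above):

```shell
# Verify the syntax of /etc/filebeat/filebeat.yml
filebeat test config

# Confirm Filebeat can reach the Logstash endpoint defined under output.logstash
filebeat test output
```

If the output test fails, check that the Logstash beats input (configured below on port 1007) is running and reachable from this host.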
Logstash Installation and Configuration
Installation
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.2.rpm
yum localinstall logstash-6.3.2.rpm
配置
Logstash需要自定義,自定義配置文件目錄是/etc/logstash/conf.d
這里新建一個filebeat.conf配置文件
/etc/logstash/conf.d/filebeat.conf
input {
  # receive events from Filebeat over the beats protocol
  beats {
    port => "1007"    # listen on port 1007 (any free port you choose)
  }
}
filter {
  if [type] == "ngx-access-log" {    # only process events of type ngx-access-log; the type is set in the Filebeat config
    grok {
      patterns_dir => "/usr/local/logstash/patterns"
      match => {    # split the incoming message field into multiple readable fields
        message => "%{IPV4:remote_addr}\|%{IPV4:FormaxRealIP}\|%{POSINT:server_port}\|%{GREEDYDATA:scheme}\|%{IPORHOST:http_host}\|%{HTTPDATE:time_local}\|%{HTTPMETHOD:request_method}\|%{URIPATHPARAM:request_uri}\|%{GREEDYDATA:server_protocol}\|%{NUMBER:status}\|%{NUMBER:body_bytes_sent}\|%{GREEDYDATA:http_referer}\|%{GREEDYDATA:user_agent}\|%{GREEDYDATA:http_x_forwarded_for}\|%{HOSTPORT:upstream_addr}\|%{BASE16FLOAT:upstream_response_time}\|%{BASE16FLOAT:request_time}\|%{GREEDYDATA:cookie_formax_preview}"
      }
      remove_field => ["message"]    # message has been split into fields, so it can be dropped
    }
    date {
      match => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]    # use the time from the nginx log as @timestamp
      remove_field => ["time_local"]    # drop the original nginx time field
    }
    mutate {
      rename => ["http_host", "host"]    # replace the host field with http_host from the nginx log
    }
  }
}
output {
  elasticsearch {    # write to elasticsearch
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"    # index name pattern
  }
}
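The grok pattern above expects a pipe-delimited access log rather than nginx's default combined format, so the nginx side needs a matching log_format. The sketch below is reconstructed from the grok field names, not taken from the original article; in particular, the variables behind FormaxRealIP and cookie_formax_preview look site-specific, and $http_x_real_ip is an assumption:

```nginx
# Hypothetical log_format matching the grok pattern, field for field
log_format elk '$remote_addr|$http_x_real_ip|$server_port|$scheme|$http_host|'
               '$time_local|$request_method|$request_uri|$server_protocol|'
               '$status|$body_bytes_sent|$http_referer|$http_user_agent|'
               '$http_x_forwarded_for|$upstream_addr|$upstream_response_time|'
               '$request_time|$cookie_formax_preview';

access_log /usr/local/nginx/logs/www.access.log elk;
```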
Start
systemctl enable logstash
systemctl start logstash
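Logstash can validate a pipeline file without starting it, which is worth doing before every restart; the binary path below is the rpm default:

```shell
# Parse /etc/logstash/conf.d/filebeat.conf and exit; prints "Configuration OK" on success
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat.conf --config.test_and_exit
```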
Elasticsearch Installation and Configuration
Installation
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.2.rpm
yum localinstall elasticsearch-6.3.2.rpm
Configuration
/etc/elasticsearch/elasticsearch.yml
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
# the following settings are required by elasticsearch-head
http.cors.enabled: true
http.cors.allow-origin: "*"
Start
systemctl enable elasticsearch
systemctl start elasticsearch
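Once the service is up, two curl calls confirm that the node answers and the cluster is healthy:

```shell
# Node info: should return a JSON document with "version": {"number": "6.3.2", ...}
curl http://127.0.0.1:9200/

# Cluster health: "status" should be green (or yellow on a single-node cluster)
curl http://127.0.0.1:9200/_cluster/health?pretty
```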
elasticsearch-head Installation
elasticsearch-head connects to elasticsearch and provides a web-based management front end.
git clone https://github.com/mobz/elasticsearch-head.git  # GitHub no longer serves the unauthenticated git:// protocol
cd elasticsearch-head
npm install
npm run start
Then open http://localhost:9100/ in a browser.
Kibana Installation and Configuration
Installation
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.3.2-x86_64.rpm
yum localinstall kibana-6.3.2-x86_64.rpm
Configuration
The default configuration is fine.
Start
nohup /usr/share/kibana/bin/kibana &> /usr/share/kibana/logs/kibana.stdout &
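Kibana listens on port 5601 by default, and its status API gives a quick health check (note that the rpm also installs a kibana systemd unit, so systemctl start kibana is an alternative to the nohup invocation):

```shell
# Returns a JSON document; overall state should report green once Kibana
# has connected to elasticsearch
curl -s http://127.0.0.1:5601/api/status
```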
Proxying Kibana Behind nginx
Install nginx
yum install nginx
Configuration
/etc/nginx/conf.d/kibana.conf
server {
    listen 80;
    server_name test.kibana.com;
    root html;
    access_log /var/log/nginx/test.kibana.com.access.log main;
    error_log /var/log/nginx/test.kibana.com.error.log;

    proxy_next_upstream http_502 http_504 error timeout invalid_header;
    proxy_connect_timeout 10;
    proxy_read_timeout 30;
    proxy_send_timeout 180;
    proxy_ignore_client_abort on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_buffering off;
    proxy_set_header Host $host;

    location /monitor {
        default_type text/plain;
        return 200 "OK";
    }

    location /echoip {
        default_type text/plain;
        return 200 $http_x_forwarded_for,$remote_addr;
    }

    location / {
        expires off;
        if ($server_port = "80") {
            proxy_pass http://127.0.0.1:5601;
        }
        proxy_pass https://127.0.0.1:5601;
    }
}
Start
systemctl enable nginx
systemctl start nginx
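A quick way to verify the proxy, using the hostname from the config above and run on the nginx host itself:

```shell
# Validate the nginx configuration before or after a reload
nginx -t

# The /monitor location answers without touching Kibana; expect the body "OK"
curl -H 'Host: test.kibana.com' http://127.0.0.1/monitor

# The root location should proxy through to the Kibana UI
curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: test.kibana.com' http://127.0.0.1/
```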
Postscript
This article is only a brief introduction to installing and configuring an ELK + Filebeat log analysis system, together with a simple nginx log processing pipeline. For a more thorough study of the ELK stack, see ELKstack 中文指南 (the ELK Stack Guide in Chinese). Although that book is written against ELK 5, it is still useful for ELK 6.
