I. Filebeat installation, configuration, and testing
1. Install Filebeat
# yum install filebeat-6.6.1-x86_64.rpm
2. Configure Filebeat to collect system logs and write them to a file (/etc/filebeat/filebeat.yml)
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
    - /var/log/messages
  exclude_lines: ["^DBG","^$"]
  document_type: system-log-5612
output.file:
  path: "/tmp"
  filename: "filebeat.txt"
3. Start the filebeat service
systemctl start filebeat
4. Append test data to the system log (/var/log/messages), then check whether filebeat.txt has collected it.
5. Configure Filebeat to collect system logs and ship them to Redis (/etc/filebeat/filebeat.yml)
# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
    - /var/log/messages
  exclude_lines: ["^DBG","^$"]
  document_type: system-log-5612
output.redis:
  hosts: "192.168.56.12"
  db: "3"
  port: "6379"
  password: "123456"
  key: "system-log-5612"

# systemctl restart filebeat
# Append test data to /var/log/messages
# Verify in Redis that the data arrived
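To confirm the events actually reached Redis, the list can be inspected with redis-cli; a sketch using the host, db, and password values from the config above:

```shell
# Count queued events in db 3, then peek at one entry of the list.
redis-cli -h 192.168.56.12 -p 6379 -a 123456 -n 3 LLEN system-log-5612
redis-cli -h 192.168.56.12 -p 6379 -a 123456 -n 3 LRANGE system-log-5612 0 0
```

A non-zero LLEN and a JSON event body from LRANGE confirm Filebeat is pushing into the expected key.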
6. Ship the system logs stored in Redis to Elasticsearch
# cat redis-elasticsearch.conf
input {
  redis {
    data_type => "list"
    host => "192.168.56.12"
    db => "3"
    port => "6379"
    password => "123456"
    key => "system-log-5612"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.56.11:9200"]
    index => "system-log-5612-%{+YYYY.MM.dd}"
  }
}

# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-elasticsearch.conf -t
# systemctl restart logstash
7. Test
# echo "aaaaaaaaaaaa" >> /var/log/messages
# echo "bbbbbbbbbbbb" >> /var/log/messages
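With the two test lines appended, the pipeline can be checked end-to-end on the Elasticsearch side; a sketch using the host and index pattern from the Logstash config above:

```shell
# Confirm the day's index was created...
curl -s 'http://192.168.56.11:9200/_cat/indices?v' | grep system-log-5612
# ...and search for one of the test strings (Filebeat stores the log line
# in the "message" field).
curl -s 'http://192.168.56.11:9200/system-log-5612-*/_search?q=message:aaaaaaaaaaaa'
```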
II. Filebeat lab configuration
Environment:

Server                | IP address        | Applications
web server            | 192.168.56.100    | nginx, filebeat
redis server          | 192.168.56.12     | redis
logstash server       | 192.168.56.11     | logstash
elasticsearch cluster | 192.168.56.15/16  | java, elasticsearch
kibana server         | 192.168.56.12     | kibana, nginx reverse proxy with basic auth
1. Filebeat configuration: collect nginx access logs and ship them to the Redis server
# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.56.12"]
  port: 6379
  key: "nginx-log"
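To have something to ship, a few requests can be generated against the web server first; a sketch, assuming nginx answers on port 80 of 192.168.56.100:

```shell
# Produce five access-log entries, then show the newest one.
for i in $(seq 1 5); do curl -s -o /dev/null http://192.168.56.100/; done
tail -n 1 /var/log/nginx/access.log
```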
2. Logstash server configuration: read data from Redis and output it to Elasticsearch
# cat /etc/logstash/conf.d/redis-es-logstash-nginx.conf
input {
  redis {
    data_type => "list"
    host => "192.168.56.12"
    db => "0"
    port => "6379"
    key => "nginx-log"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.56.15:9200"]
    index => "nginx-log-%{+YYYY.MM.dd}"
  }
}
3. Kibana configuration file
# grep -Evi "^#|^$" /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.56.12"
elasticsearch.hosts: ["http://192.168.56.15:9200","http://192.168.56.16:9200"]
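After restarting Kibana, its health can be checked over HTTP before wiring up the proxy; a sketch using the server.host and server.port values above:

```shell
# A 200 status code indicates Kibana is up and serving its status API.
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.56.12:5601/api/status
```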
4. Nginx reverse-proxy configuration for Kibana
# cat /etc/nginx/nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format access_log_json '{"user_ip":"$http_x_forwarded_for","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_rqp":"$request","http_code":"$status","body_bytes_sent":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}';
    sendfile        on;
    keepalive_timeout  65;
    include conf.d/*.conf;
}

# cat /etc/nginx/conf.d/http-www.conf
server {
    listen       81;
    server_name  localhost;
    auth_basic "User Authentication";
    auth_basic_user_file /etc/nginx/conf.d/kibana.passwd;
    access_log /var/log/nginx/http-access.log access_log_json;
    location / {
        proxy_set_header Host $host;
        proxy_set_header x-for $remote_addr;
        proxy_set_header x-server $host;
        proxy_set_header x-agent $http_user_agent;
        proxy_pass http://kibana;
    }
}

# cat /etc/nginx/conf.d/upstream.conf
upstream kibana {
    server 192.168.56.12:5601;
}

# cat /etc/nginx/conf.d/kibana.passwd
admin:$apr1$21NJ.Fx/$gmT0bwS4GoW1gmsHDRq911
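The kibana.passwd entry is an APR1 (Apache MD5) hash; if htpasswd is not installed, one can be generated with openssl. A sketch (the password `secret123` is a placeholder, not the lab's actual credential):

```shell
# Emit a line in the admin:$apr1$... format expected by auth_basic_user_file;
# redirect the output into /etc/nginx/conf.d/kibana.passwd to install it.
echo "admin:$(openssl passwd -apr1 secret123)"
```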
III. Collecting multiple log files with Filebeat
# 1. Filebeat collects nginx access logs and system logs, and ships them to Redis.
# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: ["nginx-log-56-100"]
- type: log
  enabled: true
  paths:
    - /var/log/messages
  tags: ["system-messages-log-56-100"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.56.12"]
  port: 6379
  timeout: 5
  key: "default_list"

# 2. The Logstash server reads the data from Redis and routes each tag to its own Elasticsearch index.
# cat redis-es-logstash-nginx.conf
input {
  redis {
    data_type => "list"
    host => "192.168.56.12"
    db => "0"
    port => "6379"
    key => "default_list"
  }
}
output {
  if "nginx-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "nginx-log-56100-%{+YYYY.MM.dd}"
    }
  }
  if "system-messages-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "system-messages-log-56100-%{+YYYY.MM.dd}"
    }
  }
}
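As with the single-pipeline setup, the routing config can be syntax-checked before restarting Logstash, and both per-tag indices confirmed afterwards; a sketch using this lab's paths and hosts:

```shell
# Validate the config, restart, then look for both daily indices.
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-es-logstash-nginx.conf -t
systemctl restart logstash
curl -s 'http://192.168.56.15:9200/_cat/indices?v' \
  | grep -E 'nginx-log-56100|system-messages-log-56100'
```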
IV. Collecting multiple log files with Filebeat (syslog, nginx, and multi-line Java merging)
# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: ["nginx-log-56-100"]
- type: log
  enabled: true
  paths:
    - /var/log/messages
  tags: ["system-messages-log-56-100"]
- type: log
  enabled: true
  paths:
    - /data/tomcat/logs/catalina.out
  tags: ["tomcat-catalina-log-56-100"]
  multiline:
    pattern: '^\['
    negate: true
    match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.56.12"]
  port: 6379
  timeout: 5
  key: "default_list"

# cat redis-es-logstash-nginx-system-tomcat.conf
input {
  redis {
    data_type => "list"
    host => "192.168.56.12"
    db => "0"
    port => "6379"
    key => "default_list"
  }
}
output {
  if "nginx-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "nginx-log-56100-%{+YYYY.MM.dd}"
    }
  }
  if "system-messages-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "system-messages-log-56100-%{+YYYY.MM.dd}"
    }
  }
  if "tomcat-catalina-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "tomcat-catalina-log-56100-%{+YYYY.MM.dd}"
    }
  }
}
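The multiline settings merge any line that does not start with `[` into the preceding event (`negate: true`, `match: after`), so a Java stack trace stays attached to its log header instead of becoming separate events. A quick local check of which lines would start a new event (the sample log lines are made up for illustration):

```shell
# Only the bracketed timestamp lines match '^\[' and begin a new event;
# the exception line and "at ..." frame get appended to the previous one.
printf '%s\n' \
  '[2024-05-01 10:00:00] SEVERE: request failed' \
  'java.lang.NullPointerException' \
  '        at com.example.Foo.bar(Foo.java:42)' \
  '[2024-05-01 10:00:01] INFO: recovered' \
  | grep -c '^\['
# prints 2, i.e. the four lines collapse into two events
```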