ELK Log Analysis System
In day-to-day operations we run into many problems that are usually diagnosed from experience plus the application's own logs or the system logs. With one or a handful of servers, we can use Linux commands such as tail and cat, piped through grep, awk and similar filters, to locate the relevant log lines. With dozens or hundreds of servers, that approach becomes far too tedious, inefficient and ultimately impractical, which is why a centralized logging approach was developed.
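For example, on a single host the kind of ad-hoc search described above might look like this (the log path and search term are illustrative):
# follow a log file and keep only error lines
tail -f /var/log/messages | grep -i error
# count occurrences of each client IP in an access log
awk '{count[$1]++} END {for (ip in count) print ip, count[ip]}' /var/log/nginx/access.log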
A complete centralized logging system needs to cover the following main capabilities:
- Collection: gather log data from many different sources
- Transport: reliably ship the log data to a central system
- Storage: store the log data
- Analysis: support analysis through a UI
- Alerting: provide error reporting and monitoring mechanisms
ELK provides a complete solution for all of the above. Its components are all open source, work together seamlessly, and efficiently cover a wide range of scenarios, which has made it one of today's mainstream logging stacks.
ELK Overview
ELK is an acronym for three open-source projects: Elasticsearch, Logstash and Kibana. A newer addition is Beats, a lightweight log-collection agent: it uses very few resources and is well suited to gathering logs on each server and forwarding them to Logstash, so it is the officially recommended shipper. With Beats joining the original ELK Stack, the suite has been renamed the Elastic Stack.
Elasticsearch is an open-source distributed search engine that provides collection, analysis and storage of data. Its key features include distributed operation, near-zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources and automatic search load balancing.
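As a quick illustration of the RESTful interface, a document can be indexed and searched with plain curl. This is only a sketch, assuming an Elasticsearch node reachable at 192.168.200.11:9200 as deployed later in this guide; the index name demo-index and the document body are arbitrary:
# index a document (Elasticsearch 6.x requires an explicit Content-Type)
curl -XPUT 'http://192.168.200.11:9200/demo-index/doc/1?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"message": "hello elk", "level": "info"}'
# search for it with a full-text query
curl -XGET 'http://192.168.200.11:9200/demo-index/_search?q=message:hello&pretty'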
Logstash is primarily a tool for collecting, parsing and filtering logs, and it supports a large number of input methods. It is typically deployed in a client/server fashion: an agent runs on each host whose logs need to be collected, while the server side filters and transforms the events received from the various nodes before forwarding them to Elasticsearch.
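A minimal way to see a Logstash pipeline in action is to run one from the command line with stdin as the input and stdout as the output; a quick sketch (the binary path matches the RPM layout used later in this guide):
# read events typed into the terminal and print them back as structured output
/usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'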
Kibana is also open source and free. It provides a friendly web UI for analyzing the logs that Logstash and Elasticsearch handle, helping you aggregate, analyze and search important log data.
Beats here refers to a lightweight log shipper; the Beats family actually has six members. Early ELK architectures used Logstash both to collect and to parse logs, but Logstash is relatively heavy on memory, CPU and I/O. Compared with Logstash, the CPU and memory footprint of Beats is almost negligible.
How ELK Works
Logstash (or a Beats agent) collects and parses logs on each host and ships them to Elasticsearch, which indexes and stores the events; Kibana then queries Elasticsearch to search, analyze and visualize the data.
ELK Deployment
1. Base environment
One master node and two data nodes; install JDK 8 (OpenJDK is sufficient) on all three machines:
yum install -y java-1.8.0-openjdk
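A quick check that the JDK is in place (the exact version string will vary):
[root@elk-1 ~]# java -version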
| VM IP | Deployed components | Hostname |
|---|---|---|
| 192.168.200.11 | elasticsearch + kibana | elk-1 |
| 192.168.200.12 | elasticsearch + logstash | elk-2 |
| 192.168.200.13 | elasticsearch | elk-3 |
2. Configure the hosts file on the three machines
The file is identical on all three hosts:
[root@elk-1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.11 elk-1
192.168.200.12 elk-2
192.168.200.13 elk-3
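Optionally, verify from each node that the hostnames resolve, for example:
[root@elk-1 ~]# ping -c 2 elk-2
[root@elk-1 ~]# ping -c 2 elk-3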
3. Deploy Elasticsearch on the three VMs
Upload the Elasticsearch RPM to /root and install it:
[root@elk-1 ~]# rpm -ivh elasticsearch-6.0.0.rpm //install on the other two nodes as well
warning: elasticsearch-6.0.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
1:elasticsearch-0:6.0.0-1 ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Edit the Elasticsearch configuration file on all three nodes:
[root@elk-1 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep -v ^#
cluster.name: ELK
node.name: elk-1
node.master: true
node.data: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.200.11
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-1", "elk-2","elk-3"]
[root@elk-2 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^#
cluster.name: ELK
node.name: elk-2
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.200.12
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-1", "elk-2","elk-3"]
[root@elk-3 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^#
cluster.name: ELK
node.name: elk-3
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.200.13
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-1", "elk-2","elk-3"]
Start Elasticsearch and check that it is running (the commands are the same on all three nodes; if ports 9200 and 9300 are listening, the startup succeeded):
[root@elk-1 ~]# systemctl restart elasticsearch.service
[root@elk-1 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 911/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1002/master
tcp6 0 0 192.168.200.11:9200 :::* LISTEN 12367/java
tcp6 0 0 192.168.200.11:9300 :::* LISTEN 12367/java
tcp6 0 0 :::22 :::* LISTEN 911/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1002/master
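Port 9200 serves the HTTP REST API, while port 9300 is the transport port used for node-to-node communication inside the cluster. A quick way to confirm the API is answering is to query the node root, for example:
[root@elk-1 ~]# curl '192.168.200.11:9200'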
Check the cluster status:
[root@elk-1 ~]# curl '192.168.200.11:9200/_cluster/health?pretty'
{
"cluster_name" : "ELK",
"status" : "green", //為green則代表健康沒問題,yellow或者red 則是集群有問題
"timed_out" : false, //是否有超時
"number_of_nodes" : 3, //集群中的節點數量
"number_of_data_nodes" : 2, //集群中data節點的數量
"active_primary_shards" : 1,
"active_shards" : 2,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
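To see which node was elected master and which nodes hold data, the _cat API can also be queried, for example:
[root@elk-1 ~]# curl '192.168.200.11:9200/_cat/nodes?v'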
4. Deploy Kibana
Deploy Kibana on the master node elk-1 (192.168.200.11). Upload the Kibana RPM to /root and install it:
[root@elk-1 ~]# rpm -ivh kibana-6.0.0-x86_64.rpm
warning: kibana-6.0.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:kibana-6.0.0-1 ################################# [100%]
Edit the Kibana configuration file:
[root@elk-1 ~]# cat /etc/kibana/kibana.yml | grep -v ^# | grep -v ^$
server.port: 5601
server.host: "192.168.200.11"
elasticsearch.url: "http://192.168.200.11:9200"
Start Kibana and check the process and listening port:
[root@elk-1 ~]# systemctl restart kibana.service
[root@elk-1 ~]# ps -ef |grep kibana
kibana 12580 1 20 10:25 ? 00:00:02 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root 12597 2263 0 10:25 pts/1 00:00:00 grep --color=auto kibana
[root@elk-1 ~]# netstat -ntlp | grep node
tcp 0 0 192.168.200.11:5601 0.0.0.0:* LISTEN 12580/node
Open http://192.168.200.11:5601 in a browser to reach the Kibana web UI.
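If no browser is at hand, a command-line request also confirms Kibana is up (it normally answers with a redirect to its app path):
[root@elk-1 ~]# curl -I 'http://192.168.200.11:5601'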
5. Deploy Logstash
On elk-2, upload the Logstash RPM to /root and install it:
[root@elk-2 ~]# rpm -ivh logstash-6.0.0.rpm
warning: logstash-6.0.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:logstash-1:6.0.0-1 ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash
Edit the Logstash configuration file:
[root@elk-2 ~]# vi /etc/logstash/logstash.yml
//modify line 190
http.host: "192.168.200.12"
Configure Logstash to collect syslog logs:
[root@elk-2 ~]# vim /etc/logstash/conf.d/syslog.conf
input {
file {
path => "/var/log/messages" //需要為這個目錄修改644權限
type => "systemlog"
start_position => "beginning"
stat_interval => "3"
}
}
output {
if [type] == "systemlog" {
elasticsearch {
hosts => ["192.168.200.11:9200"]
index => "system-log-%{+YYYY.MM.dd}"
}
}
}
[root@elk-2 ~]# chmod 644 -R /var/log/messages
[root@elk-2 ~]# chown -R logstash /var/lib/logstash/
Check the configuration file for errors:
[root@elk-2 ~]# ln -s /usr/share/logstash/bin/logstash /usr/bin
[root@elk-2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
Start Logstash and check the listening port:
[root@elk-2 ~]# systemctl restart logstash.service
[root@elk-2 ~]# netstat -ntlp | grep 9600
tcp6 0 0 192.168.200.12:9600 :::* LISTEN 12890/java
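If port 9600 does not appear, Logstash's own log usually explains why; with the default log4j2 settings mentioned above it is written under /var/log/logstash, for example:
[root@elk-2 ~]# tail -f /var/log/logstash/logstash-plain.log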
6. View the logs in Kibana
Log in to elk-2 from elk-3 over SSH so that new entries are written to elk-2's system log, then confirm on elk-1 that the index has been created (see the example below).
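For example (the logger line is just an optional way to write an extra test entry into /var/log/messages):
[root@elk-3 ~]# ssh root@192.168.200.12
[root@elk-2 ~]# logger "test message for ELK"
[root@elk-2 ~]# exit
Back on elk-1, list the indices to confirm the system-log index now exists: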
[root@elk-1 ~]# curl '192.168.200.11:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana eL1dAxyBTr6TTGptG0z29g 1 1 1 0 7.3kb 3.6kb
green open system-log-2021.11.01 _0u6J8_TQK2xrYsUZqr_Qw 5 1 10330 0 4.5mb 2.3mb
[root@elk-1 ~]# curl -XGET '192.168.200.11:9200/system-log-2021.11.01?pretty' //use -XDELETE instead of -XGET to delete the index
{
"system-log-2021.11.01" : {
"aliases" : { },
"mappings" : {
"systemlog" : {
"properties" : {
"@timestamp" : {
"type" : "date"
},
"@version" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"host" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"message" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"path" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"type" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
},
"settings" : {
"index" : {
"creation_date" : "1635734784607",
"number_of_shards" : "5",
"number_of_replicas" : "1",
"uuid" : "_0u6J8_TQK2xrYsUZqr_Qw",
"version" : {
"created" : "6000099"
},
"provided_name" : "system-log-2021.11.01"
}
}
}
}
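Individual documents can also be queried directly before an index pattern is created in Kibana; a quick sketch (the query term is illustrative):
[root@elk-1 ~]# curl '192.168.200.11:9200/system-log-2021.11.01/_search?q=type:systemlog&size=1&pretty'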
7. Collect Nginx logs with Logstash
Install Nginx on elk-2:
[root@elk-2 ~]# rpm -ivh nginx-1.16.1-1.el7.ngx.x86_64.rpm
warning: nginx-1.16.1-1.el7.ngx.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 7bd9bf62: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:nginx-1:1.16.1-1.el7.ngx ################################# [100%]
Configure Logstash and check that the file is correct:
[root@elk-2 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
file {
path => "/tmp/elk_access.log"
start_position => "beginning"
type => "nginx"
}
}
filter {
grok {
match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - % {USERNAME:remote_user} \[%{HTTPDATE:timest
amp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMB
ER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
}
geoip {
source => "clientip"
}
}
output {
stdout { codec => rubydebug }
elasticsearch {
hosts => ["192.168.200.11:9200"]
index => "nginx-test-%{+YYYY.MM.dd}"
}
}
[root@elk-2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
Create the Nginx virtual host that proxies Kibana and writes its access log to /tmp/elk_access.log:
[root@elk-2 ~]# vi /etc/nginx/conf.d/elk.conf
server {
listen 80;
server_name elk.com;
location / {
proxy_pass http://192.168.200.11:5601;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
access_log /tmp/elk_access.log main2;
}
Edit the main Nginx configuration file to define the main2 log format:
[root@elk-2 ~]# vim /etc/nginx/nginx.conf
//add the following inside the http block
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$upstream_addr" $request_time';
[root@elk-2 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@elk-2 ~]# systemctl start nginx
[root@elk-2 ~]# systemctl restart logstash
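To generate some access-log entries for Logstash to pick up, send a few requests through the proxy using the elk.com server name, for example:
[root@elk-2 ~]# curl -s -H 'Host: elk.com' http://192.168.200.12/ -o /dev/null
[root@elk-2 ~]# tail -n 1 /tmp/elk_access.log
The nginx-test index should then appear in the index listing on elk-1, as shown in the output at the end of step 8.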
8. Collect logs with Beats
Install Filebeat on elk-3:
[root@elk-3 ~]# rpm -ivh filebeat-6.0.0-x86_64.rpm
warning: filebeat-6.0.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:filebeat-6.0.0-1 ################################# [100%]
Edit the Filebeat configuration file:
[root@elk-3 ~]# vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
#enabled: false //comment out this parameter
paths:
- /var/log/elasticsearch/ELK.log //change this to whichever log file you want to monitor
output.elasticsearch:
hosts: ["192.168.40.11:9200"]
[root@elk-3 ~]# systemctl start filebeat
On elk-1, list the indices again to confirm that the filebeat index has been created:
[root@elk-1 ~]# curl '192.168.200.11:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open nginx-test-2021.11.01 aZqM9gJ0RgqwZCw8Zfmr9w 5 1 10544 0 3.1mb 1.6mb
green open .kibana eL1dAxyBTr6TTGptG0z29g 1 1 2 0 14.1kb 7kb
green open system-log-2021.11.01 _0u6J8_TQK2xrYsUZqr_Qw 5 1 20877 0 9.3mb 4.9mb
green open filebeat-6.0.0-2021.11.01 u0ZqpY7iTzC5B4nL2bRlFQ 3 1 59 0 80.2kb 31.4kb