I. ELK Overview
ELK is an acronym for three products from the company Elastic: Elasticsearch, Logstash, and Kibana. It is mainly used for log collection, analysis, and visualization.
The ELK Stack consists of Elasticsearch, Logstash, and Kibana. (From version 5.0 onward, Elastic Stack = ELK Stack + Beats.)
Elasticsearch is a search engine used to search, analyze, and store logs. It is distributed, meaning it can scale horizontally, discover nodes automatically, and shard indices automatically; in short, it is very powerful.
Logstash collects logs, parses them into JSON, and hands them to Elasticsearch.
Kibana is a data visualization component that presents the processed results through a web interface.
Beats is a family of lightweight log shippers; the Beats family has five members. (Early versions of Logstash consumed considerable resources, while the overhead of Beats is negligible.)
X-Pack is a paid extension pack that adds security, alerting, monitoring, reporting, and graph capabilities to the Elastic Stack.
Official site: https://www.elastic.co/cn/
Chinese documentation: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
II. Single-Node Architecture Diagram
III. Installing the ELK Server
1. Download elasticsearch-6.2.4.rpm, logstash-6.2.4.rpm, and kibana-6.2.4-x86_64.rpm
[root@server-1 src]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.rpm
[root@server-1 src]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.rpm
[root@server-1 src]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-x86_64.rpm
2. Install elasticsearch-6.2.4.rpm with rpm
[root@server-1 src]# rpm -ivh elasticsearch-6.2.4.rpm
warning: elasticsearch-6.2.4.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:6.2.4-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
3. Install logstash-6.2.4.rpm
[root@server-1 src]# rpm -ivh logstash-6.2.4.rpm
warning: logstash-6.2.4.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:6.2.4-1               ################################# [100%]
which: no java in (/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin)
could not find java; set JAVA_HOME or ensure java is in PATH
chmod: cannot access "/etc/default/logstash": No such file or directory
warning: %post(logstash-1:6.2.4-1.noarch) scriptlet failed, exit status 1
The error shows that a Java environment is required, so install Java:
[root@server-1 src]# yum install jdk-8u172-linux-x64.rpm
Loaded plugins: fastestmirror
Examining jdk-8u172-linux-x64.rpm: 2000:jdk1.8-1.8.0_172-fcs.x86_64
Marking jdk-8u172-linux-x64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package jdk1.8.x86_64 2000:1.8.0_172-fcs will be installed
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
 Package     Arch        Version                 Repository              Size
================================================================================
Installing:
 jdk1.8      x86_64      2000:1.8.0_172-fcs      /jdk-8u172-linux-x64   279 M

Transaction Summary
================================================================================
Install  1 Package

Total size: 279 M
Installed size: 279 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
** Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:
smbios-utils-bin-2.3.3-8.el7.x86_64 has missing requires of libsmbios = ('0', '2.3.3', '8.el7')
  Installing : 2000:jdk1.8-1.8.0_172-fcs.x86_64                           1/1
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
  Verifying  : 2000:jdk1.8-1.8.0_172-fcs.x86_64                           1/1

Installed:
  jdk1.8.x86_64 2000:1.8.0_172-fcs

Complete!
[root@server-1 src]# java -version
java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)
Install logstash-6.2.4.rpm again:
[root@server-1 src]# rpm -ivh logstash-6.2.4.rpm
warning: logstash-6.2.4.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
        package logstash-1:6.2.4-1.noarch is already installed
4. Install kibana-6.2.4-x86_64.rpm
[root@server-1 src]# rpm -ivh kibana-6.2.4-x86_64.rpm
warning: kibana-6.2.4-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:kibana-6.2.4-1                   ################################# [100%]
IV. Configuration and Starting the Services
1. Elasticsearch configuration (/etc/elasticsearch/elasticsearch.yml)
cluster.name: test-cluster                          # cluster name
node.name: node-1                                   # node name
path.data: /var/lib/elasticsearch                   # data directory
path.logs: /var/log/elasticsearch                   # log directory
network.host: 172.28.18.69                          # listen address
http.port: 9200                                     # listen port
discovery.zen.ping.unicast.hosts: ["172.28.18.69"]  # cluster host list; a single-node setup lists just the local IP
2. Start the service and check the ports
[root@server-1 old]# systemctl start elasticsearch
[root@server-1 old]# netstat -tunlp | grep java
tcp6       0      0 172.28.18.69:9200       :::*        LISTEN      5176/java
tcp6       0      0 172.28.18.69:9300       :::*        LISTEN      5176/java
3. Query the endpoint with curl
[root@server-1 old]# curl 172.28.18.69:9200
{
  "name" : "node-1",
  "cluster_name" : "test-cluster",
  "cluster_uuid" : "2oBg0RqYR2ewNeRfAN88zg",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
4. Logstash configuration
[root@server-1 old]# vim /etc/logstash/logstash.yml
path.data: /var/lib/logstash     # data directory
http.host: "172.28.18.69"        # listen address
http.port: 9600                  # listen port
path.logs: /var/log/logstash     # log directory
5. Give the logstash user write permission on its directories
[root@server-1 old]# chown -R logstash /var/log/logstash/ /var/lib/logstash/
6. Create a configuration file for collecting system logs
[root@server-1 old]# vim /etc/logstash/conf.d/syslog.conf
input {
  syslog {
    type => "system-syslog"
    port => 10000
  }
}
# output to elasticsearch
output {
  elasticsearch {
    hosts => ["172.28.18.69:9200"]        # elasticsearch address
    index => "system-syslog-%{+YYYY.MM}"  # index to create
  }
}
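The `%{+YYYY.MM}` in the index name is Logstash's date reference syntax (Joda-Time tokens), so each month's events land in a separate index. As a rough illustration — a sketch, not Logstash's actual implementation — the way an event's timestamp resolves to an index name can be mimicked in Python:

```python
from datetime import datetime

def index_for(event_time: datetime, pattern: str = "system-syslog-%{+YYYY.MM}") -> str:
    """Resolve a Logstash-style %{+...} date reference into a concrete index name.

    Only the YYYY/MM/dd tokens used in this article are handled; real Logstash
    supports the full Joda-Time format syntax.
    """
    prefix, _, rest = pattern.partition("%{+")
    tokens, _, suffix = rest.partition("}")
    # Translate the Joda-style tokens into strftime directives
    fmt = tokens.replace("YYYY", "%Y").replace("MM", "%m").replace("dd", "%d")
    return prefix + event_time.strftime(fmt) + suffix

print(index_for(datetime(2019, 7, 11)))  # system-syslog-2019.07
```

One index per month also means old logs can be expired by deleting whole indices instead of individual documents.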
7. Test the log collection configuration file
[root@server-1 old]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
[root@server-1 old]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
8. Start the logstash service and check the ports
[root@server-1 old]# systemctl start logstash
[root@server-1 old]# netstat -tunlp | grep java
tcp6       0      0 :::10000                :::*        LISTEN      6046/java
tcp6       0      0 172.28.18.69:9200       :::*        LISTEN      5176/java
tcp6       0      0 172.28.18.69:9300       :::*        LISTEN      5176/java
tcp6       0      0 172.28.18.69:9600       :::*        LISTEN      6046/java
udp        0      0 0.0.0.0:10000           0.0.0.0:*               6046/java
Port 9600 is Logstash's own listening (monitoring API) port; port 10000 (open on both TCP and UDP) is the syslog collection input.
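Nothing is sending logs to port 10000 yet. One way to feed it, assuming the client runs rsyslog (the CentOS 7 default), is to forward everything to the Logstash syslog input; this is a minimal sketch, so adjust the selector to your needs:

```
# /etc/rsyslog.conf on the client -- forward all logs to the Logstash syslog input
# A single @ means UDP, @@ means TCP; the input listens on both.
*.* @@172.28.18.69:10000
```

Restart rsyslog (systemctl restart rsyslog) for the rule to take effect.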
9. Check the index Elasticsearch created for the collected logs
[root@server-1 old]# curl http://172.28.18.69:9200/_cat/indices
yellow open system-syslog-2019.07 REp7fM_gSaquo9PX2_sREQ 5 1 10 0 58.9kb 58.9kb
[root@server-1 old]#
10. View the details of a specific index
[root@server-1 old]# curl http://172.28.18.69:9200/system-syslog-2019.07?pretty
{
  "system-syslog-2019.07" : {
    "aliases" : { },
    "mappings" : {
      "doc" : {
        "properties" : {
          "@timestamp" : { "type" : "date" },
          "@version" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "facility" : { "type" : "long" },
          "facility_label" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "host" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "message" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "priority" : { "type" : "long" },
          "severity" : { "type" : "long" },
          "severity_label" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "tags" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } },
          "type" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }
        }
      }
    },
    "settings" : {
      "index" : {
        "creation_date" : "1562809441246",
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "REp7fM_gSaquo9PX2_sREQ",
        "version" : { "created" : "6020499" },
        "provided_name" : "system-syslog-2019.07"
      }
    }
  }
}
This shows that Logstash and Elasticsearch are communicating normally.
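The `facility`, `severity`, and `priority` fields in the mapping above come straight from the syslog PRI header, where priority = facility × 8 + severity. A small Python sketch of the decoding (the label tables are abbreviated here; Logstash ships its own full tables):

```python
# Severity labels per RFC 3164; facility labels trimmed to a few common ones.
FACILITY_LABELS = {0: "kernel", 1: "user-level", 2: "mail", 3: "daemon", 4: "security/authorization"}
SEVERITY_LABELS = {0: "Emergency", 1: "Alert", 2: "Critical", 3: "Error",
                   4: "Warning", 5: "Notice", 6: "Informational", 7: "Debug"}

def decode_pri(priority: int):
    """Split a syslog PRI value into (facility, severity): priority = facility * 8 + severity."""
    return divmod(priority, 8)

facility, severity = decode_pri(30)   # e.g. <30>, a daemon's info-level message
print(facility, severity)             # 3 6
print(SEVERITY_LABELS[severity])      # Informational
```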
11. Kibana configuration
[root@server-1 old]# vim /etc/kibana/kibana.yml
server.port: 5601                              # listen port
server.host: 172.28.18.69                      # listen address
elasticsearch.url: "http://172.28.18.69:9200"  # elasticsearch address
logging.dest: /var/log/kibana/kibana.log       # log path
12. Create the log directory and give the kibana user write permission
[root@server-1 old]# mkdir /var/log/kibana/
[root@server-1 old]# chown -R kibana /var/log/kibana/
13. Start the kibana service and check the port
[root@server-1 old]# systemctl start kibana
[root@server-1 old]# netstat -tunlp | grep 5601
tcp        0      0 172.28.18.69:5601       0.0.0.0:*   LISTEN      7511/node
The port is listening.
14. Localizing Kibana into Chinese
Download the localization package:
git clone https://github.com/anbai-inc/Kibana_Hanization.git
Run the localization script:
[root@server-1 src]# cd Kibana_Hanization/old/
[root@server-1 old]# python main.py /usr/share/kibana/
The localization process is slow; be patient.
[root@server-1 old]# python main.py /usr/share/kibana/
Congratulations, the Kibana localization is complete!
[root@server-1 old]#
15. Restart the kibana service
[root@server-1 old]# systemctl restart kibana
16. Browse to http://172.28.18.69:5601
The localization succeeded.
17. Create an index pattern in Kibana
We just created the Logstash configuration file for collecting system logs; now create the matching index pattern in Kibana.
Management → Index Patterns
Enter the previously configured system-syslog-* as the index pattern; it matches every index whose name begins with system-syslog-.
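The trailing wildcard behaves like shell globbing; a quick sketch of which indices such a pattern selects (the index list here is illustrative):

```python
from fnmatch import fnmatch

# Hypothetical index list, as _cat/indices might report it
indices = ["system-syslog-2019.06", "system-syslog-2019.07",
           ".kibana", "nginx-172.28.18.75-2019.07.11"]
matched = [name for name in indices if fnmatch(name, "system-syslog-*")]
print(matched)  # ['system-syslog-2019.06', 'system-syslog-2019.07']
```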
Click Next and configure the time filter; here the @timestamp field is used as the time filter field.
Create the index pattern.
All of the collected system-log fields are listed; click Discover to configure which fields are displayed.
V. Collecting and Analyzing Nginx Logs with ELK
1. The Filebeat component
The commonly used members of the Beats family are:
filebeat: collects files and directories; typically used for log data.
heartbeat: checks connectivity between systems; can probe reachability over ICMP, TCP, HTTP, and so on.
winlogbeat: collects Windows event log data.
packetbeat: captures network packets and analyzes protocols to collect network-related data.
metricbeat: collects metrics, mainly for monitoring system and software performance (OS, middleware, etc.).
2. Install Filebeat on the Nginx host
[root@zabbix_server src]# cd /usr/local/src/
[root@zabbix_server src]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-x86_64.rpm
[root@zabbix_server src]# rpm -ivh filebeat-6.2.4-x86_64.rpm
3. Configure and start
First, configure Filebeat to collect the Nginx log and print it to the terminal.
[root@zabbix_server etc]# vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/host.access.log   # log file to collect
output.console:                        # print events to the terminal
  enable: true
# comment out the elasticsearch output:
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
Save and exit, then test the configuration with the following command:
[root@zabbix_server etc]# filebeat -c /etc/filebeat/filebeat.yml
The terminal prints a large volume of Nginx log events, which means the configuration works. Next, point the output at the Logstash service:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log        # log file to collect
  fields:
    log_topics: nginx-172.28.18.75     # tag identifying this log source
output.logstash:
  hosts: ["172.28.18.69:10001"]        # logstash address and port
Then create a new Nginx log collection configuration file on the Logstash server (under /etc/logstash/conf.d/):
input {
  beats {
    port => 10001                                  # beats input port
  }
}
output {
  if [fields][log_topics] == "nginx-172.28.18.75" {
    elasticsearch {
      hosts => ["172.28.18.69:9200"]               # elasticsearch address
      index => "nginx-172.28.18.75-%{+YYYY.MM.dd}" # index to create
    }
  }
}
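Note that the conditional has no else branch: events whose `[fields][log_topics]` does not match are simply not indexed. A Python sketch of that routing decision (the event shape is an assumption based on the Filebeat `fields` setting above):

```python
from datetime import datetime
from typing import Optional

def route(event: dict, now: datetime) -> Optional[str]:
    """Mimic the Logstash conditional: return the target index, or None to drop the event."""
    if event.get("fields", {}).get("log_topics") == "nginx-172.28.18.75":
        # index => "nginx-172.28.18.75-%{+YYYY.MM.dd}"
        return "nginx-172.28.18.75-" + now.strftime("%Y.%m.%d")
    return None  # no matching output block, so the event is discarded

event = {"fields": {"log_topics": "nginx-172.28.18.75"}, "message": "GET / 200"}
print(route(event, datetime(2019, 7, 11)))  # nginx-172.28.18.75-2019.07.11
```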
Restart the logstash service, then restart the filebeat service on the Nginx host:
[root@server-1 old]# systemctl restart logstash
[root@zabbix_server filebeat]# service filebeat restart
2019-07-11T14:57:02.728+0800 INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2019-07-11T14:57:02.729+0800 INFO instance/beat.go:475 Beat UUID: 1435865e-4392-45fe-86a4-72ea77d3c75d
2019-07-11T14:57:02.729+0800 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.4
2019-07-11T14:57:02.730+0800 INFO pipeline/module.go:76 Beat name: zabbix_server.jinglong
Config OK
Stopping filebeat:                                         [  OK  ]
Starting filebeat:
2019-07-11T14:57:02.895+0800 INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2019-07-11T14:57:02.895+0800 INFO instance/beat.go:475 Beat UUID: 1435865e-4392-45fe-86a4-72ea77d3c75d
2019-07-11T14:57:02.895+0800 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.4
2019-07-11T14:57:02.896+0800 INFO pipeline/module.go:76 Beat name: zabbix_server.jinglong
Config OK                                                  [  OK  ]
4. Create the index pattern in Kibana
Open http://172.28.18.69:5601 in a browser.
Management → Create index pattern
The nginx log index is already listed; create an index pattern for it.
Open Discover.
The nginx logs are now visible.