FileBeat + Kafka + Elasticsearch + Logstash + Kibana: High-Throughput Log Monitoring
(ELK) is an open-source log-management solution. The Zhibang project deployed three servers to production behind an nginx load balancer, so logs end up scattered across the individual (Tomcat) servers. If you manage dozens or hundreds of servers, the traditional approach of logging in to each machine in turn to read its logs quickly becomes tedious and inefficient, and it cannot keep up with more demanding query, sort, and aggregation needs at that scale. In addition, Zhibang has many interfaces, and querying their stored request/response payloads in a relational database is slow, so these are currently kept in Elasticsearch, a non-relational store.
The open-source real-time log-analysis platform ELK solves all of the problems above. ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana.
Logstash: collects and processes logs
Elasticsearch: indexes, analyzes, and stores logs
Kibana: visualizes logs as charts and dashboards
2.1 Installing Elasticsearch
Start ES as the elk user (starting it as root fails with an error),
so we create an elk directory and an elk user.
Create the elk user and the elk directory, and grant ownership of the directory to the elk user:
mkdir elk
useradd elk
chown -R elk elk
chgrp -R elk elk
Extract elasticsearch-7.1.0-linux-x86_64.tar.gz:
tar -zxvf elasticsearch-7.1.0-linux-x86_64.tar.gz
Switch to the elk user:
su elk
cd elasticsearch-7.1.0
cd config
Edit the configuration file elasticsearch.yml:
node.name: node-1 #name of this cluster node
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 10.10.0.16 #bind address so the node is reachable externally
cluster.initial_master_nodes: ["node-1"] #master-eligible node
http.cors.enabled: true
http.cors.allow-origin: "*"
Start:
./bin/elasticsearch &
ERROR: [3] bootstrap checks failed
[1]: max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[3] max number of threads [1024] for user [lish] likely too low, increase to at least [2048]
Fixes (numbered to match the bootstrap-check errors above):
[1] Raise the open-file limit: edit /etc/security/limits.conf (vim /etc/security/limits.conf) and add:
* soft nofile 65536
* hard nofile 65536
(log in again as the elk user for the new limits to take effect)
[2] Raise vm.max_map_count: edit /etc/sysctl.conf (vim /etc/sysctl.conf) and add:
vm.max_map_count=262144
then apply it with:
sysctl -p
[3] Raise the per-user thread limit: edit /etc/security/limits.d/90-nproc.conf (vim /etc/security/limits.d/90-nproc.conf) and change:
* soft nproc 1024
to:
* soft nproc 2048
Start again:
./bin/elasticsearch &
Open ip:port in a browser,
e.g. http://10.10.0.16:9200
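If a browser is not handy on the target host, the same check can be done programmatically. The sketch below parses the kind of JSON document the ES root endpoint returns; the field values here are illustrative for a 7.1.0 node, not captured from a real deployment:

```python
import json

# Abbreviated example of the JSON that GET http://10.10.0.16:9200/ returns;
# the values below are illustrative for an Elasticsearch 7.1.0 node.
response_body = """
{
  "name": "node-1",
  "cluster_name": "elasticsearch",
  "version": {"number": "7.1.0", "lucene_version": "8.0.0"},
  "tagline": "You Know, for Search"
}
"""

info = json.loads(response_body)
assert info["version"]["number"].startswith("7.1"), "unexpected ES version"
print(info["name"], info["version"]["number"])  # → node-1 7.1.0
```

A plain `curl http://10.10.0.16:9200` returns the same document.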
2.2 Installing Logstash
1. Extract the archive
tar -zxvf logstash-7.1.0.tar.gz
cd logstash-7.1.0/
2. Write the configuration file
vim config/log4j_to_es.conf
Enter the following content:
# For the detailed structure of this file, see:
# https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html
input {
  tcp {
    port => 4560
    codec => json_lines      # one JSON document per line
    mode => "server"
    host => "10.10.0.16"     # address the Logstash server listens on
  }
}
filter {
  # Only matched data are sent to output.
}
output {
  # For detailed elasticsearch output options, see:
  # https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
  elasticsearch {
    action => "index"            # the operation on ES
    hosts => "10.10.0.16:9200"   # Elasticsearch host, can be an array
    index => "oms%{+YYYY.MM}"    # create one index per month
  }
  # stdout { codec => rubydebug }
}
The Logstash input here is a tcp source listening on port 4560; keep it consistent with the application's logging configuration.
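On the application side, this tcp input pairs with the logstash-logback-encoder library, whose LogstashTcpSocketAppender writes one JSON document per line, which is what the json_lines codec above expects. A minimal logback.xml sketch (the appender name and log level are assumptions; adjust to your project):

```xml
<!-- Requires the net.logstash.logback:logstash-logback-encoder dependency. -->
<configuration>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Must match the host/port of the Logstash tcp input -->
        <destination>10.10.0.16:4560</destination>
        <!-- Emits one JSON document per line (json_lines) -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>
```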
Kafka as the data source:
input {
  kafka {
    bootstrap_servers => ["10.10.183.152:9092,10.10.183.153:9092,10.10.183.154:9092"]
    topics => ["monitor-log"]
    group_id => "logstash"
    codec => json { charset => "UTF-8" }
    # codec => "plain"
  }
}
filter {
  mutate {
    remove_field => [ "@version", "_id", "_score", "_type", "log.offset" ]
  }
  # Only matched data are sent to output.
}
output {
  elasticsearch {
    action => "index"               # the operation on ES
    hosts => "10.10.183.155:9200"   # Elasticsearch host, can be an array
    index => "prod-%{+YY.MM}"       # the index to write data to
  }
  # stdout { codec => rubydebug }
}
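The title also names Filebeat as the shipper feeding this Kafka topic. A minimal filebeat.yml sketch (the Tomcat log path is hypothetical; the hosts and topic mirror the kafka input above):

```yaml
# Ship Tomcat logs to the Kafka topic that Logstash consumes.
filebeat.inputs:
  - type: log
    paths:
      - /usr/local/tomcat/logs/*.log   # hypothetical log location

output.kafka:
  hosts: ["10.10.183.152:9092", "10.10.183.153:9092", "10.10.183.154:9092"]
  topic: "monitor-log"                 # must match the Logstash kafka input
```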
hap project logback configuration
Start Logstash, reading the configuration file (-f specifies it):
./bin/logstash -f config/log4j_to_es.conf &
2.3 Installing Kibana
1. Extract the archive
tar -zxvf kibana-7.1.0-linux-x86_64.tar.gz
cd kibana-7.1.0/
2. Configure Kibana (server port and Elasticsearch address)
cd config
vim kibana.yml
server.port: 5601
elasticsearch.hosts: ["http://10.10.0.16:9200"]
(in Kibana 7.x the setting is elasticsearch.hosts; elasticsearch.url was the pre-7.0 name)
3. Start Kibana:
./bin/kibana &
Open 10.10.0.16:5601 in a browser.
Choose "Create index pattern".
Once created, the pattern can be seen here.
All index patterns are listed here, where they can be deleted and otherwise managed.
Queries are run here, on the Discover page.
If the Logstash log shows an error such as the JSON parse error in the screenshot, edit the configuration file
and comment out the codec on the input.
If the input is JSON, Logstash parses it and stores each field separately;
otherwise the whole line is stored in the message field.
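That codec behavior can be sketched in a few lines of Python (a toy model of the choice described above, not Logstash's actual implementation):

```python
import json

def ingest(line):
    # With a JSON codec: parse the line so each key becomes a field.
    # Without it (or on parse failure): store the raw line under "message".
    try:
        event = json.loads(line)
        if not isinstance(event, dict):
            raise ValueError("not a JSON object")
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        event = {"message": line}
    return event

print(ingest('{"level": "INFO", "msg": "ok"}'))  # → {'level': 'INFO', 'msg': 'ok'}
print(ingest('plain text line'))                 # → {'message': 'plain text line'}
```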