I. Introduction
1. Logs mainly include system logs, application logs, and security logs. Operations and development staff use logs to learn about a server's hardware and software, and to find configuration errors and their causes. Analyzing logs regularly also reveals server load, performance, and security, so problems can be corrected promptly.
2. Logs are usually scattered across different machines. If you manage dozens or hundreds of servers and still inspect logs the traditional way, logging in to each machine in turn, the work is tedious and inefficient. The pressing need is centralized log management, for example the open-source syslog, which collects and aggregates the logs from all servers.
3. Once logs are centralized, searching and aggregating them becomes the next headache. Linux commands such as grep, awk, and wc cover basic retrieval and statistics, but for more demanding queries, sorting, and aggregation across a large fleet of machines, that approach quickly falls short.
4. The open-source real-time log analysis platform ELK solves all of the problems above. ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana. Official site: https://www.elastic.co/products
1. Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
2. Logstash is a fully open-source tool that collects and parses your logs and stores them for later use (for example, searching).
3. Kibana is likewise open-source and free. It provides a friendly web UI on top of the logs processed by Logstash and stored in Elasticsearch, helping you aggregate, analyze, and search important log data.
Installing ELK on a Mac with Homebrew
brew install elasticsearch (this installs version 7.10.2)
To have launchd start elasticsearch now and restart at login:
  brew services start elasticsearch
Or, if you don't want/need a background service you can just run:
  elasticsearch
==> Summary
🍺  /usr/local/Cellar/elasticsearch/7.10.2: 156 files, 113.5MB
==> Caveats
==> elasticsearch
Data:    /usr/local/var/lib/elasticsearch/
Logs:    /usr/local/var/log/elasticsearch/elasticsearch_yangjun.log
Plugins: /usr/local/var/elasticsearch/plugins/
Config:  /usr/local/etc/elasticsearch/
To have launchd start elasticsearch now and restart at login:
  brew services start elasticsearch
Or, if you don't want/need a background service you can just run:
  elasticsearch
yangjun@yangjun ~ %
brew install logstash (again targeting version 7.10.2)
Note: the Logstash version must match Elasticsearch. As the transcript below shows, Homebrew may pull a newer version (here 7.14.1); if you cannot get a matching bottle, download the tar package yourself.
Download: https://www.elastic.co/cn/downloads/past-releases/logstash-7-10-2
yangjun@yangjun ~ % brew install logstash
==> Downloading https://ghcr.io/v2/homebrew/core/logstash/manifests/7.14.1
Already downloaded: /Users/yangjun/Library/Caches/Homebrew/downloads/bb7e451818765e37a15312cf7b96238736a51bef9ad2d895a32e73a34d1bbe4b--logstash-7.14.1.bottle_manifest.json
==> Downloading https://ghcr.io/v2/homebrew/core/logstash/blobs/sha256:02db87c4194fd1ff3f2dd864b7c1deb3350a0c9e434b4f182845e752f8e18c03
Already downloaded: /Users/yangjun/Library/Caches/Homebrew/downloads/45f170bdc70e11cbeb337c7ef6787baedbd3f2844a69d09df6a4a6d80b59c757--logstash--7.14.1.big_sur.bottle.tar.gz
==> Reinstalling logstash
==> Pouring logstash--7.14.1.big_sur.bottle.tar.gz
==> Caveats
Configuration files are located in /usr/local/etc/logstash/
To start logstash:
  brew services start logstash
Or, if you don't want/need a background service you can just run:
  /usr/local/opt/logstash/bin/logstash
==> Summary
🍺  /usr/local/Cellar/logstash/7.14.1: 12,576 files, 289.1MB
yangjun@yangjun ~ %
Add a logstash.conf under the config directory:
input {
  beats {
    port => 5044
  }
  tcp {
    host => "127.0.0.1"
    port => 9250
    mode => "server"
    tags => ["tags"]
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
Start it: ./bin/logstash -f ./config/logstash.conf
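A quick way to check the tcp input defined above, assuming nc (netcat) is available: push one JSON line into port 9250, then list the indices in Elasticsearch.

# send a single JSON event to the tcp input on 9250
echo '{"message": "hello elk", "level": "INFO"}' | nc 127.0.0.1 9250
# the event is echoed by the stdout/rubydebug output, and a new index should appear:
curl 'http://localhost:9200/_cat/indices?v'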
brew install kibana (version 7.10.2)
yangjun@yangjun ~ % brew install kibana
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
==> New Formulae
jpdfbookmarks
==> Updated Formulae
Updated 7 formulae.
==> Updated Casks
Updated 12 casks.
Warning: node@10 has been deprecated because it is not supported upstream!
==> Downloading https://ghcr.io/v2/homebrew/core/node/10/manifests/10.24.1_1
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/node/10/blobs/sha256:84095e53ee
==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sh
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/kibana/manifests/7.10.2
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/kibana/blobs/sha256:c218ab10fca
==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sh
######################################################################## 100.0%
Warning: kibana has been deprecated because it is switching to an incompatible license!
==> Installing dependencies for kibana: node@10
==> Installing kibana dependency: node@10
==> Pouring node@10--10.24.1_1.big_sur.bottle.tar.gz
🍺  /usr/local/Cellar/node@10/10.24.1_1: 4,308 files, 53MB
==> Installing kibana
==> Pouring kibana--7.10.2.big_sur.bottle.tar.gz
==> Caveats
Config: /usr/local/etc/kibana/
If you wish to preserve your plugins upon upgrade, make a copy of
/usr/local/opt/kibana/plugins before upgrading, and copy it into the
new keg location after upgrading.
To have launchd start kibana now and restart at login:
  brew services start kibana
Or, if you don't want/need a background service you can just run:
  kibana
==> Summary
🍺  /usr/local/Cellar/kibana/7.10.2: 29,153 files, 300.8MB
==> Caveats
==> kibana
Config: /usr/local/etc/kibana/
If you wish to preserve your plugins upon upgrade, make a copy of
/usr/local/opt/kibana/plugins before upgrading, and copy it into the
new keg location after upgrading.
To have launchd start kibana now and restart at login:
  brew services start kibana
Or, if you don't want/need a background service you can just run:
  kibana
yangjun@yangjun ~ %
Installing ELK on macOS
Preparation
1. The most convenient way to install on a Mac is Homebrew, so install Homebrew first.
2. ELK needs a Java 8 environment; run java -version to check that the current Java is Java 8.
Install Elasticsearch
brew install elasticsearch && brew info elasticsearch
Start/stop/restart Elasticsearch:
brew services start elasticsearch
brew services stop elasticsearch
brew services restart elasticsearch
After starting, check in your favorite browser that it is running correctly on localhost and the default port: http://localhost:9200
The output should look like this:
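(The original screenshot is not reproduced here; a trimmed, representative response from a 7.10.2 node looks like the following, where the host name, cluster name, and UUID are placeholders.)

{
  "name" : "your-hostname",
  "cluster_name" : "elasticsearch_your-username",
  "cluster_uuid" : "some-generated-uuid",
  "version" : {
    "number" : "7.10.2",
    "build_flavor" : "default",
    "lucene_version" : "8.7.0"
  },
  "tagline" : "You Know, for Search"
}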
Install Logstash:
brew install logstash
Start/stop/restart Logstash:
brew services start logstash
brew services stop logstash
brew services restart logstash
Since we have not configured a Logstash pipeline yet, starting Logstash will not produce anything meaningful. We will come back to configuring Logstash in a later step.
Install Kibana
brew install kibana
Start Kibana and check that all the ELK services are running:
brew services start kibana
brew services list
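If everything started, brew services list should show all three services. The exact columns vary across Homebrew versions, but the output is along these lines (user and paths are placeholders):

Name          Status  User    Plist
elasticsearch started yangjun /Users/yangjun/Library/LaunchAgents/homebrew.mxcl.elasticsearch.plist
kibana        started yangjun /Users/yangjun/Library/LaunchAgents/homebrew.mxcl.kibana.plist
logstash      started yangjun /Users/yangjun/Library/LaunchAgents/homebrew.mxcl.logstash.plist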
Kibana needs a couple of configuration changes to work properly. Open the Kibana configuration file kibana.yml:
sudo vi /usr/local/etc/kibana/kibana.yml
Uncomment the directives that define the Kibana port and the Elasticsearch instance:
server.port: 5601
elasticsearch.url: "http://localhost:9200"
(On Kibana 7.x this setting has been renamed; use elasticsearch.hosts: ["http://localhost:9200"] instead.)
If all went well, open Kibana at localhost:5601/status. You should see something like this:
Congratulations, you have successfully installed ELK on your Mac!
Sending some data
You are now ready to send data to Elasticsearch and enjoy everything the stack has to offer. Below is an example of a Logstash pipeline that ships syslog entries into the stack. First, create a new Logstash configuration file:
sudo vim /etc/logstash/conf.d/syslog.conf
(With the Homebrew install, configuration files live under /usr/local/etc/logstash/ instead, as the install output above notes; adjust the path accordingly.)
Enter the following configuration:
input {
  file {
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    type => "syslog"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "syslog-demo"
  }
  stdout { codec => rubydebug }
}
Then restart Logstash (for the Homebrew service: brew services restart logstash).
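Before heading to Kibana you can confirm documents are arriving with a direct query against the index defined above:

curl 'http://localhost:9200/syslog-demo/_search?pretty&size=1'

A non-zero hits total means the pipeline is writing events.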
In Kibana's Management tab you should now see the newly created "syslog-demo" index produced by the new Logstash pipeline.
Enter it as the index pattern, and in the next step select the @timestamp field as the time filter field name.
We're all set! Open the Discover page and you will see your syslog data in Kibana.
Installing ELK on macOS - end
Deploy Logstash on every service whose logs you want to collect; these act as Logstash agents (Logstash shippers) that monitor, filter, and collect the logs and forward the filtered content to a Logstash indexer. The indexer gathers the logs together and hands them to the full-text search service Elasticsearch, where you can run custom searches, and Kibana combines those searches into web dashboards.
II. Installing Elasticsearch
1. Install the JDK
wget http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz
mkdir /usr/local/java
tar -zxf jdk-8u45-linux-x64.tar.gz -C /usr/local/java/
export JAVA_HOME=/usr/local/java/jdk1.8.0_45
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
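To make these variables survive new shells, append them to /etc/profile (or ~/.bashrc) and reload; a minimal sketch:

cat >> /etc/profile <<'EOF'
export JAVA_HOME=/usr/local/java/jdk1.8.0_45
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
EOF
source /etc/profile
java -version    # should report java version "1.8.0_45"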
2. Install Elasticsearch
wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.2.0/elasticsearch-2.2.0.tar.gz
Extract: tar -zxf elasticsearch-2.2.0.tar.gz -C ./
Install the elasticsearch-head plugin:
cd /data/program/software/elasticsearch-2.2.0
./bin/plugin install mobz/elasticsearch-head
Install the elasticsearch-kopf plugin:
./bin/plugin install lmenezes/elasticsearch-kopf
Create the data and logs directories for Elasticsearch:
mkdir data
mkdir logs
Edit the Elasticsearch configuration:
cd config/
Back up the original file first:
cp elasticsearch.yml elasticsearch.yml_back
Edit the configuration file:
vim elasticsearch.yml
The configuration is as follows:
cluster.name: dst98                  # cluster name
node.name: node-1                    # node name
path.data: /data/program/software/elasticsearch-2.2.0/data
path.logs: /data/program/software/elasticsearch-2.2.0/logs
network.host: 10.15.0.98             # IP address this host binds to
http.port: 9200                      # HTTP port
Start Elasticsearch: ./bin/elasticsearch
It fails with the error below, which means Elasticsearch refuses to run as root; create a regular user and start it under that account.
[root@dst98 elasticsearch-2.2.0]# ./bin/elasticsearch
Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:93)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:144)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
Add the user and group:
#groupadd search
#useradd -g search search
Change the owner and group of the installation directory (including data and logs) to search:
#chown -R search:search /data/program/software/elasticsearch-2.2.0
Then switch to that user and start the program:
su search
./bin/elasticsearch
To start it in the background: nohup ./bin/elasticsearch &
After a successful start, visiting it in a browser returns something like the following:
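(The screenshot is omitted in the original; a trimmed 2.x response looks like this, with the names coming from elasticsearch.yml above.)

{
  "name" : "node-1",
  "cluster_name" : "dst98",
  "version" : {
    "number" : "2.2.0"
  },
  "tagline" : "You Know, for Search"
}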
With the head plugin installed you can view basic cluster information in the browser; with ES 2.x the plugin is served at http://10.15.0.98:9200/_plugin/head.
III. Installing Kibana
Download Kibana:
wget https://download.elastic.co/kibana/kibana/kibana-4.4.0-linux-x64.tar.gz
Extract it:
tar -zxf kibana-4.4.0-linux-x64.tar.gz -C ./
Rename it:
mv kibana-4.4.0-linux-x64 kibana-4.4.0
Back up the configuration file first:
cd /data/program/software/kibana-4.4.0/config
cp kibana.yml kibana.yml_back
Modify the configuration file:
server.port: 5601
server.host: "10.15.0.98"
elasticsearch.url: "http://10.15.0.98:9200"    # the IP of the Elasticsearch server
kibana.defaultAppId: "discover"
elasticsearch.requestTimeout: 300000
elasticsearch.shardTimeout: 0
Start the program:
nohup ./bin/kibana &
IV. Configuring Logstash
Download Logstash onto every server whose logs will be collected, as well as onto the machine where ELK is installed.
wget https://download.elastic.co/logstash/logstash/logstash-2.2.0.tar.gz
Extract: tar -zxf logstash-2.2.0.tar.gz -C ./
Run the following command as a test:
./bin/logstash -e 'input { stdin{} } output { stdout {} }'
Logstash startup completed
Hello World!    # type some input
2015-07-15T03:28:56.938Z noc.vfast.com Hello World!    # the output format
Note: the -e flag lets Logstash take its configuration directly from the command line. Press CTRL-C to stop the running Logstash.
1. Configure Logstash on the Elasticsearch host to read logs from Redis and write them to Elasticsearch
Enter the logstash directory and create a new configuration file:
cd logstash-2.2.0
touch logstash-indexer.conf    # any file name will do
Write the following configuration into the new file. Input and output blocks can be multiplied to match the number of log servers:
input {
  redis {
    data_type => "list"
    key => "mid-dst-oms-155"
    host => "10.15.0.96"
    port => 6379
    db => 0
    threads => 10
  }
}
output {
  if [type] == "mid-dst-oms-155" {
    elasticsearch {
      hosts => "10.15.0.98"
      index => "mid-dst-oms-155"
      codec => "json"
    }
  }
}
Start Logstash:
nohup ./bin/logstash -f logstash-indexer.conf -l logs/logstash.log &
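To check that the indexer keeps up with the agents, you can watch the length of the Redis list it consumes (assuming redis-cli is available on the Redis host):

redis-cli -h 10.15.0.96 -p 6379 llen mid-dst-oms-155

If the agents are shipping and the indexer is draining the list, the number stays near zero; a steadily growing value means the indexer is falling behind.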
2. Configure Logstash on the client machines to read the logs and write them into Redis
Enter the logstash directory and create a new configuration file:
cd logstash-2.2.0
touch logstash_agent.conf    # any file name will do
Write the following configuration into the new file. Again, add input and output blocks to match the number of log servers:
input {
  file {
    path => ["/data/program/logs/MID-DST-OMS/mid-dst-oms.txt"]
    type => "mid-dst-oms-155"
  }
}
output {
  redis {
    host => "125.35.5.98"
    port => 6379
    data_type => "list"
    key => "mid-dst-oms-155"
  }
}
Start Logstash:
nohup ./bin/logstash -f logstash_agent.conf -l logs/logstash.log &
Notes:
Parameters of the Logstash file input (a sketch that uses them follows this list):
1. start_position: set to beginning to read the file from the start.
2. path: the path(s) of the file(s) to read.
3. type: a user-defined type such as tradelog; any value you choose.
4. codec: the character encoding of the file being read, e.g. GB2312 or UTF-8.
5. discover_interval: how often to check the watched path for new files; the default is 15 seconds.
6. sincedb_path: the file recording how far each source file has been read; by default a hidden file under the user's home directory.
7. sincedb_write_interval: how often the read position is recorded; the default is every 15 seconds.
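A sketch of a file input that puts these options together (paths and values are illustrative, not from the setup above):

input {
  file {
    path => ["/data/program/logs/trade/tradelog.txt"]
    type => "tradelog"
    start_position => "beginning"
    codec => plain { charset => "GB2312" }
    discover_interval => 15
    sincedb_path => "/data/program/logs/.sincedb_tradelog"
    sincedb_write_interval => 15
  }
}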
2. Preface
2.1. Where things stand
In the past, looking at logs meant logging in to the server over SSH, mostly with less or tail. If a service was deployed on several machines, you had to log in to each of them, and you had to mind the timestamps: a single operation could produce many log lines that were not all on the same machine, so you had to reconstruct the user's actions by ordering entries in time while hopping between hosts. Searching was also inconvenient: you had to be fluent with vi and less, and it was easy to lose your place. To simplify log retrieval, you can collect and index the logs, which makes everything much easier; anyone who has used Lucene knows how fast that kind of search is. Practically every Internet company has its own log management platform and monitoring platform (Zabbix, for example), whether self-built or provided by a cloud vendor such as Alibaba Cloud. Below, we use ELK to build a reasonably realistic log management platform.
2.2. Log format
Our logs currently look like this; each line has a format similar to:
2018-08-22 00:34:51.952 [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-1-C-1] [com.cjs.handler.MessageHandler][39] - 監聽到注冊事件消息:
2.3. logback.xml
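The original shows logback.xml only as a screenshot. A minimal sketch of an appender that produces the line format from section 2.2 and archives daily (file names and paths are assumptions based on section 2.4):

<configuration>
  <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/data/logs/oh-coupon/info.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- roll the file once a day -->
      <fileNamePattern>/data/logs/oh-coupon/info.%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
    <encoder>
      <!-- produces: 2018-08-22 00:34:51.952 [INFO ] [thread] [logger][39] - message -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5level] [%thread] [%logger][%line] - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="INFO_FILE"/>
  </root>
</configuration>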
2.4. Environment
In this example, each system writes its logs under /data/logs/${projectName}, for example /data/logs/oh-coupon and /data/logs/oh-promotion.
Filebeat, Logstash, Elasticsearch, and Kibana all run on one virtual machine, each as a single instance, and no other middleware is involved.
Because the logs are archived daily and live output always goes to info.log or error.log, Filebeat only needs to watch those two files.
3. Filebeat configuration
Filebeat's main configuration is the filebeat.inputs and output.logstash sections of filebeat.yml:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  # files to harvest
  paths:
    - /data/logs/oh-coupon/info.log
    - /data/logs/oh-coupon/error.log
  # add an extra field to every event
  fields:
    log_source: oh-coupon
  fields_under_root: true
  # multiline handling:
  # merge lines that do not start with a "yyyy-MM-dd" date into the previous line
  multiline.pattern: ^\d{4}-\d{1,2}-\d{1,2}
  multiline.negate: true
  multiline.match: after
  # scan for file updates every 5 seconds
  scan_frequency: 5s
  # close the file handle if the file has not been updated for 1 hour
  close_inactive: 1h
  # ignore files older than 24 hours
  #ignore_older: 24h

- type: log
  enabled: true
  paths:
    - /data/logs/oh-promotion/info.log
    - /data/logs/oh-promotion/error.log
  fields:
    log_source: oh-promotion
  fields_under_root: true
  multiline.pattern: ^\d{4}-\d{1,2}-\d{1,2}
  multiline.negate: true
  multiline.match: after
  scan_frequency: 5s
  close_inactive: 1h
  ignore_older: 24h

#================================ Outputs =====================================

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
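Before starting Filebeat it is worth validating this file; Filebeat 6.x ships a test subcommand:

./filebeat test config -c filebeat.yml    # checks that the YAML parses
./filebeat test output -c filebeat.yml    # checks the connection to Logstash on :5044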
4. Logstash configuration
4.1. logstash.yml
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "123456"
xpack.monitoring.elasticsearch.url: ["http://localhost:9200"]
4.2. Pipeline configuration
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:log_date}\s+\[%{LOGLEVEL:log_level}" }
  }
  date {
    match => ["log_date", "yyyy-MM-dd HH:mm:ss.SSS"]
    target => "@timestamp"
  }
}
output {
  if [log_source] == "oh-coupon" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "oh-coupon-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "123456"
    }
  }
  if [log_source] == "oh-promotion" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "oh-promotion-%{+YYYY.MM.dd}"
      user => "logstash_internal"
      password => "123456"
    }
  }
}
4.3. Plugins
Logstash offers many plugins for input, filtering, and output.
Plugins were not covered in my earlier posts, and since they are pure configuration I do not plan a separate article about them, but they deserve emphasis here. The following pages are particularly helpful:
https://www.elastic.co/guide/en/logstash/current/input-plugins.html
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html
https://www.elastic.co/guide/en/logstash/current/filebeat-modules.html
https://www.elastic.co/guide/en/logstash/current/output-plugins.html
https://www.elastic.co/guide/en/logstash/current/logstash-config-for-filebeat-modules.html
https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
This example uses the beats input plugin, the grok and date filter plugins, and the elasticsearch output plugin.
The most important of these is grok: with it we can extract the fields we want from a message.
grok
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
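As a quick illustration (not from the original post), the pipeline's grok pattern can be tried against the sample line from section 2.2 with a throwaway stdin pipeline:

bin/logstash -e '
input { stdin {} }
filter { grok { match => { "message" => "%{TIMESTAMP_ISO8601:log_date}\s+\[%{LOGLEVEL:log_level}" } } }
output { stdout { codec => rubydebug } }
'

Pasting the sample line prints an event that now carries log_date => "2018-08-22 00:34:51.952" and log_level => "INFO" alongside the original message field.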
date
Field references
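The date filter (used in section 4.2) parses log_date and writes the result to @timestamp. Field references come in two forms; a reminder sketch, not tied to this setup:

# conditionals use bracket syntax, one bracket pair per nesting level:
if [log_source] == "oh-coupon" { stdout { } }
if [host][name] == "web-1" { stdout { } }
# inside strings, sprintf-style references interpolate field values and event dates:
index => "oh-coupon-%{+YYYY.MM.dd}"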
5. Elasticsearch configuration
5.1. elasticsearch.yml
xpack.security.enabled: true
Everything else is left at its default.
6. Kibana configuration
6.1. kibana.yml
server.port: 5601
server.host: "192.168.101.5"
elasticsearch.url: "http://localhost:9200"
kibana.index: ".kibana"
elasticsearch.username: "kibana"
elasticsearch.password: "123456"
xpack.security.enabled: true
xpack.security.encryptionKey: "4297f44b13955235245b2497399d7a93"
7. Starting the services
7.1. Start Elasticsearch
[root@localhost ~]# su - cheng
[cheng@localhost ~]$ cd $ES_HOME
[cheng@localhost elasticsearch-6.3.2]$ bin/elasticsearch
7.2. Start Kibana
[cheng@localhost kibana-6.3.2-linux-x86_64]$ bin/kibana
7.3. Start Logstash
[root@localhost logstash-6.3.2]# bin/logstash -f second-pipeline.conf --config.test_and_exit
[root@localhost logstash-6.3.2]# bin/logstash -f second-pipeline.conf --config.reload.automatic
7.4. Start Filebeat
[root@localhost filebeat-6.3.2-linux-x86_64]# rm -f data/registry    # reset the registry so the files are re-read from the beginning
[root@localhost filebeat-6.3.2-linux-x86_64]# ./filebeat -e -c filebeat.yml -d "publish"
8. Demo