Building an ELK Log System, Step by Step

ELK is the acronym for Elasticsearch, Logstash, and Kibana, three products used together to collect and search logs. In this article we will set up and try out an ELK-based log service.

Environment Planning

This walkthrough uses two machines (or two VMware virtual machines), both running CentOS 7. Their roles, specs, and addresses are:

hostname      | IP address      | Role                                                            | Specs
------------- | --------------- | --------------------------------------------------------------- | -----------------
elk-server    | 192.168.119.132 | ELK server: receives logs and provides log search               | 2 cores, 4 GB RAM
nginx-server  | 192.168.119.133 | Nginx server: access logs are shipped to Logstash via Filebeat  | 2 cores, 2 GB RAM

Deployment Overview

At runtime the pieces interact as follows:

1. Business requests reach Nginx on the nginx-server machine;
2. Nginx serves each request and appends a record to access.log;
3. Filebeat picks up the new log lines and ships them to Logstash on port 5044;
4. Logstash forwards the parsed log entries to Elasticsearch on local port 9200;
5. Users search the logs through Kibana in a browser, on server port 5601;
6. Kibana queries Elasticsearch on port 9200.

Install the JDK

First, install JDK 8 on the elk-server machine.

The ELK documentation (https://www.elastic.co/guide/en/elasticsearch/hadoop/6.2/requirements.html) recommends JDK version 8.

For JDK 8 installation steps on CentOS 7, see "Installing JDK 8 on CentOS 7".

Create a User

Elasticsearch refuses to start as root, so create a dedicated user:

1. Create the group: groupadd elasticsearch;

2. Create the user and add it to the group: useradd elasticsearch -g elasticsearch;

3. Give the elasticsearch user ownership of the Elasticsearch directory: chown -R elasticsearch:elasticsearch /usr/local/work/elasticsearch-6.2.3;

System Settings

1. Set the hostname: open /etc/hostname and change its content to elk-server;
2. Stop the firewall (if you must keep it running for other reasons, at least do not block port 80): systemctl stop firewalld.service
3. Keep the firewall from starting at boot: systemctl disable firewalld.service
4. Open /etc/security/limits.conf and append the following four lines:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

5. Open /etc/sysctl.conf and append this line:

vm.max_map_count=655360

6. Apply the sysctl change: sysctl -p
7. Reboot the machine.

elk-server: Prepare the Installation Files
Download the following files from the ELK site, https://www.elastic.co/downloads:
1. elasticsearch-6.2.3.tar.gz;
2. logstash-6.2.3.tar.gz;
3. kibana-6.2.3-linux-x86_64.tar.gz;

Alternatively, download all three with these three commands on the CentOS 7 command line:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.3.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.3.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.3-linux-x86_64.tar.gz

After downloading, create the directory /usr/local/work and extract all three archives into it, producing these directories:

1. /usr/local/work/elasticsearch-6.2.3

2. /usr/local/work/logstash-6.2.3

3. /usr/local/work/kibana-6.2.3-linux-x86_64

Start Elasticsearch

1. Switch to the elasticsearch user: su elasticsearch;
2. Enter the directory /usr/local/work/elasticsearch-6.2.3;
3. Start the service in the background: bin/elasticsearch -d;
4. Watch the startup log with tail -f /usr/local/work/elasticsearch-6.2.3/logs/elasticsearch.log; after roughly five to ten minutes the service comes up with output like:

[2018-04-07T10:12:27,392][INFO ][o.e.n.Node               ] initialized
[2018-04-07T10:12:27,392][INFO ][o.e.n.Node               ] [MNb1nGq] starting ...
[2018-04-07T10:12:39,676][INFO ][o.e.t.TransportService   ] [MNb1nGq] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2018-04-07T10:12:42,772][INFO ][o.e.c.s.MasterService    ] [MNb1nGq] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {MNb1nGq}{MNb1nGq6Tn6VskdKFQckow}{_DglQhgmRsGAF2D7eTfVfg}{127.0.0.1}{127.0.0.1:9300}
[2018-04-07T10:12:42,776][INFO ][o.e.c.s.ClusterApplierService] [MNb1nGq] new_master {MNb1nGq}{MNb1nGq6Tn6VskdKFQckow}{_DglQhgmRsGAF2D7eTfVfg}{127.0.0.1}{127.0.0.1:9300}, reason: apply cluster state (from master [master {MNb1nGq}{MNb1nGq6Tn6VskdKFQckow}{_DglQhgmRsGAF2D7eTfVfg}{127.0.0.1}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-04-07T10:12:42,817][INFO ][o.e.g.GatewayService     ] [MNb1nGq] recovered [0] indices into cluster_state
[2018-04-07T10:12:42,821][INFO ][o.e.h.n.Netty4HttpServerTransport] [MNb1nGq] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-04-07T10:12:42,821][INFO ][o.e.n.Node               ] [MNb1nGq] started

5. Check that the service responds with curl 127.0.0.1:9200; the expected response:

[elasticsearch@elk-server work]$ curl 127.0.0.1:9200
{
  "name" : "MNb1nGq",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "ZHkI7PCQTnCqMBM6rhyT5g",
  "version" : {
    "number" : "6.2.3",
    "build_hash" : "c59ff00",
    "build_date" : "2018-03-13T10:06:29.741383Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
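The JSON above can also be checked programmatically; here is a minimal sketch, where the payload is an abridged copy of the curl response shown above:

```python
import json

# Version info as returned by `curl 127.0.0.1:9200` (abridged copy of the response above).
payload = '''
{
  "name": "MNb1nGq",
  "cluster_name": "elasticsearch",
  "version": {
    "number": "6.2.3",
    "lucene_version": "7.2.1"
  },
  "tagline": "You Know, for Search"
}
'''

info = json.loads(payload)
# Simple sanity check that the node runs the expected release line.
assert info["version"]["number"].startswith("6.2"), "unexpected Elasticsearch version"
print(info["name"], info["version"]["number"])
```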

Elasticsearch is now up; next, Logstash.

Configure and Start Logstash

1. In the directory /usr/local/work/logstash-6.2.3, create a file named default.conf with the following content:

# Listen on port 5044 for Beats input
input {
    beats {
        port => "5044"
    }
}
# Parse and enrich the data
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
        source => "clientip"
    }
}
# Output to local port 9200, where the Elasticsearch service is listening
output {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
    }
}
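To illustrate what the %{COMBINEDAPACHELOG} grok pattern pulls out of an access-log line, here is a rough Python re-implementation. The regex is a simplified approximation of the real grok pattern (not Logstash's actual definition), and the sample log line is made up:

```python
import re

# Simplified approximation of grok's COMBINEDAPACHELOG pattern.
COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# A made-up Nginx access-log line in the "combined" format.
line = ('192.168.119.1 - - [07/Apr/2018:10:30:00 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"')

m = COMBINED.match(line)
fields = m.groupdict()
# These named fields mirror what the grok filter adds to each event,
# and "clientip" is what the geoip filter above uses as its source.
print(fields["clientip"], fields["verb"], fields["response"])
```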

2. Start Logstash in the background: nohup bin/logstash -f default.conf --config.reload.automatic &

3. Watch the startup log: tail -f logs/logstash-plain.log. A successful start looks like this:

[2018-04-07T10:56:28,143][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-04-07T10:56:28,870][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-07T10:56:33,639][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-04-07T10:56:34,628][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2018-04-07T10:56:34,650][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2018-04-07T10:56:35,147][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2018-04-07T10:56:35,245][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-04-07T10:56:35,248][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-04-07T10:56:35,304][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-04-07T10:56:35,333][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-04-07T10:56:35,415][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[2018-04-07T10:56:35,786][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/local/work/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-04-07T10:56:36,727][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-04-07T10:56:36,902][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x427aed17 run>"}
[2018-04-07T10:56:36,967][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-04-07T10:56:37,083][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}

Configure and Start Kibana

1. Open the Kibana config file /usr/local/work/kibana-6.2.3-linux-x86_64/config/kibana.yml, find the line below, and change it as shown:

#server.host: "localhost"
# change it to:
server.host: "192.168.119.132"

This lets other machines reach the Kibana service in a browser.

2. Enter the Kibana directory: /usr/local/work/kibana-6.2.3-linux-x86_64

3. Start it in the background: nohup bin/kibana &

4. Watch the startup log: tail -f nohup.out

5. Output like the following indicates a successful start:

{"type":"log","@timestamp":"2018-04-07T04:44:59Z","tags":["status","plugin:elasticsearch@6.2.3","info"],"pid":3206,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-04-07T04:44:59Z","tags":["status","plugin:console@6.2.3","info"],"pid":3206,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-04-07T04:45:01Z","tags":["status","plugin:timelion@6.2.3","info"],"pid":3206,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-04-07T04:45:01Z","tags":["status","plugin:metrics@6.2.3","info"],"pid":3206,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-04-07T04:45:01Z","tags":["listening","info"],"pid":3206,"message":"Server running at http://localhost:5601"}
{"type":"log","@timestamp":"2018-04-07T04:45:01Z","tags":["status","plugin:elasticsearch@6.2.3","info"],"pid":3206,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

6. Browse to http://192.168.119.132:5601 to see the Kibana landing page.

All ELK services are now running. Next we ship application logs up from the other machine, nginx-server.

Firewall

First, stop the firewall on nginx-server:

systemctl stop firewalld.service && systemctl disable firewalld.service

Install Nginx

Install and start the Nginx service on nginx-server; see "Installing Nginx 1.10.1 on CentOS 7" for steps.

Filebeat

1. Create the directory /usr/local/work on nginx-server;

2. In /usr/local/work, download the Filebeat package:

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.3-linux-x86_64.tar.gz

3. Extract it: tar -zxvf filebeat-6.2.3-linux-x86_64.tar.gz

4. Open /usr/local/work/filebeat-6.2.3-linux-x86_64/filebeat.yml and locate the filebeat.prospectors section;

5. Change enabled: false to enabled: true;

6. Change the path entry - /var/log/*.log to - /usr/local/nginx/logs/*.log;

7. Still in filebeat.yml, comment out the default output.elasticsearch section (the output.elasticsearch: line and the hosts: line below it) by prefixing each with "#";

8. Uncomment the output.logstash: line and its hosts: line, then change the hosts value from ["localhost:5044"] to ["192.168.119.132:5044"], so that Filebeat ships to Logstash on elk-server;
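A sketch of the resulting output section of filebeat.yml, assuming Filebeat ships to Logstash's Beats port 5044 as described in the deployment overview (the stock file's surrounding comments are omitted):

```yaml
# Default Elasticsearch output, now commented out
#output.elasticsearch:
  #hosts: ["localhost:9200"]

# Logstash output, now enabled and pointed at elk-server
output.logstash:
  hosts: ["192.168.119.132:5044"]
```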

9. Start Filebeat: ./filebeat -e -c filebeat.yml -d "publish"

Filebeat is now running; next, verify the pipeline end to end.

Create an Index Pattern

1. Hit the Nginx service a few times in a browser (http://192.168.119.133) to generate some access logs;

2. Open Kibana at http://192.168.119.132:5601 and click Discover in the left-hand menu: the collected access logs should already be listed;

3. Enter logstash-* as the index pattern and click "Next step";

4. Select a Time Filter field and click "Create index pattern";

5. Kibana confirms that the index pattern was created;

6. Click "Discover" again to browse the latest log entries.

We can now search Nginx access logs in ELK; next, let's bring in Tomcat's logs as well.

Install and Start Tomcat

1. Make sure JDK 8 is installed on nginx-server;

2. In /usr/local/work/, download Tomcat:

wget https://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-7/v7.0.85/bin/apache-tomcat-7.0.85.zip

3. Extract the archive: unzip apache-tomcat-7.0.85.zip

4. Make the startup scripts executable: chmod a+x /usr/local/work/apache-tomcat-7.0.85/bin/*.sh

5. Start Tomcat: /usr/local/work/apache-tomcat-7.0.85/bin/startup.sh

6. Browse to http://192.168.119.133:8080 to confirm Tomcat started successfully.


7. Visit one of Tomcat's example servlets: http://192.168.119.133:8080/examples/servlets/servlet/RequestInfoExample.


Tomcat is up; the last step is feeding its access logs into ELK.

Ship Tomcat Access Logs to ELK

1. Open the Filebeat config /usr/local/work/filebeat-6.2.3-linux-x86_64/filebeat.yml and add a second prospector node under filebeat.prospectors:, with this content:

- type: log
  enabled: true
  paths:
    - /usr/local/work/apache-tomcat-7.0.85/logs/localhost_access_log.*.txt

The configured filebeat.yml now contains two prospectors of type log.
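As a sketch, assuming the Nginx log path configured earlier in this walkthrough, the prospectors section now looks roughly like this:

```yaml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/*.log
- type: log
  enabled: true
  paths:
    - /usr/local/work/apache-tomcat-7.0.85/logs/localhost_access_log.*.txt
```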


2. Stop the running Filebeat process, then start it again with ./filebeat -e -c filebeat.yml -d "publish";

3. Tomcat's access logs are now searchable in Kibana; searching for the keyword "RequestInfoExample" also finds the example-servlet requests.

That completes the ELK 6.2.3 setup and log shipping. To onboard more application servers later, just install and configure Filebeat on them the same way.
