5. Collecting Logs with Logstash (Part 1)


1. Collect a single system log and output it to a file

Prerequisite: the logstash user needs read permission on the log file being collected and write permission on the output file.

Point logstash at the config directory (in /etc/logstash/logstash.yml):

path.config: /etc/logstash/conf.d/*

  

1.1 Logstash configuration file

[root@linux-node1 ~]# cat /etc/logstash/conf.d/system-log.conf
input {
  file {
    type => "messagelog"
    path => "/var/log/messages"
    start_position => "beginning"

  }
}

output { 
  file {
    path => "/tmp/%{type}.%{+yyyy.MM.dd}"
  }
}
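The output path above uses Logstash's sprintf field-reference syntax: %{type} is replaced by the event's type field, and %{+yyyy.MM.dd} by the event's timestamp. A rough Python sketch of that expansion (the function and its Joda-to-strftime token mapping are illustrative only, not Logstash's actual implementation):

```python
import re
from datetime import datetime

def expand_path(pattern, event, ts):
    # Replace %{field} with the event field, %{+...} with a formatted date.
    def repl(m):
        key = m.group(1)
        if key.startswith("+"):  # date pattern, e.g. +yyyy.MM.dd
            # translate the Joda-style tokens used here into strftime
            fmt = key[1:].replace("yyyy", "%Y").replace("MM", "%m").replace("dd", "%d")
            return ts.strftime(fmt)
        return str(event.get(key, ""))  # plain field reference, e.g. type
    return re.sub(r"%\{([^}]+)\}", repl, pattern)

event = {"type": "messagelog"}
print(expand_path("/tmp/%{type}.%{+yyyy.MM.dd}", event, datetime(2017, 11, 7)))
# -> /tmp/messagelog.2017.11.07
```

This is why the file created later in this section is named /tmp/messagelog.2017.11.07.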

1.2 Check the configuration file syntax (this takes a while)

/usr/share/logstash/bin/logstash -f  /etc/logstash/conf.d/system-log.conf  -t

Tip: type the configuration file path by hand (with tab completion); otherwise you may get a "file not found" error.

systemctl restart logstash.service # restart after changing the config; this is a test setup, so restart is used here. Prefer reload in production.

1.3 Generate data and verify

echo 123 >> /var/log/messages # nothing seems to happen yet

1.4 Checking the logs is essential

 tail /var/log/logstash/logstash-plain.log

[root@linux-node1 logstash]# tailf logstash-plain.log 
[2017-11-07T11:54:07,310][INFO ][logstash.pipeline        ] Pipeline main started
[2017-11-07T11:54:07,454][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-11-07T11:58:10,173][WARN ][logstash.runner          ] SIGTERM received. Shutting down the agent.
[2017-11-07T11:58:10,176][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}
[2017-11-07T11:58:23,890][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2017-11-07T11:58:23,893][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2017-11-07T11:58:24,172][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-11-07T11:58:24,401][WARN ][logstash.inputs.file     ] failed to open /var/log/messages: Permission denied - /var/log/messages
[2017-11-07T11:58:24,413][INFO ][logstash.pipeline        ] Pipeline main started
[2017-11-07T11:58:24,459][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Grant read permission on the file:

chmod  644 /var/log/messages
[root@linux-node1 logstash]# ls -l /tmp/messagelog.2017.11.07 
-rw-r--r--. 1 logstash logstash 2899901 Nov  7 12:03 /tmp/messagelog.2017.11.07

2. Collecting multiple log files with logstash

 

[root@linux-node1 logstash]# cat /etc/logstash/conf.d/system-log.conf 
input {
  file {
    path => "/var/log/messages" # path to the log file
    type => "systemlog" # unique event type
    start_position => "beginning" # where to start reading on the first collection
    stat_interval => "3" # interval in seconds between file checks
  }
  file {
    path => "/var/log/secure"
    type => "securelog"
    start_position => "beginning"
    stat_interval => "3"
  }
}

output { 
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "system-log-%{+YYYY.MM.dd}"
    }}
  if [type] == "securelog" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "secury-log-%{+YYYY.MM.dd}"
    }}    
}
[root@linux-node1 logstash]# systemctl restart logstash.service # the service must be restarted
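The conditional output above routes each event by its type field to a different Elasticsearch index. A minimal Python sketch of that routing logic (the function name and the drop-on-no-match behaviour are illustrative):

```python
from datetime import datetime

def route(event, ts):
    # Pick the index name based on the event's "type" field, mirroring the
    # if [type] == ... conditionals in the output block.
    date = ts.strftime("%Y.%m.%d")
    if event.get("type") == "systemlog":
        return "system-log-" + date
    if event.get("type") == "securelog":
        return "secury-log-" + date
    return None  # no condition matches: the event is written nowhere

print(route({"type": "systemlog"}, datetime(2017, 11, 7)))  # system-log-2017.11.07
```

Note that an event whose type matches neither condition is simply not sent to any output, which is why the type set in each file input must match the conditionals exactly.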

Grant permission:

chmod 644 /var/log/secure

 

 echo "test" >> /var/log/secure
 echo "test" >> /var/log/messages

Check the indices at 192.168.56.11:9100; they now exist.

2.1 Add the system-log index in the Kibana UI

 

 

 

3. Collecting Tomcat Java logs with logstash

Collect the Tomcat access log and error log for real-time statistics, searched and displayed in Kibana. Each Tomcat server runs logstash to collect its logs and forwards them to elasticsearch for analysis, with the results presented through Kibana. The configuration steps are as follows.

3.1 Deploy the Tomcat service

A Java environment is required; create a custom web page for testing.

yum install jdk-8u121-linux-x64.rpm
cd /usr/local/src
wget http://mirrors.shuosc.org/apache/tomcat/tomcat-8/v8.5.23/bin/apache-tomcat-8.5.23.tar.gz
tar xf apache-tomcat-8.5.23.tar.gz
mv apache-tomcat-8.5.23 /usr/local/
ln -s /usr/local/apache-tomcat-8.5.23/ /usr/local/tomcat
cd /usr/local/tomcat/webapps/
mkdir /usr/local/tomcat/webapps/webdir
echo "Tomcat Page" > /usr/local/tomcat/webapps/webdir/index.html
../bin/catalina.sh  start
#[root@linux-node1 webapps]# netstat -plunt | grep 8080
#tcp6       0      0 :::8080                 :::*                    LISTEN      19879/java 

Check that the page is reachable.

3.2 Convert the Tomcat access log to JSON

 vim /usr/local/tomcat/conf/server.xml

 <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="tomcat_access_log" suffix=".log"
               pattern="{"clientip":"%h","ClientUser":"%l","authenticated":"%u","AccessTime":"%t","method":"%r","status":"%s","SendBytes":"%b","Query?string":"%q","partner":"%{Referer}i","AgentVersion":"%{User-Agent}i"}"/> 

  

./bin/catalina.sh  stop
 rm -rf  /usr/local/tomcat/logs/*
 ./bin/catalina.sh  start 
#tailf /usr/local/tomcat/logs/tomcat_access_log.2017-11-07.log

Verify that the log lines are valid JSON, for example at:

http://www.kjson.com/

How do you extract the IP from a log line?

I won't cover that here; a little basic Python or JavaScript will get you there.

Python calls this structure a dict; JavaScript calls it an object.
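Since each access-log line is now a JSON object, extracting a field such as the client IP in Python is a single json.loads() call. The sample line below is fabricated; only its keys follow the Valve pattern configured above:

```python
import json

# Hypothetical access-log line with the same keys as the Valve pattern.
line = ('{"clientip":"192.168.56.1","ClientUser":"-","authenticated":"-",'
        '"AccessTime":"[07/Nov/2017:12:00:00 +0800]","method":"GET /webdir/ HTTP/1.1",'
        '"status":"200","SendBytes":"12","Query?string":"","partner":"-",'
        '"AgentVersion":"curl/7.29.0"}')

entry = json.loads(line)      # parse the JSON line into a dict
print(entry["clientip"])      # 192.168.56.1
```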

3.3 Install logstash on the Tomcat server to collect Tomcat and system logs:

Requires Tomcat deployed plus logstash installed and configured.

# cat /etc/logstash/conf.d/tomcat.conf 
input {
  file {
    path => "/usr/local/tomcat/logs/tomcat_access_log.*.log"  # must match the prefix and suffix configured in server.xml (".log", not ".txt")
    start_position => "end"
    type => "tomct-access-log"
  }
  file { 
    path => "/var/log/messages"
    start_position => "end"
    type => "system-log"
 }
}

output {
  if [type] == "tomct-access-log" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "logstash-tomcat-5616-access-%{+YYYY.MM.dd}"
      codec => "json"
  }}

  if [type] == "system-log" {
    elasticsearch {
      hosts => ["192.168.56.12:9200"] # write to a different ES server
      index => "system-log-5616-%{+YYYY.MM.dd}"
}}
}

3.4 Restart logstash and verify

systemctl restart logstash
tail -f /var/log/logstash/logstash-plain.log # check the log
chmod 644 /var/log/messages # fix permissions
systemctl restart logstash # restart logstash once more

 

3.5 Access Tomcat and generate logs

echo "2017-11-07" >> /var/log/messages

 

Tomcat is usually restarted with kill -9. Don't ask why; I'm not very familiar with Tomcat.

 

From here on I'll copy and paste rather than re-test. The following screenshots are from ELK 5.4.

Add logstash-tomcat-5616-access- in Kibana:

 

 

4.3.2.6: Add system-log-5616- in Kibana:

 

 

4.3.2.7: Verify the data:

 

 

4.3.2.8: Use ab from another server to generate bulk requests and verify the data:

[root@linux-host3 ~]# yum install httpd-tools -y

[root@linux-host3 ~]# ab -n1000 -c100 http://192.168.56.16:8080/webdir/

  

 

 

 

4.3.3: Collecting Java logs:

Use the multiline codec plugin to merge multiple lines into a single event; its "what" option specifies whether a matching line is merged with the previous line or the following one. See https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html
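A rough Python sketch of what this codec does with pattern => "^\[", negate => true, what => "previous": lines that do not start with "[" (such as Java stack-trace lines) are appended to the previous event. The function is illustrative only, not the plugin's implementation, and covers only the "previous" case.

```python
import re

def merge_multiline(lines, pattern=r"^\["):
    # With negate => true, a line MATCHING the pattern starts a new event;
    # non-matching lines are folded into the previous event ("previous").
    events, buf = [], []
    for line in lines:
        if re.match(pattern, line) and buf:
            events.append("\n".join(buf))
            buf = []
        buf.append(line)
    if buf:
        events.append("\n".join(buf))
    return events

log = [
    "[2017-11-07T12:00:00] INFO starting",
    "[2017-11-07T12:00:01] ERROR boom",
    "java.lang.RuntimeException: boom",          # stack trace, no leading "["
    "    at com.example.Main.run(Main.java:42)",
    "[2017-11-07T12:00:02] INFO recovered",
]
for ev in merge_multiline(log):
    print(repr(ev))
```

Running this yields three events: the two single-line INFO entries, and one merged event containing the ERROR line plus its two stack-trace lines.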

4.3.3.1: Deploy logstash on the elasticsearch server:

[root@linux-host1 ~]# chown  logstash.logstash /usr/share/logstash/data/queue -R

[root@linux-host1 ~]# ll -d /usr/share/logstash/data/queue

drwxr-xr-x 2 logstash logstash 6 Apr 19 20:03 /usr/share/logstash/data/queue

[root@linux-host1 ~]# cat /etc/logstash/conf.d/java.conf

input {
  stdin {
    codec => multiline {
      pattern => "^\["    # a line starting with "[" begins a new event
      negate => true      # act on lines that do NOT match the pattern
      what => "previous"  # merge non-matching lines into the previous event; "next" merges into the following one
    }
  }
}

filter { # filters placed here apply to all events; to filter only one input, use a conditional on its type
}

output {
  stdout {
    codec => rubydebug
  }
}

  

4.3.3.2: Test that it starts correctly:

[root@linux-host1 ~]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf

  

 

 

4.3.3.3: Test standard input and output:

 

 

4.3.3.4: Configure reading the log file and writing to a file:

[root@linux-host1 ~]# vim /etc/logstash/conf.d/java.conf

input {
  file {
    path => "/elk/logs/ELK-Cluster.log"
    type => "javalog"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}

output {
  if [type] == "javalog" {
    stdout {
      codec => rubydebug
    }
    file {
      path => "/tmp/m.txt"
    }
  }
}

  

4.3.3.5: Validate the syntax:

[root@linux-host1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf  -t

 

4.3.3.7: Change the output to elasticsearch:

The updated file:

[root@linux-host1 ~]# cat /etc/logstash/conf.d/java.conf

input {
  file {
    path => "/elk/logs/ELK-Cluster.log"
    type => "javalog"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}

output {
  if [type] == "javalog" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "javalog-5611-%{+YYYY.MM.dd}"
    }
  }
}

[root@linux-host1 ~]# systemctl  restart logstash

Then restart the elasticsearch service so that fresh log entries are generated, to verify that logstash automatically picks up newly written logs.

[root@linux-host1 ~]# systemctl  restart elasticsearch

  

4.3.3.8: Add the javalog-5611 index in the Kibana UI:

 

 

4.3.3.9: Generate data:

[root@linux-host1 ~]# cat  /elk/logs/ELK-Cluster.log  >> /tmp/1

[root@linux-host1 ~]# cat /tmp/1  >> /elk/logs/ELK-Cluster.log

  

4.3.3.10: View the data in Kibana:

 

 

4.3.3.11: About sincedb:

[root@linux-host1~]# cat /var/lib/logstash/plugins/inputs/file/.sincedb_1ced15cfacdbb0380466be84d620085a

134219868 0 2064 29465 # records the inode info of the collected file

[root@linux-host1 ~]# ll -li /elk/logs/ELK-Cluster.log

134219868 -rw-r--r-- 1 elasticsearch elasticsearch 29465 Apr 21 14:33 /elk/logs/ELK-Cluster.log
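A small Python sketch of the comparison shown above: the first column of a sincedb record is the watched file's inode, which matches what stat() reports for that file. The record layout used here (inode, major device, minor device, byte offset) is a simplification of the real sincedb format:

```python
import os
import tempfile

# Create a stand-in for the watched log file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"[2017-11-07] test line\n")
    path = f.name

st = os.stat(path)
# Simplified sincedb-style record: inode major_dev minor_dev byte_offset
record = f"{st.st_ino} {os.major(st.st_dev)} {os.minor(st.st_dev)} {st.st_size}"
inode_in_record = int(record.split()[0])
print(inode_in_record == st.st_ino)  # True
os.unlink(path)
```

This inode tracking is how logstash remembers its read position across restarts, and why truncating or recreating a log file can confuse it.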

  

 

