ELK (Elasticsearch + Logstash + Kibana) Log Analysis System Quick Start -- Revised Edition


Table of Contents

ELK stack
ELK environment preparation
Server environment
Modify the hosts file on both servers
Download and install the public signing key
Install JDK (version 1.8 or higher)
Installing Elasticsearch
Add the yum repository
Install Elasticsearch
Installing Logstash
Add the yum repository
Install Logstash
Installing Kibana
Add the yum repository
Install Kibana
Managing and configuring Elasticsearch
Modify the Elasticsearch configuration file
Create the data directory and change ownership
Start Elasticsearch
Verify that Elasticsearch started successfully
Interacting with Elasticsearch
Two ways to interact
Install the head plugin to view indices and shards
Elasticsearch cluster
Configure the second Elasticsearch node
Start Elasticsearch
Join node1 and node2 into a cluster
Checking the Elasticsearch logs
Check cluster node status
Monitoring Elasticsearch with the kopf plugin
Install the kopf plugin
Monitoring ES with bigdesk
Getting started with Logstash
Logstash configuration files
The input plugin
file input
The output plugin
The filter plugin
Exercise 1: collecting /var/log/messages
Collecting Java logs
The codec plugin
Collecting nginx access logs with Logstash
Errors
Collecting system syslog logs
syslog stdout test config
Start Logstash and watch stdout
Modify the rsyslog configuration
Restart rsyslog
Check the stdout output again
Write the tested config into the main configuration file
Write test data and check whether it lands in Elasticsearch
Check the results in Elasticsearch
Monitoring TCP logs with Logstash
filter plugins
Using grok
Collecting MySQL slow query logs
Logstash architecture design
Introducing Redis into the architecture
Install Redis
Logstash standard output
Read the data back out of Redis
Start Logstash
Verify in Elasticsearch that the data was stored
Write the whole pipeline into Redis, then from Redis into Elasticsearch
Introduction to Kibana
Download Kibana 4
Start Kibana
Add the nginx-log index to Kibana
Add the system-syslog index to Kibana
Searching in Kibana
Visualization
markdown
Taking ELK to production

 

 

 

ELK stack

 

Elasticsearch is built on top of Lucene.

 

(Elasticsearch concepts: to be added.)

 

 

ELK environment preparation

 

This setup uses two machines:

192.168.29.139 elk-node2

192.168.29.140 elk-node1

 

Server environment

 

[root@elk-node1 ~]# cat /etc/redhat-release

CentOS release 6.4 (Final)

[root@elk-node1 ~]# uname -r

2.6.32-358.el6.x86_64

[root@elk-node1 ~]# uname -m

x86_64

[root@elk-node1 ~]# uname -a

Linux elk-node1 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

The firewall and SELinux are both disabled:

[root@elk-node1 ~]# getenforce

Disabled

[root@elk-node1 ~]# /etc/init.d/iptables status

iptables: Firewall is not running.

 

Modify the hosts file on both servers

 

[root@elk-node2 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.29.140 elk-node1

192.168.29.139 elk-node2

If you skip this step, Elasticsearch may fail to start later.

 

Download and install the public signing key

 

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

 

Install JDK (version 1.8 or higher)

 

tar xf jdk-8u101-linux-x64.tar.gz

mv jdk1.8.0_101/ /usr/local/

cd /usr/local/

ln -sv jdk1.8.0_101/ jdk

cat >> /etc/profile.d/java.sh <<'EOF'
JAVA_HOME=/usr/local/jdk
JAVA_BIN=/usr/local/jdk/bin
JRE_HOME=/usr/local/jdk/jre
PATH=/usr/local/jdk/bin:/usr/local/jdk/jre/bin:$PATH
CLASSPATH=/usr/local/jdk/jre/lib:/usr/local/jdk/lib:/usr/local/jdk/jre/lib/charsets.jar
EOF

source /etc/profile.d/java.sh

java -version

 

Installing Elasticsearch

 

 

Add the yum repository

 

https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html

See the installation docs above for reference.

vi /etc/yum.repos.d/elasticsearch.repo

Or append directly with a cat redirect:

cat >>/etc/yum.repos.d/elasticsearch.repo <<EOF
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF

 

Install Elasticsearch

 

yum install -y elasticsearch

 

Installing Logstash

 

 

Add the yum repository

 

cat >>/etc/yum.repos.d/logstash.repo <<EOF
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF

 

Install Logstash

 

yum install -y logstash

 

Installing Kibana

 

 

Add the yum repository

 

cat >>/etc/yum.repos.d/kibana.repo <<EOF
[kibana-4.5]
name=Kibana repository for 4.5.x packages
baseurl=http://packages.elastic.co/kibana/4.5/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF

 

Install Kibana

 

yum install -y kibana

 

You can also download kibana-4.5.4-1.x86_64.rpm yourself, or build from source.

Elasticsearch, Kibana, and Logstash are now all installed.

Next we edit each configuration file and start the ELK stack.

 

Managing and configuring Elasticsearch

 

 

Modify the Elasticsearch configuration file

 

[root@elk-node2 ~]# grep -n '^[a-z]' /etc/elasticsearch/elasticsearch.yml
17:cluster.name: dongbo_elk
23:node.name: elk-node2
33:path.data: /data/elk
37:path.logs: /var/log/elasticsearch/
43:bootstrap.mlockall: true
54:network.host: 0.0.0.0
58:http.port: 9200

 

Create the data directory and change ownership

 

[root@elk-node2 ~]# id elasticsearch

uid=498(elasticsearch) gid=499(elasticsearch) groups=499(elasticsearch)

[root@elk-node2 ~]# mkdir -p /data/elk

[root@elk-node2 ~]# chown -R elasticsearch.elasticsearch /data/elk/

 

Start Elasticsearch

 

[root@elk-node2 ~]# /etc/init.d/elasticsearch start

If it complains that it cannot find Java, set JAVA_HOME in the init script:

[root@baidu_elk_30 tools]# /etc/init.d/elasticsearch start

which: no java in (/sbin:/usr/sbin:/bin:/usr/bin)

Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME

[root@baidu_elk_30 tools]# vi /etc/init.d/elasticsearch

[root@baidu_elk_30 tools]# head -3 /etc/init.d/elasticsearch

#!/bin/sh

#

JAVA_HOME=/usr/local/jdk

[root@baidu_elk_30 tools]# /etc/init.d/elasticsearch start

Starting elasticsearch: [ OK ]

 

Verify that Elasticsearch started successfully

 

[root@elk-node2 local]# netstat -ntulp|grep java

tcp 0 0 :::9200 :::* LISTEN 2058/java

tcp 0 0 :::9300 :::* LISTEN 2058/java

If it fails to start, the VM may have too little memory; Elasticsearch needs at least 256 MB:

[root@elk-node2 ~]# /etc/init.d/elasticsearch start
Starting elasticsearch: Can't start up: not enough memory [FAILED]

Increase the VM memory to 1 GB. Also check the Java version; a version that is too old will also cause startup to fail.

[root@elk-node2 ~]# /etc/init.d/elasticsearch start
Starting elasticsearch: [ OK ]

Then open http://192.168.29.139:9200/ in a browser.
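A quick check without a browser (a sketch; the name, cluster, and version values will vary with your install):

curl http://192.168.29.139:9200/
{
  "name" : "elk-node2",
  "cluster_name" : "dongbo_elk",
  "version" : {
    "number" : "2.3.5",
    ...
  },
  "tagline" : "You Know, for Search"
}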

 

Interacting with Elasticsearch

 

 

Two ways to interact

 

Java API:
    node client
    Transport client

RESTful API (with clients for):
    JavaScript
    .NET
    PHP
    Perl
    Python
    Ruby

 

[root@elk-node2 ~]# curl -i -XGET 'http://192.168.29.139:9200/_count?pretty' -d '{
    "query": {
        "match_all": {}
    }
}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95

{
    "count" : 0,
    "_shards" : {
        "total" : 0,
        "successful" : 0,
        "failed" : 0
    }
}

Inserting data this way is tedious; instead you can install a plugin and manage ES from a web UI:

https://www.elastic.co/guide/en/marvel/current/introduction.html

Marvel, above, requires Kibana.

To remove marvel:

[root@elk-node2 plugins]# /usr/share/elasticsearch/bin/plugin remove marvel-agent

 

Install the head plugin to view indices and shards

 

[root@elk-node2 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

After installation, open the web UI:

http://192.168.29.139:9200/_plugin/head/
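The head UI's composite-query tab can index a test document; a roughly equivalent curl (a sketch; the index-demo index and test type are the ones that appear in the result below):

curl -XPOST 'http://192.168.29.139:9200/index-demo/test?pretty' -d '{"user":"dongbos","mesg":"hello"}'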

 

{
    "user": "dongbos",
    "mesg": "hello"
}

Result:

{
    "_index": "index-demo",
    "_type": "test",
    "_id": "AVaX49D0Yf2GPoBFU2cz",
    "_version": 1,
    "_shards": {
        "total": 2,
        "successful": 1,
        "failed": 0
    },
    "created": true
}

Then fetch the data you just inserted: click basic query in head, and you can see two documents were inserted; click search and they show up.

 

Elasticsearch cluster

 

 

Configure the second Elasticsearch node

 

Installation is the same; change the Elasticsearch configuration file as follows:

[root@elk-node1 ~]# grep '^[a-z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: dongbo_elk      # must be identical on every node that is to join the cluster
node.name: elk-node1
path.data: /data/elk
path.logs: /var/log/elasticsearch/
bootstrap.mlockall: true
network.host: 0.0.0.0
http.port: 9200

 

Start Elasticsearch

 

 

Join node1 and node2 into a cluster

 

In the head plugin's connection box, enter http://192.168.29.140:9200/ and click Connect to add the other node.

 

Checking the Elasticsearch logs

 

After a while the cluster state was still not recognized, so check the logs:

[root@elk-node1 ~]# vi /var/log/elasticsearch/dongbo_elk
dongbo_elk_deprecation.log              dongbo_elk.log
dongbo_elk_index_indexing_slowlog.log   dongbo_elk.log.2016-08-16
dongbo_elk_index_search_slowlog.log

[root@elk-node1 ~]# vi /var/log/elasticsearch/dongbo_elk.log
[2016-08-17 10:48:31,356][INFO ][node ] [elk-node1] stopping ...
[2016-08-17 10:48:31,382][INFO ][node ] [elk-node1] stopped
[2016-08-17 10:48:31,382][INFO ][node ] [elk-node1] closing ...
[2016-08-17 10:48:31,404][INFO ][node ] [elk-node1] closed
[2016-08-17 10:48:32,592][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-08-17 10:48:32,597][WARN ][bootstrap ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2016-08-17 10:48:32,597][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-08-17 10:48:32,600][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-08-17 10:48:32,600][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Fix (apply on both servers):

vim /etc/security/limits.conf

Append at the end:

# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Check the result:

[root@elk-node2 src]# tail -3 /etc/security/limits.conf
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Also note the open files limit:

[root@elk-node2 ~]# ulimit -a | grep open
open files (-n) 1024

Switch Elasticsearch discovery from multicast (the default) to unicast:

vim /etc/elasticsearch/elasticsearch.yml
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.29.140", "192.168.29.139"]

Sometimes the network does not support multicast, or multicast simply fails to find the other nodes; in that case switch to unicast. Unicast is the recommended setting anyway, since multicast node discovery is slow.

Restart Elasticsearch:

[root@elk-node2 src]# /etc/init.d/elasticsearch restart

 

Check cluster node status

 

In the screenshot above, the node drawn with a thick border and a star is the master node; thin borders mark the replica nodes.

 

Monitoring Elasticsearch with the kopf plugin

 

https://github.com/lmenezes/elasticsearch-kopf

 

Install the kopf plugin

 

[root@elk-node2 head]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

Then open:

http://192.168.29.139:9200/_plugin/kopf

 

You can also monitor ES with bigdesk

 

https://github.com/hlstudio/bigdesk

 

bigdesk currently does not work with 2.3.5;
on CentOS 6.4 the default multicast discovery fails to find nodes, so switch to unicast;
bigdesk does not support 2.1.

 

When the first node starts, it discovers other nodes via multicast; any node that sees the same cluster name joins the cluster automatically. You can connect to any node, not just the master; the node you connect to merely aggregates and displays cluster information.

The number of shards can be chosen when an index is created, but once set it cannot be changed. If both a primary shard and its replicas are lost, the data is gone and cannot be recovered; useless indices can simply be deleted. Old or rarely used indices should be deleted periodically, otherwise ES runs short on resources: disk fills up and searches slow down. If you do not want to delete an index yet, you can close it from a plugin, and it will no longer consume memory.
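Deleting or closing an index can also be done over the REST API (a sketch; the index name is illustrative):

curl -XDELETE 'http://192.168.29.139:9200/system-2016.08.17'
curl -XPOST 'http://192.168.29.139:9200/system-2016.08.17/_close'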

 

Getting started with Logstash

 

Logstash was installed with yum, so it lives under /opt/logstash.

Locate the install directory with find:

[root@elk-node1 tools]# find / -type d -name "logstash"
/opt/logstash

 

Start a Logstash instance. -e runs the config given on the command line; input/stdin and output/stdout are the standard-input and standard-output plugins.

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'

Wait a moment until these two lines appear:

Settings: Default filter workers: 1
Logstash startup completed
hello world                                       # type "hello world"
2016-08-17T09:38:27.155Z elk-node1 hello world    # the stdout result

 

Use rubydebug for detailed output; a codec is an encoder/decoder:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'
Settings: Default filter workers: 1
Logstash startup completed
hello world
{
    "message" => "hello world",
    "@version" => "1",
    "@timestamp" => "2016-08-17T09:59:06.768Z",
    "host" => "elk-node1"
}

 

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.29.140:9200"] } }'
Settings: Default filter workers: 1
Logstash startup completed
wangluozhongxin
hahah                # typed input
dongbo
chenxiaoyan

Check whether this data made it into Elasticsearch.

You can write a copy to Elasticsearch and, at the same time, print a copy locally, i.e. keep a local text file, so there is no need to back Elasticsearch up to a remote site on a schedule. Keeping a plain-text copy has three big advantages: 1) text is the simplest format; 2) text can be post-processed; 3) text compresses best.

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.29.140:9200"] } stdout { codec => rubydebug } }'
Settings: Default filter workers: 1
Logstash startup completed
shanghai
{
    "message" => "shanghai",
    "@version" => "1",
    "@timestamp" => "2016-08-17T10:38:06.332Z",
    "host" => "elk-node1"
}

 

The forms above are fine for testing ad-hoc data, but nothing is persisted; in production, put the same thing in a configuration file:

[root@elk-node1 ~]# vi /etc/logstash/conf.d/oneday-logstash.conf
input { stdin { } }
output {
    elasticsearch { hosts => ["192.168.29.140:9200"] }
    stdout { codec => rubydebug }
}

Start it:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/oneday-logstash.conf

 

 

Logstash configuration files

 

 

The input plugin

 

 

file input

 

https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html

Required format:

file {
    path => ...
}

 

Optional parameters (a few worth noting):

start_position
By default collection starts at the end of the file; to also pick up log lines that already exist, set this parameter. Valid values: ["beginning", "end"].
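A minimal file input combining these options (the same path and type used in the exercise below):

file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
}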

 

 

The output plugin
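A minimal elasticsearch output, of the kind used in every example below (host and index pattern as in those examples):

output {
    elasticsearch {
        hosts => ["192.168.29.139:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}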

 

 

 

The filter plugin
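Filters sit between input and output; as a minimal example, the grok filter from the grok section later in this article:

filter {
    grok {
        match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}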

 

 

 

Exercise 1: collecting /var/log/messages

 

Prerequisite: syslog is running and writing logs:

/etc/init.d/rsyslog start
chkconfig rsyslog on

[root@elk-node1 ~]# vi /etc/logstash/conf.d/file.conf
[root@elk-node1 ~]# cat /etc/logstash/conf.d/file.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}

output {
    elasticsearch {
        hosts => ["192.168.29.139:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf

You can see the system-2016.08.17 index has been created.

Browse the log data.

For high-volume logs, generate one index per day; for logs with little daily volume, one index per month is enough.

A line in a file is one event to Logstash, and several lines can be merged into one event (covered below).
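For a monthly index, only the date pattern in the index option changes (the same sprintf date syntax as above):

elasticsearch {
    hosts => ["192.168.29.139:9200"]
    index => "system-%{+YYYY.MM}"
}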

 

 

Collecting Java logs

 

There is no Tomcat or similar environment here, but Elasticsearch itself runs on Java, so we can collect Elasticsearch's own log: /var/log/elasticsearch/dongbo_elk.log.

With several files feeding the same Logstash instance, everything would land in one index. Give each file its own type, then use if-conditions on type in the output section to route each log into its own index:

[root@elk-node1 ~]# vi /etc/logstash/conf.d/file.conf
[root@elk-node1 ~]# cat /etc/logstash/conf.d/file.conf
input {
    file {
        path => "/var/log/messages*"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/elasticsearch/dongbo_elk.log"
        type => "es-error"
        start_position => "beginning"
    }
}

output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}

Start Logstash:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf

Check the newly created indices, then look at the documents inside them.

 

 

One problem: a Java error usually spans many consecutive lines, and reading them together is what lets developers locate the fault quickly. Elasticsearch, however, is currently ingesting line by line, splitting each stack trace apart. We need to put one error into one event so the whole problem is visible at once.

 

The codec plugin

 

Format:

input {
    stdin {
        codec => multiline {
            pattern => "pattern, a regexp"
            negate => "true" or "false"
            what => "previous" or "next"
        }
    }
}

 

Test multiline in the codec:

[root@elk-node1 log]# cat /etc/logstash/conf.d/codec.conf
input {
    stdin {
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}

Start Logstash and watch the result:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/codec.conf
[1]            # lines like this one are typed by hand
[2]
{
    "@timestamp" => "2016-08-17T14:32:59.278Z",
    "message" => "[1]",
    "@version" => "1",
    "host" => "elk-node1"
}
[3dasjdljsf
{
    "@timestamp" => "2016-08-17T14:33:06.678Z",
    "message" => "[2]",
    "@version" => "1",
    "host" => "elk-node1"
}
sdlfjaldjfa
sdlkfjasdf
sdjlajfl
sdjlfkajdf
sdlfjal
[4]
{
    "@timestamp" => "2016-08-17T14:33:15.356Z",
    "message" => "[3dasjdljsf\nsdlfjaldjfa\nsdlkfjasdf\nsdjlajfl\nsdjlfkajdf\nsdlfjal",
    "@version" => "1",
    "tags" => [
        [0] "multiline"
    ],
    "host" => "elk-node1"
}

A line beginning with [ flushes everything buffered since the previous [ as a single event, which is why the [3... event only appears once [4] is typed.

Update file.conf:

[root@elk-node1 log]# cat /etc/logstash/conf.d/file.conf
input {
    file {
        path => "/var/log/messages*"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/elasticsearch/dongbo_elk.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}

Viewing logs directly in Elasticsearch is not very convenient; later we will browse them through Kibana instead.

 

Collecting nginx access logs with Logstash

 

Install nginx and change the nginx configuration:

log_format json '{"@timestamp":"$time_iso8601",'
    '"@version":"1",'
    '"url":"$uri",'
    '"status":"$status",'
    '"domain":"$host",'
    '"size":"$body_bytes_sent",'
    '"responsetime":"$request_time",'
    '"ua":"$http_user_agent"'
    '}';

server {
    access_log logs/access_json.log json;

First write a test config to make sure the log lines can be printed:

[root@elk-node1 conf.d]# cat json.conf
input {
    file {
        path => "/application/nginx/logs/access_json.log"
        codec => "json"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}

Start the test and refresh the web page: the access log entries print correctly.

Now merge this into the main config file:

[root@elk-node1 conf.d]# cat file.conf
input {
    file {
        path => "/var/log/messages*"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/application/nginx/logs/access_json.log"
        codec => json
        type => "nginx-log"
        start_position => "beginning"
    }
    file {
        path => "/var/log/elasticsearch/dongbo_elk.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
}

 

Errors

 

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf
Unknown setting 'pattern' for file {:level=>:error}
Unknown setting 'negate' for file {:level=>:error}
Unknown setting 'what' for file {:level=>:error}
Error: Something is wrong with your configuration.
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.

For errors like this, check the config file carefully: a missing or misplaced brace is always the cause. Here the multiline options had been placed directly on the file input instead of inside the codec => multiline block.
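The flag mentioned in the error message validates the file without starting a pipeline:

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf --configtest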

 

Collecting system syslog logs

 

When writing a collection config, it is usually best to test against stdout first, then merge it into the real config file.

 

syslog stdout test config

 

[root@elk-node1 ~]# cd /etc/logstash/conf.d/
[root@elk-node1 conf.d]# vi syslog.conf
[root@elk-node1 conf.d]# cat syslog.conf
input {
    syslog {
        type => "system-syslog"
        host => "192.168.29.140"
        port => "514"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}

 

Start Logstash and watch stdout

 

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf

Check the listening port:
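For example, with the same netstat used earlier (the syslog input should be listening on port 514):

[root@elk-node1 ~]# netstat -ntulp | grep 514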

 

Modify the rsyslog configuration

 

[root@elk-node1 conf.d]# vi /etc/rsyslog.conf
*.* @@192.168.29.140:514      # @@ forwards over TCP; a single @ would use UDP

 

Restart rsyslog
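On CentOS 6 this is the same sysvinit service used earlier:

[root@elk-node1 conf.d]# /etc/init.d/rsyslog restart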

 

 

Now check the stdout output again

 

 

Write the tested config into the main configuration file

 

[root@elk-node1 conf.d]# vi file.conf
[root@elk-node1 conf.d]# cat file.conf
input {
    syslog {
        type => "system-syslog"
        host => "192.168.29.140"
        port => "514"
    }
    file {
        path => "/var/log/messages*"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/application/nginx/logs/access_json.log"
        codec => json
        type => "nginx-log"
        start_position => "beginning"
    }
    file {
        path => "/var/log/elasticsearch/dongbo_elk.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}

output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => ["192.168.29.139:9200"]
            index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
}

 

Write test data and check whether it lands in Elasticsearch

 

[root@elk-node1 conf.d]# logger "hehe1"
[root@elk-node1 conf.d]# logger "hehe2"
[root@elk-node1 conf.d]# logger "hehe3"
[root@elk-node1 conf.d]# logger "hehe4"
[root@elk-node1 conf.d]# logger "hehe5"
[root@elk-node1 conf.d]# logger "hehe6"
[root@elk-node1 conf.d]# logger "hehe7"
[root@elk-node1 conf.d]# logger "hehe8"

 

Check the results in Elasticsearch

 

 

 

Monitoring TCP logs with Logstash

 

[root@elk-node1 conf.d]# vi tcp.conf
[root@elk-node1 conf.d]# cat tcp.conf
input {
    tcp {
        host => "192.168.29.140"
        port => "6666"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}
[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf

 

Use nc to send a file to 192.168.29.140 port 6666:

[root@elk-node1 ~]# yum install nc -y
[root@elk-node1 ~]# nc 192.168.29.140 6666 < /etc/hosts
[root@elk-node1 ~]# echo "haha" | nc 192.168.29.140 6666

Or write straight to bash's TCP pseudo-device:

[root@elk-node1 ~]# echo "dongbo" > /dev/tcp/192.168.29.140/6666

 

filter plugins

 

https://www.elastic.co/guide/en/logstash/current/filter-plugins.html

Click grok:

https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

Logstash ships with about 120 patterns by default. You can find them here: https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns. You can add your own trivially. (See the patterns_dir setting)

 

Using grok

 

[root@elk-node1 /]# cd /etc/logstash/conf.d/
[root@elk-node1 conf.d]# vi grok.conf
[root@elk-node1 conf.d]# cat grok.conf
input {
    stdin {}
}
filter {
    grok {
        match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/grok.conf
Settings: Default filter workers: 1
Logstash startup completed
55.3.244.1 GET /index.html 15824 0.043        # type this line and watch the output
{
    "message" => "55.3.244.1 GET /index.html 15824 0.043",
    "@version" => "1",
    "@timestamp" => "2016-08-20T18:22:01.319Z",
    "host" => "elk-node1",
    "client" => "55.3.244.1",
    "method" => "GET",
    "request" => "/index.html",
    "bytes" => "15824",
    "duration" => "0.043"
}

These recognition patterns are defined when Logstash is installed; they live in:
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns/

[root@elk-node1 patterns]# ls -l
total 96
-rw-r--r-- 1 logstash logstash 1197 Feb 17 2016 aws
-rw-r--r-- 1 logstash logstash 4831 Feb 17 2016 bacula
-rw-r--r-- 1 logstash logstash 2154 Feb 17 2016 bro
-rw-r--r-- 1 logstash logstash 879 Feb 17 2016 exim
-rw-r--r-- 1 logstash logstash 9544 Feb 17 2016 firewalls
-rw-r--r-- 1 logstash logstash 6007 Feb 17 2016 grok-patterns
-rw-r--r-- 1 logstash logstash 3251 Feb 17 2016 haproxy
-rw-r--r-- 1 logstash logstash 1339 Feb 17 2016 java
-rw-r--r-- 1 logstash logstash 1087 Feb 17 2016 junos
-rw-r--r-- 1 logstash logstash 1037 Feb 17 2016 linux-syslog
-rw-r--r-- 1 logstash logstash 49 Feb 17 2016 mcollective
-rw-r--r-- 1 logstash logstash 190 Feb 17 2016 mcollective-patterns
-rw-r--r-- 1 logstash logstash 614 Feb 17 2016 mongodb
-rw-r--r-- 1 logstash logstash 9597 Feb 17 2016 nagios
-rw-r--r-- 1 logstash logstash 142 Feb 17 2016 postgresql
-rw-r--r-- 1 logstash logstash 845 Feb 17 2016 rails
-rw-r--r-- 1 logstash logstash 104 Feb 17 2016 redis
-rw-r--r-- 1 logstash logstash 188 Feb 17 2016 ruby

 

Collecting MySQL slow query logs

 

 

 

 

 

 

grok is very memory-hungry, so it is better to pre-process the logs with a script or another tool before collecting them; see the appendix at the end of this article for a full slow-log example.

 

 

 

 

 

 

 

 

 

Logstash architecture design

 

 

 

 

Introducing Redis into the architecture

 

 

Install Redis

 

[root@elk-node1 ~]# yum install redis -y

Or install from source:

yum -y install gcc gcc-c++ libstdc++-devel
cd /home/dongbo/tools/
tar xf redis-3.2.3.tar.gz
cd redis-3.2.3
make MALLOC=jemalloc
make PREFIX=/application/redis-3.2.3 install
ln -sv /application/redis-3.2.3/ /application/redis
echo 'export PATH=/application/redis/bin/:$PATH' >>/etc/profile
. /etc/profile
mkdir /application/redis/conf
cp redis.conf /application/redis/conf/
vi /application/redis/conf/redis.conf

[root@elk-node1 ~]# grep '^[a-z]' /application/redis/conf/redis.conf
bind 192.168.29.140
protected-mode yes

grep -Ev "^$|#|;" /application/redis/conf/redis.conf
echo "vm.overcommit_memory=1" >>/etc/sysctl.conf
echo 511 > /proc/sys/net/core/somaxconn
sysctl -p
/application/redis/bin/redis-server /application/redis/conf/redis.conf
ps aux | grep redis

Because Redis is bound to 192.168.29.140, redis-cli must be given that address:

[root@elk-node1 conf.d]# redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected> exit
[root@elk-node1 conf.d]# redis-cli -h 192.168.29.140
192.168.29.140:6379> exit

 

[root@elk-node1 ~]# cd /etc/logstash/conf.d/
[root@elk-node1 conf.d]# cat redis.conf
input {
    stdin {}
}
output {
    redis {
        host => "192.168.29.140"
        port => "6379"
        db => "10"
        data_type => "list"
        key => "demo"
    }
}

 

 

Logstash standard output

 

192.168.29.140:6379> info
# Keyspace
db10:keys=1,expires=0,avg_ttl=0
192.168.29.140:6379> select 10
OK
192.168.29.140:6379[10]> keys *
1) "demo"

View the last element of the list:

192.168.29.140:6379[10]> LINDEX demo -1
"{\"message\":\"heke\",\"@version\":\"1\",\"@timestamp\":\"2016-08-21T05:25:35.752Z\",\"host\":\"elk-node1\"}"

Type a few more events, then check how many are queued:

192.168.29.140:6379[10]> LLEN demo
(integer) 77

 

Read the data back out of Redis

 

[root@elk-node1 conf.d]# cat /etc/logstash/conf.d/redis_in.conf
input {
    redis {
        host => "192.168.29.140"
        port => "6379"
        db => "10"
        data_type => "list"
        key => "demo"
    }
}

output {
    elasticsearch {
        hosts => ["192.168.29.139:9200"]
        index => "redis_demo-%{+YYYY.MM.dd}"
    }
}

 

Start Logstash

 

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/redis_in.conf
Settings: Default filter workers: 1
Logstash startup completed

Check Redis: the queued data is drained immediately:

[root@elk-node1 ~]# redis-cli -h 192.168.29.140
192.168.29.140:6379> LLEN demo
(integer) 0

 

Verify in Elasticsearch that the data was stored

 

 

 

Write the whole pipeline into Redis, then from Redis into Elasticsearch
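A minimal sketch of the two configs this step implies, assembled from pieces already shown (the file inputs from file.conf, the redis output and input from redis.conf and redis_in.conf; the key name is illustrative):

# shipper: local logs -> Redis
input {
    file {
        path => "/var/log/messages*"
        type => "system"
        start_position => "beginning"
    }
}
output {
    redis {
        host => "192.168.29.140"
        port => "6379"
        db => "10"
        data_type => "list"
        key => "system"
    }
}

# indexer: Redis -> Elasticsearch
input {
    redis {
        host => "192.168.29.140"
        port => "6379"
        db => "10"
        data_type => "list"
        key => "system"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.29.139:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}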

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Introduction to Kibana

 

 

Download Kibana 4

 

[root@elk-node1 tools]# wget https://download.elastic.co/kibana/kibana/kibana-4.5.4-linux-x64.tar.gz
[root@elk-node1 tools]# tar xf kibana-4.5.4-linux-x64.tar.gz
[root@elk-node1 tools]# mv kibana-4.5.4-linux-x64 /usr/local/
[root@elk-node1 tools]# ln -sv /usr/local/kibana-4.5.4-linux-x64/ /usr/local/kibana
[root@elk-node1 tools]# cd /usr/local/kibana/config/
[root@elk-node1 config]# vi kibana.yml      # edit the Kibana config
[root@elk-node1 config]# grep -i '^[a-z]' kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.29.139:9200"
kibana.index: ".kibana"

 

Start Kibana

 

[root@elk-node1 kibana]# /usr/local/kibana/bin/kibana
log [09:26:56.988] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [09:26:57.040] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [09:26:57.083] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [09:26:57.099] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [09:26:57.118] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [09:26:57.131] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [09:26:57.142] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready
log [09:26:57.145] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [09:26:57.159] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
log [09:26:57.169] [info][listening] Server running at http://0.0.0.0:5601

Open http://192.168.29.140:5601/ in a browser.

 

 

Click Discover. If no logs show up, it may be the time range: widening it to one week made the logs appear.

 

Add the nginx-log index to Kibana

 

Click Settings in the menu bar and set the default index.

 

 

Add the system-syslog index to Kibana

 

Search for ssh* to see when, and from which IPs, anyone logged in or tried to log in to the server.

 

Searching in Kibana
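Kibana 4 uses the Lucene query-string syntax; a few example queries against fields produced by the configs in this article (the field names are the ones shown earlier):

status:404                        # nginx responses with status 404
type:system-syslog AND hehe1      # the logger test events from earlier
host:elk-node1                    # events shipped from elk-node1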

 

 

 

 

 

 

 

Visualization

 

 

markdown

 

## On-call ops staff

* Dongbo 1525555555
* Boss 1526666666

# Quick contact

http://www.baidu.com

Then click Save.

 

 

 

 

 

Taking ELK to production

 

1. Log classification

System logs     rsyslog        logstash syslog plugin
Access logs     nginx          logstash codec json
Error logs      file           logstash file + multiline
Runtime logs    file           logstash codec json
Device logs     syslog         logstash syslog plugin
Debug logs      file           logstash json or multiline

2. Log standardization

Path       fixed
Format     JSON wherever possible

Roll collection out in this order: system logs -> error logs -> runtime logs -> access logs.

 

 

 

Appendix: MySQL slow query log collection (borrowed from another author)

Collecting MySQL slow query logs with Logstash

Import a slow log from a production MySQL server; the format looks like:

# Time: 160108 15:46:14
# User@Host: dev_select_user[dev_select_user] @ [192.168.97.86] Id: 714519
# Query_time: 1.638396 Lock_time: 0.000163 Rows_sent: 40 Rows_examined: 939155
SET timestamp=1452239174;
SELECT DATE(create_time) as day,HOUR(create_time) as h,round(avg(low_price),2) as low_price
FROM t_actual_ad_num_log WHERE create_time>='2016-01-07' and ad_num<=10
GROUP BY DATE(create_time),HOUR(create_time);

Handle it with multiline, writing slow.conf:

 

[root@linux-node1 ~]# cat mysql-slow.conf
input {
    file {
        path => "/root/slow.log"
        type => "mysql-slow-log"
        start_position => "beginning"
        codec => multiline {
            pattern => "^# User@Host:"
            negate => true
            what => "previous"
        }
    }
}
filter {
    # drop sleep events
    grok {
        match => { "message" => "SELECT SLEEP" }
        add_tag => [ "sleep_drop" ]
        tag_on_failure => [] # prevent default _grokparsefailure tag on real records
    }
    if "sleep_drop" in [tags] {
        drop {}
    }
    grok {
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id: %{NUMBER:row_id:int}\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n#\s*" ]
    }
    date {
        match => [ "timestamp", "UNIX" ]
        remove_field => [ "timestamp" ]
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}

 

 

 

 

 

Some error fixes:

[root@elk-node2 ~]# /etc/init.d/elasticsearch start

which: no java in (/sbin:/usr/sbin:/bin:/usr/bin)

Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME

Fix: Java was not installed;

yum install java -y solved it.

 

Error:

[root@elk-node2 ~]# /etc/init.d/elasticsearch start
Starting elasticsearch: Can't start up: not enough memory [FAILED]

Fix: check the Java version; here it is far below 1.8:

[root@elk-node2 etc]# java -version
java version "1.5.0"
gij (GNU libgcj) version 4.4.7 20120313 (Red Hat 4.4.7-17)

 

tar xf jdk-8u101-linux-x64.tar.gz
mv jdk1.8.0_101/ /usr/local/
cd /usr/local/
ln -sv jdk1.8.0_101/ jdk
vi /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk
JAVA_BIN=/usr/local/jdk/bin
JRE_HOME=/usr/local/jdk/jre
PATH=$PATH:/usr/local/jdk/bin:/usr/local/jdk/jre/bin
CLASSPATH=/usr/local/jdk/jre/lib:/usr/local/jdk/lib:/usr/local/jdk/jre/lib/charsets.jar

source /etc/profile.d/java.sh
[root@elk-node2 local]# /etc/init.d/elasticsearch start
Starting elasticsearch: [ OK ]

 

 

 

 

It is recommended to download and install logstash-2.1.3-1.noarch.rpm yourself.

 

 

chkconfig --add kibana

Start Kibana:

/etc/init.d/kibana start

 

Check whether Kibana started properly:

[root@elk-node2 src]# ps aux|grep kibana|grep -v grep

kibana 2898 28.2 9.8 1257092 99516 pts/0 Sl 06:21 0:03 /opt/kibana/bin/../node/bin/node /opt/kibana/bin/../src/cli

[root@elk-node2 bin]# netstat -ntulp|grep 5601

tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 2898/node

 

 

Monitoring a Java application's log. Create sample data:

vi /tmp/test.log

Caller+1 at com.alibaba.dubbo.rpc.protocol.dubbo.DubboProtocol$1.reply(DubboProtocol.java:115)

{"startTime":1459095505006,"time":5592,"arguments":[{"businessLicNum":null,"optLock":null,"phone":"18511451798","overdueRate":0.02,"schoolName":null,"mobileIncome":null,"macIos":null,"sourceFrom":null,"password":null,"employedDate":null,"city":"成都","username":null,"vocation":"工程師","QQ":null,"isApplyFinish":1,"idfaIos":null,"longitude":null,"openid":null,"iosAndroidIp":null,"verifyAmount":null,"deviceId":null,"cashQuota":5000.0,"enteroriseName":null,"iostoken":null,"channelId":null,"channelCustId":null,"idcard":"420116198508233317","code":null,"iosAndroidId":null,"companyName":"lx","talkingDeviceId":null,"onlineStoreName":null,"schoolAble":null,"appversionAd":null,"businessCircle":null,"appVersion":null,"email":"lx@lx.com","inviteCode":null,"latitude":null,"rrUrl":null,"xlUrl":null,"sex":"0","sourceMark":"adr","registerDate":"2015-08-12 17:16:32","businessTime":null,"mac":null,"mainBusiness":null,"couponCodeId":null,"electricPlatform":null,"id":1764155,"bankVerify":null,"name":"lx","independentPassword":"e10adc3949ba59abbe56e057f20f883e","adrToken":null,"picCode":null,"examineAmount":5000.0,"payPassword":null,"customerType":1,"adpromoteFrom":null,"wxId":null,"prevDate":null,"isTkOn":null,"baitiao":1,"isOpenidEnable":null,"logout":0,"newDate":null,"monthIncome":null,"address":null,"regeditNum":null,"monthRate":0.0,"majorName":null,"versionIos":null,"admissionTime":null}],"monitorKey":"lx:investorController:listInvestor:4B1EC75C25D55FC0_20160328121824","service":"com.lx.business.investor.service.api.InvestorService","method":"listByCustomer"}

 

Collection pipeline: logstash -> redis -> elasticsearch

 

 

On the application server, write shipper.conf:
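A minimal sketch, assuming the /tmp/test.log sample above and the Redis broker configured earlier (the type and key names are illustrative):

input {
    file {
        path => "/tmp/test.log"
        type => "java-log"
        start_position => "beginning"
    }
}
output {
    redis {
        host => "192.168.29.140"
        port => "6379"
        db => "10"
        data_type => "list"
        key => "java-log"
    }
}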

