Logstash filters:
json filter:
input {
  stdin {
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  stdout {
    codec => json
  }
}
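Save the config to a file and start Logstash with it (the file name json-test.conf below is just an example), then type the test line at the terminal:
[sky@hadoop1 bin]$ ./logstash -f json-test.conf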
Input:
{"name": "CSL", "age": 20}
Output:
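The json filter parses the message string into top-level fields, so the event printed by the json codec should look roughly like the line below (one line; the timestamp is illustrative and the field order will differ):
{"name":"CSL","age":20,"@timestamp":"2017-05-30T09:13:24.000Z","@version":"1","host":"hadoop1","message":"{\"name\": \"CSL\", \"age\": 20}"}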
Grok filter:
Built-in pattern definitions:
https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns
Create a test log:
[sky@hadoop1 bin]$ cat spark-test-log.log
05/30/17 17:13:24 INFO StartingSparkmasteratspark
05/30/17 17:13:24 INFO RunningSparkversion1
05/30/17 17:13:25 INFO jetty
Create the conf file:
input {
  file {
    path => "/usr/local/logstash-5.6.1/bin/spark-test-log.log"
    type => "sparkfile"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => ["message", "%{DATE:date} %{TIME:time} %{LOGLEVEL:loglevel} %{WORD:word}"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Run result:
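Each log line is split into the date, time, loglevel and word fields. A rough sketch of the rubydebug output for the first line (the timestamp and field order are illustrative):
{
          "path" => "/usr/local/logstash-5.6.1/bin/spark-test-log.log",
    "@timestamp" => 2017-05-30T09:13:24.000Z,
      "loglevel" => "INFO",
      "@version" => "1",
          "host" => "hadoop1",
          "date" => "05/30/17",
          "time" => "17:13:24",
       "message" => "05/30/17 17:13:24 INFO StartingSparkmasteratspark",
          "type" => "sparkfile",
          "word" => "StartingSparkmasteratspark"
}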
Custom pattern (user-defined regular expression):
[sky@hadoop1 patterns]$ cat selfpattern
SKYTIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
Modify the conf file:
input {
  file {
    path => "/usr/local/logstash-5.6.1/bin/spark-test-log.log"
    type => "sparkfile"
    start_position => "beginning"
  }
}
filter {
  grok {
    patterns_dir => '/usr/local/logstash-5.6.1/patterns/selfpattern'
    match => ["message", "%{DATE:date} %{SKYTIME:time} %{LOGLEVEL:loglevel} %{WORD:word}"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Output: the same fields as in the previous run, with the time field now captured by the custom SKYTIME pattern.
To try several patterns against the same field, define multiple match patterns separated by commas (see the sketch below).
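A minimal sketch using the documented hash form, where the patterns for one field are listed as a comma-separated array and the first matching pattern wins (the second pattern here is just a placeholder for illustration):
filter {
  grok {
    match => {
      "message" => [
        "%{DATE:date} %{TIME:time} %{LOGLEVEL:loglevel} %{WORD:word}",
        "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:msg}"
      ]
    }
  }
}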
Site for testing grok expressions:
https://grokdebug.herokuapp.com/