1. Logstash filter plugins
This article is adapted from https://www.cnblogs.com/FengGeBlog/p/10305318.html
1.1 grok regex capture
grok is an extremely powerful Logstash filter plugin. It can parse arbitrary text with regular expressions and turn unstructured log data into a structured, easily queried form. It is currently the best way to parse unstructured log data in Logstash.
The grok syntax is:
%{SYNTAX:SEMANTIC}
"SYNTAX" is the name of the pattern to match against, and "SEMANTIC" is the field name given to the matched text. For example, the NUMBER pattern matches numbers, while the IP pattern matches an IP address such as 127.0.0.1.
For example:
Our test data is:
172.16.213.132 [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039
1) An example: extracting the IP address
input {
    stdin {
    }
}
filter {
    grok {
        match => { "message" => "%{IPV4:ip}" }
    }
}
output {
    stdout {
    }
}
Now start it up:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l2.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
172.16.213.132 [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039    # this line was entered manually
{
"message" => "172.16.213.132 [07/Feb/2018:16:24:19 +0800]\"GET /HTTP/1.1\" 403 5039",
"ip" => "172.16.213.132",
"@version" => "1",
"host" => "ip-172-31-22-29.ec2.internal",
"@timestamp" => 2019-01-22T09:48:15.354Z
}
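Under the hood, %{IPV4} expands to an ordinary regular expression. As a rough sketch of what grok does here (using Python's re module, with a simplified IPv4 pattern rather than grok's full definition, which also validates each octet's range):

```python
import re

# Simplified stand-in for grok's IPV4 pattern: four dot-separated octets,
# not adjacent to other digits. The real pattern is stricter.
IPV4 = r"(?<![0-9])(?:[0-9]{1,3}\.){3}[0-9]{1,3}(?![0-9])"

line = '172.16.213.132 [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039'
m = re.search(IPV4, line)
# grok keeps the raw message and adds the capture as a new field named "ip".
event = {"message": line, "ip": m.group(0)}
print(event["ip"])  # → 172.16.213.132
```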
2) An example: extracting the timestamp
The input and output sections are omitted here.
filter {
    grok {
        match => { "message" => "%{IPV4:ip}\ \[%{HTTPDATE:timestamp}\]" }
    }
}
Now run the filter:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l2.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
172.16.213.132 [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039    # this line was entered manually
{
"@version" => "1",
"timestamp" => "07/Feb/2018:16:24:19 +0800",
"@timestamp" => 2019-01-22T10:16:14.205Z,
"message" => "172.16.213.132 [07/Feb/2018:16:24:19 +0800]\"GET /HTTP/1.1\" 403 5039",
"ip" => "172.16.213.132",
"host" => "ip-172-31-22-29.ec2.internal"
}
You can see the filter worked; in the config file, grok is really just matching with regular expressions. Let's try a small experiment: add two "-" characters after the IP in the sample data, like this:
172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039
Then the config file needs to be written like this:
filter {
    grok {
        match => { "message" => "%{IPV4:ip}\ -\ -\ \[%{HTTPDATE:timestamp}\]" }
    }
}
The match line now has to match the two "-" characters as well; otherwise grok cannot match the data and parsing fails.
Start it up and check the result:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l2.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039    # enter this line manually, then press Enter
{
"@timestamp" => 2019-01-22T10:25:46.687Z,
"ip" => "172.16.213.132",
"message" => "172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] \"GET /HTTP/1.1\" 403 5039",
"timestamp" => "07/Feb/2018:16:24:19 +0800",
"@version" => "1",
"host" => "ip-172-31-22-29.ec2.internal"
}
Now we get the fields we wanted. Here I matched both the IP and the time, but you could also match just the time:
filter {
    grok {
        match => { "message" => "\ -\ -\ \[%{HTTPDATE:timestamp}\]" }
    }
}
This should make it even clearer that grok matches data with regular expressions.
Note: in the regex, spaces and square brackets must be escaped.
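The escaping requirement is plain regex behavior, easy to check outside Logstash. A sketch in Python (with a simplified stand-in for HTTPDATE, not grok's exact definition):

```python
import re

# Simplified stand-in for %{HTTPDATE}; the real grok pattern is stricter.
HTTPDATE = r"\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} [+-]\d{4}"

line = '172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039'
# The literal square brackets around the date must be escaped as \[ and \],
# otherwise the regex engine treats them as a character class.
m = re.search(r"\[(" + HTTPDATE + r")\]", line)
print(m.group(1))  # → 07/Feb/2018:16:24:19 +0800
```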
3) Extracting the quoted request information
First, write the matching pattern:
filter {
    grok {
        match => { "message" => "\ %{QS:referrer}\ " }
    }
}
Start it and look at the result:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l2.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039
{
"@timestamp" => 2019-01-22T10:47:37.127Z,
"message" => "172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] \"GET /HTTP/1.1\" 403 5039",
"@version" => "1",
"host" => "ip-172-31-22-29.ec2.internal",
"referrer" => "\"GET /HTTP/1.1\""
}
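As the referrer field above shows, grok's QS (quoted string) pattern keeps the surrounding quotes in the captured value. A rough Python approximation of that behavior (the real QS pattern also handles escaped quotes inside the string; this naive version does not):

```python
import re

# Naive stand-in for %{QS}: a double-quoted run with no embedded quotes.
QS = r'"[^"]*"'

line = '172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039'
referrer = re.search(QS, line).group(0)
print(referrer)  # → "GET /HTTP/1.1"  (quotes included, as with grok's QS)
```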
4) Taking the idea further, let's extract the time field from /var/log/messages.
Sample data:
Jan 20 11:33:03 ip-172-31-22-29 systemd: Removed slice User Slice of root.
Our goal is to output the time, i.e. just the first three columns.
To find a suitable pattern, look in the grok-patterns file under /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns, where we find SYSLOGTIMESTAMP (defined as %{MONTH} +%{MONTHDAY} %{TIME}), which fits the line above exactly.
First, write the config file:
filter {
    grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:time}" }
        remove_field => ["message"]
    }
}
Start it and see what happens:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l4.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Jan 20 11:33:03 ip-172-31-22-29 systemd: Removed slice User Slice of root.    # this line was entered manually
{
"@timestamp" => 2019-01-22T11:54:26.646Z,
"host" => "ip-172-31-22-29.ec2.internal",
"@version" => "1",
"time" => "Jan 20 11:33:03"
}
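SYSLOGTIMESTAMP is built from the MONTH, MONTHDAY and TIME sub-patterns. A simplified Python equivalent (again a sketch, not grok's exact pattern):

```python
import re

# Rough stand-in for %{SYSLOGTIMESTAMP}, i.e. "%{MONTH} +%{MONTHDAY} %{TIME}":
# an abbreviated month name, a 1-2 digit day, and an HH:MM:SS time.
SYSLOGTIMESTAMP = r"[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}"

line = "Jan 20 11:33:03 ip-172-31-22-29 systemd: Removed slice User Slice of root."
time_field = re.search(SYSLOGTIMESTAMP, line).group(0)
print(time_field)  # → Jan 20 11:33:03
```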
The conversion succeeded; a very handy tool.
1.2 The date plugin
One of the examples above extracted the timestamp field, i.e. the time recorded in the log line. But besides the timestamp you asked for, the output also shows an @timestamp field, and the two times differ: @timestamp records the current system time. In an ELK pipeline, the @timestamp field is what Elasticsearch uses to mark when a log was produced, so leaving it at ingest time muddles the log timeline. To fix this we need another plugin, date, which converts the time string in a log record into a LogStash::Timestamp object and stores it in the @timestamp field.
Next, configure it in the config file:
filter {
    grok {
        match => { "message" => "\ -\ -\ \[%{HTTPDATE:timestamp}\]" }
    }
    date {
        match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
}
Note: the letter Z is what converts the time-zone offset. Also look at "dd/MMM/yyyy": yes, that really is three capital M's in the middle; I tried writing only two M's and the conversion failed.
Start it up and see the effect:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l2.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039    # this line was entered manually
{
"host" => "ip-172-31-22-29.ec2.internal",
"timestamp" => "07/Feb/2018:16:24:19 +0800",
"@timestamp" => 2018-02-07T08:24:19.000Z,
"message" => "172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] \"GET /HTTP/1.1\" 403 5039",
"@version" => "1"
}
You can see the @timestamp conversion succeeded: I'm writing this post on January 22, 2019, yet @timestamp now shows the log's own date. One more thing: the time is 8 hours behind, did you notice? Keep reading.
1.3 Using remove_field
remove_field is very commonly used. Its job is to remove redundancy: as the earlier examples showed, whatever we extract appears twice, once inside message and once in the captured field (HTTPDATE, IP, and so on). The whole point of filtering is to keep only the useful information, so let's see how to drop the duplicates.
1) Again using IP extraction as an example:
filter {
    grok {
        match => { "message" => "%{IP:ip_address}" }
        remove_field => ["message"]
    }
}
Start the service and check:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l5.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039    # enter this line manually and press Enter
{
"ip_address" => "172.16.213.132",
"host" => "ip-172-31-22-29.ec2.internal",
"@version" => "1",
"@timestamp" => 2019-01-22T12:16:58.918Z
}
Notice that the message line from before is gone: remove_field removed it. The benefit is obvious: we keep only the specific information we want from the log.
2) In the examples above we extracted the pieces of the message one at a time; now let's extract them all in a single Logstash config.
First configure it:
filter {
    grok {
        match => { "message" => "%{IP:ip_address}\ -\ -\ \[%{HTTPDATE:timestamp}\]\ %{QS:referrer}\ %{NUMBER:status}\ %{NUMBER:bytes}" }
    }
    date {
        match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
}
Start it and see:
[root@172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l5.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039    # this line was entered manually
{
"status" => "403",
"bytes" => "5039",
"message" => "172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] \"GET /HTTP/1.1\" 403 5039",
"ip_address" => "172.16.213.132",
"timestamp" => "07/Feb/2018:16:24:19 +0800",
"@timestamp" => 2018-02-07T08:24:19.000Z,
"referrer" => "\"GET /HTTP/1.1\"",
"@version" => "1",
"host" => "ip-172-31-22-29.ec2.internal"
}
In this example you can feel how bloated the output is: effectively two copies of the content. So it is well worth dropping the raw message line.
3) Use remove_field to drop the message line.
First modify the config:
filter {
    grok {
        match => { "message" => "%{IP:ip_address}\ -\ -\ \[%{HTTPDATE:timestamp}\]\ %{QS:referrer}\ %{NUMBER:status}\ %{NUMBER:bytes}" }
    }
    date {
        match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
    mutate {
        remove_field => ["message", "timestamp"]
    }
}
Start it up:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l5.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039    # try entering this line manually
{
"referrer" => "\"GET /HTTP/1.1\"",
"bytes" => "5039",
"host" => "ip-172-31-22-29.ec2.internal",
"@timestamp" => 2018-02-07T08:24:19.000Z,
"status" => "403",
"ip_address" => "172.16.213.132",
"@version" => "1"
}
And that is exactly the final result we wanted.
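The whole chain above — grok capture plus remove_field — can be sketched in Python with named groups (the sub-patterns here are simplified stand-ins for grok's IP, HTTPDATE, QS and NUMBER):

```python
import re

# Named-group sketch of the combined grok pattern (simplified sub-patterns).
PATTERN = re.compile(
    r'(?P<ip_address>\S+) - - \[(?P<timestamp>[^\]]+)\] '
    r'(?P<referrer>"[^"]*") (?P<status>\d+) (?P<bytes>\d+)'
)

line = '172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039'
event = {"message": line, **PATTERN.search(line).groupdict()}

# mutate { remove_field => ["message", "timestamp"] }
for field in ("message", "timestamp"):
    event.pop(field, None)

print(event)
```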
1.4 Time handling (date)
Several of the examples above already used date. The date plugin is especially important for sorting events and for backfilling old data: it converts the time field in a log record into a LogStash::Timestamp object and stores it in the @timestamp field.
Why use this plugin?
1. On one hand, Logstash automatically stamps every collected event with a timestamp (@timestamp), but that timestamp records when the input received the data, not when the log was produced (the two are necessarily different), which can make searching the data confusing.
2. On the other hand, in the rubydebug output above, even after @timestamp takes its value from the timestamp field, it is still 8 hours behind Beijing time. That is because Elasticsearch stores all time-type fields in UTC internally, and storing logs in UTC is a common convention in the security and operations world. In practice this does not matter, because the ELK stack already solves it: in Kibana, the program reads the browser's current time zone and automatically converts UTC to local time in the web UI.
To parse your own times, you describe them with format letters: each letter indicates a time component (year, month, day, hour, minute, ...), and repeating a letter indicates the form of that value. The "dd/MMM/yyyy:HH:mm:ss Z" seen above uses exactly this scheme. The main letters are: y = year, M = month, d = day of month, H = hour (0-23), m = minute, s = second, Z = time-zone offset.
So how do we arrive at a format like "dd/MMM/yyyy:HH:mm:ss Z"?
This is the least obvious part, so let's spell it out. Our test data above is:
172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039
To convert the time we write "dd/MMM/yyyy:HH:mm:ss Z". Notice the three M's in the middle: two will not work, because MM means a two-digit month, while the text we are parsing uses an abbreviated English month name, so MMM is required. And why the capital Z at the end? Because the text contains the "+0800" time-zone offset; without the Z the filter cannot parse the text, and the timestamp conversion fails.
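The same format logic exists in Python's strptime, which makes the three-M / Z point easy to verify outside Logstash (%b is the abbreviated month name, the counterpart of MMM, and %z is the counterpart of Z):

```python
from datetime import datetime, timezone

# "dd/MMM/yyyy:HH:mm:ss Z" in Joda terms ≈ "%d/%b/%Y:%H:%M:%S %z" in strptime terms.
ts = datetime.strptime("07/Feb/2018:16:24:19 +0800", "%d/%b/%Y:%H:%M:%S %z")

# Converting to UTC reproduces the @timestamp value Logstash showed above.
print(ts.astimezone(timezone.utc).isoformat())  # → 2018-02-07T08:24:19+00:00
```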
1.5 Modifying data with the mutate plugin
mutate is another very important Logstash plugin. It provides rich processing for basic data types, including renaming, deleting, replacing, and modifying fields in a log event. Here are a few commonly used mutate capabilities: field type conversion (convert), regex-based replacement (gsub), splitting a string into an array on a separator (split), renaming fields (rename), and deleting fields (remove_field).
1) Field type conversion with convert
First modify the config file:
filter {
    grok {
        match => { "message" => "%{IPV4:ip}" }
        remove_field => ["message"]
    }
    mutate {
        convert => ["ip", "string"]
    }
}
Or write it this way; the difference is only stylistic:
filter {
    grok {
        match => { "message" => "%{IPV4:ip}" }
        remove_field => ["message"]
    }
    mutate {
        convert => {
            "ip" => "string"
        }
    }
}
Now start the service and check the effect:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l6.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
172.16.213.132 - - [07/Feb/2018:16:24:9 +0800] "GET /HTTP/1.1" 403 5039
{
"@timestamp" => 2019-01-23T04:13:55.261Z,
"ip" => "172.16.213.132",
"host" => "ip-172-31-22-29.ec2.internal",
"@version" => "1"
}
The effect on the ip field here is not very visible, but it really has been converted to the string type.
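convert is more obviously useful for numeric fields, for instance turning the captured status and bytes strings into integers so they can be aggregated. A sketch of that idea (the field values are taken from the sample log line above):

```python
# grok captures everything as strings; convert changes the stored type.
event = {"ip": "172.16.213.132", "status": "403", "bytes": "5039"}

# Equivalent of: mutate { convert => { "status" => "integer", "bytes" => "integer" } }
for field in ("status", "bytes"):
    event[field] = int(event[field])

print(event)  # status and bytes are now usable as numbers
```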
2) Replacing matched field values with a regex
gsub replaces values matched by a regular expression in a field; it only works on string fields.
First modify the config:
filter {
    grok {
        match => { "message" => "%{QS:referrer}" }
        remove_field => ["message"]
    }
    mutate {
        gsub => ["referrer", "/", "-"]
    }
}
Start it and see the effect:
172.16.213.132 - - [07/Feb/2018:16:24:9 +0800] "GET /HTTP/1.1" 403 5039
{
"host" => "ip-172-31-22-29.ec2.internal",
"@timestamp" => 2019-01-23T05:51:30.786Z,
"@version" => "1",
"referrer" => "\"GET -HTTP-1.1\""
}
Nice: the "/" separators in the QS part have indeed been replaced with dashes.
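gsub behaves like a global regex substitution. In Python terms:

```python
import re

referrer = '"GET /HTTP/1.1"'
# Equivalent of: mutate { gsub => ["referrer", "/", "-"] }
# i.e. replace every "/" in the field with "-".
referrer = re.sub(r"/", "-", referrer)
print(referrer)  # → "GET -HTTP-1.1"
```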
3) Splitting a string into an array with a separator
split splits a field's string into an array on a given separator.
First the config:
filter {
    mutate {
        split => ["message", "-"]
        add_field => ["A is lower case :", "%{[message][0]}"]
    }
}
This splits the field into an array on "-".
Start it:
a-b-c-d-e-f-g    # enter this line manually and press Enter
{
"A is lower case :" => "a",
"message" => [
[0] "a",
[1] "b",
[2] "c",
[3] "d",
[4] "e",
[5] "f",
[6] "g"
],
"host" => "ip-172-31-22-29.ec2.internal",
"@version" => "1",
"@timestamp" => 2019-01-23T06:07:18.062Z
}
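split plus add_field maps directly onto list indexing. A minimal sketch of the same transformation:

```python
message = "a-b-c-d-e-f-g"

# Equivalent of: mutate { split => ["message", "-"] }
parts = message.split("-")

# Equivalent of: add_field => ["A is lower case :", "%{[message][0]}"]
event = {"message": parts, "A is lower case :": parts[0]}
print(event["A is lower case :"])  # → a
```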
4) Renaming a field
rename renames a field.
filter {
    grok {
        match => { "message" => "%{IPV4:ip}" }
        remove_field => ["message"]
    }
    mutate {
        convert => {
            "ip" => "string"
        }
        rename => {
            "ip" => "IP"
        }
    }
}
Here rename takes curly braces {}; square brackets achieve the same thing:
mutate {
    convert => {
        "ip" => "string"
    }
    rename => ["ip", "IP"]
}
Start it and check:
172.16.213.132 - - [07/Feb/2018:16:24:9 +0800] "GET /HTTP/1.1" 403 5039    # enter this line manually
{
"@version" => "1",
"@timestamp" => 2019-01-23T06:20:21.423Z,
"host" => "ip-172-31-22-29.ec2.internal",
"IP" => "172.16.213.132"
}
5) Deleting fields: nothing more to say here, we have already seen examples above.
6) Adding fields with add_field.
add_field is often used together with split, mainly to output the split result in a specified format.
filter {
    mutate {
        split => ["message", "|"]
        add_field => {
            "timestamp" => "%{[message][0]}"
        }
    }
}
The added field then appears in the output just like @timestamp does.
1.6 GeoIP lookup with the geoip plugin
geoip is a widely used free IP geolocation library. Given an IP address, it returns the corresponding location information: country, region/city, latitude and longitude, and so on. This plugin is very useful for map visualizations and per-region statistics.
First modify the config and take a look:
filter {
    grok {
        match => {
            "message" => "%{IP:ip}"
        }
        remove_field => ["message"]
    }
    geoip {
        source => "ip"
    }
}
The match part can also be written like this:
grok {
    match => ["message", "%{IP:ip}"]
    remove_field => ["message"]
}
Start it and see the effect:
[root@:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l7.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
114.55.68.111 - - [07/Feb/2018:16:24:9 +0800] "GET /HTTP/1.1" 403 5039    # this line was entered manually
{
"ip" => "114.55.68.111",
"geoip" => {
"city_name" => "Hangzhou",
"region_code" => "33",
"location" => {
"lat" => 30.2936,
"lon" => 120.1614
},
"longitude" => 120.1614,
"latitude" => 30.2936,
"country_code2" => "CN",
"timezone" => "Asia/Shanghai",
"ip" => "114.55.68.111",
"country_code3" => "CN",
"continent_code" => "AS",
"country_name" => "China",
"region_name" => "Zhejiang"
},
"host" => "ip-172-31-22-29.ec2.internal",
"@version" => "1",
"@timestamp" => 2019-01-23T06:47:51.200Z
}
It worked.
But not every field in that output is something we want, so we can output selectively.
Modify the config as follows:
filter {
    grok {
        match => ["message", "%{IP:ip}"]
        remove_field => ["message"]
    }
    geoip {
        source => ["ip"]
        target => ["geoip"]
        fields => ["city_name", "region_name", "country_name", "ip"]
    }
}
Start it:
114.55.68.111 - - [07/Feb/2018:16:24:9 +0800] "GET /HTTP/1.1" 403 5039    # this line was entered manually
{
"@timestamp" => 2019-01-23T06:57:29.955Z,
"ip" => "114.55.68.111",
"geoip" => {
"city_name" => "Hangzhou",
"ip" => "114.55.68.111",
"country_name" => "China",
"region_name" => "Zhejiang"
},
"@version" => "1",
"host" => "ip-172-31-22-29.ec2.internal"
}
The output is indeed smaller now: we get exactly the fields we asked for.
1.7 Putting the filter plugins together
Our sample business log line is:
112.195.209.90 - - [20/Feb/2018:12:12:14 +0800] "GET / HTTP/1.1" 200 190 "-" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36" "-"
Double quotes, single quotes, square brackets, and other characters the regex would otherwise interpret must be escaped; see https://www.cnblogs.com/ysk123/p/9858387.html for details.
Now modify the config to match it:
filter {
    grok {
        match => ["message", "%{IPORHOST:client_ip}\ -\ -\ \[%{HTTPDATE:timestamp}\]\ %{QS:referrer}\ %{NUMBER:status}\ %{NUMBER:bytes}\ \"-\"\ \"%{DATA:browser_info}\ %{GREEDYDATA:extra_info}\"\ \"-\""]
    }
    geoip {
        source => ["client_ip"]
        target => ["geoip"]
        fields => ["city_name", "region_name", "country_name", "ip"]
    }
    date {
        match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
    mutate {
        remove_field => ["message", "timestamp"]
    }
}
Then start it up and look at the result:
[root@:vg_adn_tidbCkhsTest:23.22.172.65:172.31.22.29 /etc/logstash/conf.d]#/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/l9.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
112.195.209.90 - - [20/Feb/2018:12:12:14 +0800] "GET / HTTP/1.1" 200 190 "-" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36" "-"
{
"referrer" => "\"GET / HTTP/1.1\"",
"bytes" => "190",
"client_ip" => "112.195.209.90",
"@timestamp" => 2018-02-20T04:12:14.000Z,
"browser_info" => "Mozilla/5.0",
"extra_info" => "(Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36",
"status" => "200",
"host" => "ip-172-31-22-29.ec2.internal",
"@version" => "1",
"geoip" => {
"city_name" => "Chengdu",
"region_name" => "Sichuan",
"country_name" => "China",
"ip" => "112.195.209.90"
}
}
The log line above the JSON block is what we entered manually; the block below it is what Logstash returned.
The output shows the filtering succeeded. Excellent.
Note one thing: GREEDYDATA and DATA match differently. GREEDYDATA is greedy, consuming as much as it can, while DATA matches as little as possible. Study the example above again with that in mind.
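The difference can be demonstrated directly: GREEDYDATA corresponds to a greedy `.*`, while DATA corresponds to a lazy `.*?` (the sample text here is trimmed from the user-agent string above):

```python
import re

text = "(Linux; Android 6.0) AppleWebKit/537.36 (KHTML, like Gecko)"

# GREEDYDATA-style: .* runs all the way to the last closing parenthesis.
print(re.search(r"\((.*)\)", text).group(1))

# DATA-style: .*? stops at the first closing parenthesis it can.
print(re.search(r"\((.*?)\)", text).group(1))  # → Linux; Android 6.0
```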
Finally, here is a website for quickly debugging grok regular expressions: https://www.5axxw.com/tools/v2/grok.html . It helps with composing grok matching expressions.