2.3.1: Writing collected data to standard output (the console)
Configuration example:
output {
stdout {
codec => rubydebug
}
}
Codec is an abbreviation of the two words coder/decoder. Logstash is not just an input | filter | output pipeline; it is really an input | decode | filter | encode | output pipeline, and the codec is what decodes and encodes events.
Put simply, when Logstash reads data in, the codec parses (decodes) the raw input into an event of the corresponding format, and when Logstash writes data out, the codec serializes (encodes) the event back into the desired format.
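As a minimal sketch of this idea (the codec names json and json_lines are standard Logstash codecs; the rest of the pipeline is only an example): attaching a codec to an input decodes the raw data into an event, and attaching one to an output encodes the event back out.
input {
  stdin {
    codec => json        # decode: parse each incoming line as a JSON object into event fields
  }
}
output {
  stdout {
    codec => json_lines  # encode: serialize each event as one JSON document per line
  }
}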
Demo:
input {stdin{}}
output {
stdout {
codec => rubydebug
}
}
Start: bin/logstash -f /usr/local/elk/logstash-5.5.2/conf/template/stdout.conf
Output:
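For a line typed into stdin, the rubydebug codec prints the event as a pretty-printed Ruby hash, roughly like the following (field values such as host and @timestamp will of course differ on your machine):
{
       "message" => "hello logstash",
      "@version" => "1",
    "@timestamp" => 2018-04-10T00:44:11.000Z,
          "host" => "hadoop01"
}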
2.3.2: Saving collected data to a file
Gathering data that is scattered across hundreds of servers into one central server through a log collection system is the most basic operations requirement.
Requirement: collect the data into log files via Logstash, separated by business line and by collection date (the day the data was collected).
input {stdin{}}
output {
file {
path => "/home/angel/logstash-5.5.2/logs/stdout/mobile-collection/%{+YYYY-MM-dd}-%{host}.txt"
codec => line {
format => "%{message}"
}
gzip => true
}
}
Start:
bin/logstash -f /home/angel/servers/logstash-5.5.2/logstash_conf/stdout_file.conf
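After typing a few lines into stdin, you can check the result on disk. A rough check (the actual file name depends on the current date and host; note that because gzip => true the content is gzip-compressed even though the name ends in .txt):
# one file per day and per host
ls /home/angel/logstash-5.5.2/logs/stdout/mobile-collection/
# read the gzip-compressed content with zcat instead of cat (the file name here is an example)
zcat /home/angel/logstash-5.5.2/logs/stdout/mobile-collection/2018-04-10-hadoop01.txt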
2.3.3: Saving collected data to Elasticsearch
Logstash can sink the collected data directly into Elasticsearch.
input {stdin{}}
output {
elasticsearch {
hosts => ["hadoop01:9200"]
index => "logstash-%{+YYYY.MM.dd}" # the index name used in Elasticsearch; choose it carefully, because later queries depend on it. Including the date lets you restrict queries by day, and using the document type to distinguish business lines lets you query a given day's data by type plus a time range
flush_size => 20000 # batch size: events are buffered and bulk-submitted to Elasticsearch once 20000 have accumulated (default 500)
idle_flush_time => 10 # maximum seconds between flushes; together, flush_size and idle_flush_time send data in batches by size or by time, which reduces Logstash's network I/O
user => elastic
password => changeme
}
}
Start: bin/logstash -f /usr/local/elk/logstash-5.5.2/conf/template/stdout_es.conf
Enter the following 6 lines into the console:
192.168.77.1 - - [10/Apr/2018:00:44:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 505 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.2 - - [10/Apr/2018:00:45:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 460 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.3 - - [10/Apr/2018:00:46:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 510 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.4 - - [10/Apr/2018:00:47:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 112 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.5 - - [10/Apr/2018:00:48:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 455 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.6 - - [10/Apr/2018:00:49:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 653 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
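To confirm that the 6 events reached Elasticsearch, a quick check along these lines can help (a sketch only; the host, credentials and index pattern follow the configuration above, and the index date will be the day you ran the demo):
# list the logstash-* indices and their document counts
curl -u elastic:changeme 'http://hadoop01:9200/_cat/indices/logstash-*?v'
# fetch a couple of documents back
curl -u elastic:changeme 'http://hadoop01:9200/logstash-*/_search?pretty&size=2'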
2.3.4: Saving collected data to Redis
Configuration:
input { stdin {} }
output {
redis {
host => "hadoop01"
data_type => "list"
db => 2
port => "6379"
key => "logstash-chan-%{+yyyy.MM.dd}"
}
}
Tuning how data is written to Redis (a configuration sketch follows the list):
• Batching (only when data_type is list)
o batch: when set to true, a whole batch of events is stored with a single rpush command; the default is false, i.e. one rpush stores one event. When set to true, each rpush sends batch_events events.
o batch_events: how many events to send per rpush; default 50.
o batch_timeout: the maximum time a batch may wait before it is flushed; default 5 s.
• Congestion protection (only when data_type is list)
o congestion_interval: how often to check for congestion; default 1 s. Setting it to 0 checks before every rpush.
o congestion_threshold: the maximum number of items allowed in the list; the default 0 disables congestion checking. When the list length reaches congestion_threshold, Logstash blocks until other consumers have drained the list, which protects Redis from running out of memory (OOM).
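Putting these options together, a hedged sketch of a batched Redis output (the option names are those of the logstash-output-redis plugin; the values are examples only):
output {
  redis {
    host => "hadoop01"
    port => 6379
    db => 2
    data_type => "list"
    key => "logstash-chan-%{+yyyy.MM.dd}"
    batch => true                   # one rpush per batch instead of per event
    batch_events => 50              # events per rpush
    batch_timeout => 5              # flush a partial batch after 5 seconds
    congestion_interval => 1        # check the list length once per second
    congestion_threshold => 20000   # block once the list holds this many items (0 disables the check)
  }
}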
Start Redis, then feed the data into the Logstash console:
192.168.77.1 - - [10/Apr/2018:00:44:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 505 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.2 - - [10/Apr/2018:00:45:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 460 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.3 - - [10/Apr/2018:00:46:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 510 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.4 - - [10/Apr/2018:00:47:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 112 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.5 - - [10/Apr/2018:00:48:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 455 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.6 - - [10/Apr/2018:00:49:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 653 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
Authenticate against Redis and check whether the data has been stored:
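A rough check with redis-cli (a sketch; add -a with your password only if requirepass is set on the instance, and the dated key name below is just an example):
redis-cli -h hadoop01 -p 6379 -n 2 KEYS 'logstash-chan-*'          # find the dated list key (db 2 per the config)
redis-cli -h hadoop01 -p 6379 -n 2 LLEN logstash-chan-2018.04.10   # number of stored events
redis-cli -h hadoop01 -p 6379 -n 2 LRANGE logstash-chan-2018.04.10 0 1   # inspect the first two entries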