ELK Part 10: shipping logs to redis with filebeat, then forwarding them to elasticsearch with logstash


Lab 1: filebeat collects logs into redis; logstash forwards them to the elasticsearch host

Architecture diagram:

Environment:

Host A: elasticsearch/kibana, IP: 192.168.7.100

Host B: logstash, IP: 192.168.7.102

Host C: filebeat/nginx, IP: 192.168.7.103

Host D: redis, IP: 192.168.7.104

1. filebeat collects system and nginx logs into the redis host

1.1 Install and configure the redis service

1. Install redis

# yum install redis  -y

2. Edit the redis configuration file to set the listen address and password

[root@web1 ~]# vim /etc/redis.conf 
bind 0.0.0.0
requirepass 123456

3. Start the redis service

# systemctl start redis

1.2 Configure the filebeat host to ship logs to the redis server

1. Edit the filebeat configuration file so logs are written to the redis server

[root@filebate tmp]# vim /etc/filebeat/filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    host: "192.168.7.103"
    type: "filebeat-syslog-7-103"
    app: "syslog"
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log 
  fields:
    host: "192.168.7.103"
    type: "filebeat-nginx-accesslog-7-103"
    app: "nginx"

output.redis: 
  hosts: ["192.168.7.104"] # redis server IP address
  port: 6379 # port redis listens on
  password: "123456" # redis password
  key: "filebeat-log-7-103"  # custom key name
  db: 0  # use the default database
  timeout: 5  # timeout in seconds; raise it if needed

2. Review the effective (non-comment) filebeat settings

[root@filebate tmp]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$" 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    host: "192.168.7.103"
    type: "filebeat-syslog-7-103"
    app: "syslog"
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log 
  fields:
    host: "192.168.7.103"
    type: "filebeat-nginx-accesslog-7-103"
    app: "nginx"
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
output.redis: 
  hosts: ["192.168.7.104"]
  port: 6379
  password: "123456"
  key: "filebeat-log-7-103"
  db: 0
  timeout: 5

3. Restart the filebeat service

# systemctl restart filebeat
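Each entry filebeat pushes onto the redis list is a single JSON document. A rough sketch of what one event might look like (the envelope is simplified and the field values are illustrative; real filebeat events carry extra metadata such as `@timestamp`):

```python
import json

# Illustrative, simplified shape of one filebeat event as stored in the
# redis list (an assumption for demonstration, not a verbatim capture).
event = {
    "message": "GET /index.html 200",
    "fields": {                      # the custom fields: block from filebeat.yml
        "host": "192.168.7.103",
        "type": "filebeat-nginx-accesslog-7-103",
        "app": "nginx",
    },
    "log": {"file": {"path": "/var/log/nginx/access.log"}},
}

# filebeat serializes the event to JSON before pushing it onto the key;
# logstash's codec => "json" reverses this when it pops the entry.
wire = json.dumps(event)
decoded = json.loads(wire)
print(decoded["fields"]["app"])   # -> nginx
```

The `fields:` block from filebeat.yml arrives nested under a top-level `fields` key, which is why the logstash output conditionals later test `[fields][app]`.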

2. Verify the data on the redis host

1. Log in with the redis client and list the keys. The expected key is present, confirming that data is reaching the redis server.

[root@web1 ~]# redis-cli -h 192.168.7.104
192.168.7.104:6379> auth 123456
OK
192.168.7.104:6379> KEYS *
1) "filebeat-log-7-103"
192.168.7.104:6379> 

3. Pull the logs from redis with logstash

1. Edit the logstash configuration file to read the logs from redis

[root@logstash conf.d]# vim logstash-to-es.conf 
input {
   redis {
     host => "192.168.7.104"  # redis host IP address
     port => "6379" # port
     db => "0"  # database matching the filebeat config
     password => "123456" # password
     data_type => "list"  # redis data type to read
     key => "filebeat-log-7-103" # key matching the filebeat config
     codec => "json"
   }
}


output {
  if [fields][app] == "syslog" {  # must match the app field set on the filebeat host
    elasticsearch {
      hosts => ["192.168.7.100:9200"] # forward the logs to the elasticsearch host
      index => "logstash-syslog-7-103-%{+YYYY.MM.dd}"
    }}

  if [fields][app] == "nginx" { # must match the app field set on the filebeat host
    elasticsearch {
      hosts => ["192.168.7.100:9200"]
      index => "logstash-nginx-accesslog-7-103-%{+YYYY.MM.dd}"
    }}
}
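The two conditionals in the output section can be sketched in plain Python: the event's `[fields][app]` value selects an index name, and the date supplies the `%{+YYYY.MM.dd}` suffix (`index_for` is a hypothetical helper for illustration, not part of logstash):

```python
from datetime import date

def index_for(event, day):
    """Mimic the output {} conditionals: route an event by its app field."""
    app = event.get("fields", {}).get("app")
    suffix = day.strftime("%Y.%m.%d")   # stand-in for %{+YYYY.MM.dd}
    if app == "syslog":
        return "logstash-syslog-7-103-" + suffix
    if app == "nginx":
        return "logstash-nginx-accesslog-7-103-" + suffix
    return None  # events without a matching app field are not indexed here

print(index_for({"fields": {"app": "nginx"}}, date(2020, 3, 16)))
# -> logstash-nginx-accesslog-7-103-2020.03.16
```

Events whose `app` value matches neither branch simply fall through both `if` blocks, so they are silently dropped by this output configuration.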

Check the syntax; if there are no problems, start the service

[root@logstash conf.d]# logstash -f  logstash-to-es.conf  -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-03-16 10:05:05.487 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK  # syntax check passed
[INFO ] 2020-03-16 10:05:16.597 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

2. Restart the logstash service

# systemctl restart logstash

3. In the elasticsearch-head plugin, check the index names that were created; the extracted log data should now be visible

4. Create the indices in the kibana web UI

1. Create an index pattern for the nginx logs in kibana; the system-log index is created the same way

2. In Discover, inspect the extracted nginx log data

3. Inspect the collected system logs

Lab 2: filebeat ships logs through logstash into redis, then a second logstash forwards them to the elasticsearch host

Architecture diagram:

Environment:

There are not many test hosts available here, so each role is tested on a single machine; in production, scale the layout out as described above.

Host A: elasticsearch/kibana, IP: 192.168.7.100

Host B: logstash-A, IP: 192.168.7.102

Host C: filebeat/nginx, IP: 192.168.7.103

Host D: redis, IP: 192.168.7.104

Host E: logstash-B, IP: 192.168.7.101

1. Install and configure the filebeat host

1. Install the filebeat package (download it from the official site first)

[root@filebeat-1 ~]# yum install filebeat-6.8.1-x86_64.rpm -y

2. Edit the filebeat configuration file so logs flow from filebeat to the first logstash host. If several filebeat hosts forward logs to several logstash hosts, each filebeat's output.logstash section can point at a different logstash host's IP address

[root@filebate ~]# grep -v "#" /etc/filebeat/filebeat.yml  | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    host: "192.168.7.103"
    type: "filebeat-syslog-7-103"
    app: "syslog"
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log 
  fields:
    host: "192.168.7.103" # this host's IP address
    type: "filebeat-nginx-accesslog-7-103"
    app: "nginx"
output.logstash: 
  hosts: ["192.168.7.101:5044"] # ship to the specified logstash server; with several filebeat hosts feeding different logstash hosts, point each filebeat at its own logstash IP address
  enabled: true # whether to ship to logstash; enabled by default
  worker: 1 # number of worker threads
  compression_level: 3  # compression level

3. Restart the filebeat service

# systemctl restart filebeat

2. Configure the logstash-B host to store the logs in the redis server

1. In the /etc/logstash/conf.d/ directory, create a configuration file that writes the logs to redis. With multiple filebeat, logstash, and redis hosts, the logs can be spread across redis hosts to reduce the load on any single logstash

[root@logstash-1 conf.d]# cat  filebeat-to-logstash.conf 
input {
  beats {
    host => "192.168.7.101" # this logstash host's IP address; another logstash host forwarding to redis would use its own IP here, spreading the load
    port => 5044  # port number
    codec => "json"
  }
}


output {
  if [fields][app] == "syslog" {
  redis {
       host => "192.168.7.104" # redis server address
       port => "6379"
       db => "0"
       data_type => "list"
       password => "123456"
       key =>  "filebeat-syslog-7-103"  # a distinct key per log type, for easy separation
       codec => "json"
  }}

  if [fields][app] == "nginx" {
  redis {
       host => "192.168.7.104"
       port => "6379"
       db => "0"
       data_type => "list"
       password => "123456"
       key =>  "filebeat-nginx-log-7-103" # a distinct key per log type, for easy analysis
       codec => "json"
  }}
}
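The two `redis {}` outputs above act as a fan-out: one beats stream is split into two redis lists keyed by app type. A minimal simulation, with plain Python lists standing in for the redis keys (`fan_out` is a hypothetical helper for illustration):

```python
# Map each app value to its target "redis key", matching the config above.
KEY_BY_APP = {
    "syslog": "filebeat-syslog-7-103",
    "nginx": "filebeat-nginx-log-7-103",
}

def fan_out(events):
    """Split one event stream into per-app lists, one per redis key."""
    lists = {key: [] for key in KEY_BY_APP.values()}
    for ev in events:
        app = ev.get("fields", {}).get("app")
        key = KEY_BY_APP.get(app)
        if key:                      # events with an unknown app fall through
            lists[key].append(ev)    # both conditionals and are dropped
    return lists

stream = [
    {"fields": {"app": "syslog"}, "message": "kernel: up"},
    {"fields": {"app": "nginx"}, "message": "GET / 200"},
]
out = fan_out(stream)
print({k: len(v) for k, v in out.items()})
```

Splitting by key here is what lets the downstream logstash-A pull each log type independently and keep the two index streams separate.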

2. Test the logstash configuration

[root@logstash-1 conf.d]# logstash -f filebeat-to-logstash.conf  -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-03-16 11:23:31.687 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK  # the configuration file passed the test

Restart the logstash service

# systemctl  restart logstash

3. On the redis host, both keys are now visible, confirming that the logstash host has stored the logs in redis

[root@web1 ~]# redis-cli -h 192.168.7.104
192.168.7.104:6379> auth 123456
OK
192.168.7.104:6379> KEYS *
1) "filebeat-nginx-log-7-103"
2) "filebeat-syslog-7-103"

3. Configure the logstash-A host to pull the logs from redis and forward them to the elasticsearch host

1. In the /etc/logstash/conf.d directory on the logstash host, create a configuration that pulls the logs from the redis host

[root@logstash conf.d]# cat  logstash-to-es.conf 
input {
   redis {
     host => "192.168.7.104" # redis host IP address
     port => "6379"
     db => "0"
     password => "123456"
     data_type => "list"
     key => "filebeat-syslog-7-103" # must match the key written by the upstream logstash
     codec => "json"
   }
   redis {
     host => "192.168.7.104" # redis host IP address
     port => "6379"
     db => "0"
     password => "123456"
     data_type => "list"
     key => "filebeat-nginx-log-7-103"  # must match the key written by the upstream logstash
     codec => "json"
   }
}


output {
  if [fields][app] == "syslog" {  # must match the app field set on the filebeat host
    elasticsearch {
      hosts => ["192.168.7.100:9200"] # elasticsearch host IP address
      index => "logstash-syslog-7-103-%{+YYYY.MM.dd}"
    }}

  if [fields][app] == "nginx" {  # must match the app field set on the filebeat host
    elasticsearch {
      hosts => ["192.168.7.100:9200"]
      index => "logstash-nginx-accesslog-7-103-%{+YYYY.MM.dd}"
    }}
}
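With two redis inputs feeding a single pipeline, events popped from both keys merge into one stream, and the same `[fields][app]` conditionals then route each event to its per-app index. A compact sketch of that merge-and-route step (`route` is a hypothetical helper for illustration):

```python
from datetime import date

def route(events, day):
    """Group a merged event stream by destination elasticsearch index."""
    suffix = day.strftime("%Y.%m.%d")   # stand-in for %{+YYYY.MM.dd}
    index_by_app = {
        "syslog": "logstash-syslog-7-103-" + suffix,
        "nginx": "logstash-nginx-accesslog-7-103-" + suffix,
    }
    routed = {}
    for ev in events:
        app = ev.get("fields", {}).get("app")
        if app in index_by_app:
            routed.setdefault(index_by_app[app], []).append(ev)
    return routed

merged = [  # events popped from both redis keys land in one input stream
    {"fields": {"app": "syslog"}},
    {"fields": {"app": "nginx"}},
]
result = route(merged, date(2020, 3, 16))
print(sorted(result))
```

Because routing is driven by the `app` field rather than by which redis key an event came from, adding a third log type only requires a new key upstream and one more conditional here.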

2. Test the logstash configuration file

[root@logstash conf.d]# logstash -f logstash-to-es.conf  -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-03-16 11:31:30.943 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK

3. Restart the logstash service

# systemctl restart logstash

4. In the elasticsearch-head plugin, inspect the collected system and nginx logs

4. Create the indices in kibana and inspect the collected log data

1. Create the nginx index pattern; the system-log index pattern is created the same way

2. Inspect the created index data
