Starting a Logstash service with Docker and k8s


Prerequisite: Elasticsearch and Kibana services are already up and running.

Pull the image; the version must match the Elasticsearch and Kibana versions.

docker pull docker.elastic.co/logstash/logstash:6.6.2
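To confirm the versions line up, you can query Elasticsearch before pulling and check that the image arrived afterwards (a quick sketch, assuming Elasticsearch is reachable at 172.16.90.24:9200 as used later in this post):

# Check the Elasticsearch version so the Logstash image tag can match it
curl -s http://172.16.90.24:9200 | grep '"number"'

# Confirm the Logstash image was pulled
docker images docker.elastic.co/logstash/logstash:6.6.2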

Write the log-collection (pipeline) configuration file

# cat /etc/logstash/conf.d/logstash.conf 
input {
    stdin {}
}

filter {
}

output {
    elasticsearch {
        hosts => ["172.16.90.24:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}

This pipeline takes standard input and writes to both Elasticsearch and standard output.
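Before starting the container it can be worth validating the pipeline syntax with Logstash's --config.test_and_exit flag; a minimal sketch, assuming the official image forwards its arguments to the logstash binary (the mount path mirrors the one used below):

# Dry-run the pipeline config inside the container; it exits after the syntax check
docker run --rm -it -v /etc/logstash/conf.d/logstash.conf:/usr/share/logstash/pipeline/logstash.conf docker.elastic.co/logstash/logstash:6.6.2 logstash -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit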

Write the Logstash settings file

# cat /etc/logstash/conf.d/logstash.yml 
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://172.16.90.24:9200

Start Logstash with Docker

docker run --rm -it -v /etc/logstash/conf.d/logstash.conf:/usr/share/logstash/pipeline/logstash.conf -v /etc/logstash/conf.d/logstash.yml:/usr/share/logstash/config/logstash.yml docker.elastic.co/logstash/logstash:6.6.2

Parameter breakdown

docker run  # run a container
--rm        # remove the container when it exits
-it         # interactive mode with a pseudo-TTY (needed here so you can type on stdin; -d would run detached instead)
-v /etc/logstash/conf.d/logstash.conf:/usr/share/logstash/pipeline/logstash.conf  # mount the pipeline (log-collection) config
-v /etc/logstash/conf.d/logstash.yml:/usr/share/logstash/config/logstash.yml      # mount the Logstash settings file; the image default points at a host named "elasticsearch"
docker.elastic.co/logstash/logstash:6.6.2  # image to run

Because the pipeline input is stdin, you can type log lines directly in the terminal.
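For example, typing a line at the prompt should produce a rubydebug event on the same terminal (the exact field layout below is indicative only; values will differ on your machine):

# Typed into the running container's stdin:
hello logstash

# Expected shape of the rubydebug output:
# {
#       "message" => "hello logstash",
#    "@timestamp" => ...,
#      "@version" => "1",
#          "host" => "<container hostname>"
# }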

 

Log in to Kibana, add the index, and the same content can be seen there.
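The same event can also be checked directly against Elasticsearch; a quick query sketch, where the index name follows the logstash-%{+YYYY.MM.dd} pattern configured above:

# List the indices created by the pipeline
curl -s 'http://172.16.90.24:9200/_cat/indices/logstash-*?v'

# Search for the test message
curl -s 'http://172.16.90.24:9200/logstash-*/_search?q=message:hello&pretty'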

 

Using Logstash to collect logs by listening on port 5044

Logstash pipeline configuration

input {
    beats {
        port => 5044
    }
}

filter {
}

output {
    elasticsearch {
        hosts => ["172.16.90.24:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}

Listen on port 5044 and send the logs to both Elasticsearch and standard output.

Start it

docker run --rm -it -v /etc/logstash/conf.d/logstash.conf:/usr/share/logstash/pipeline/logstash.conf -v /etc/logstash/conf.d/logstash.yml:/usr/share/logstash/config/logstash.yml -p 5044:5044 docker.elastic.co/logstash/logstash:6.6.2

This exposes port 5044 on the host.
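A quick way to confirm the listener is up before pointing Filebeat at it (a hedged check; adjust the address to wherever the container runs, and note nc must be installed on the client machine):

# Verify that port 5044 is listening on the Docker host
ss -tnlp | grep 5044

# Or test reachability from the Filebeat machine
nc -zv 192.168.1.11 5044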

Install Filebeat and configure it to collect logs

# sed '/#/d' /etc/filebeat/filebeat.yml |sed '/^$/d'
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.logstash:
  hosts: ["192.168.1.11:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
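Before starting the service, Filebeat's own test subcommands can confirm that the configuration parses and that the Logstash output at 192.168.1.11:5044 is reachable:

# Validate the configuration file
filebeat test config -c /etc/filebeat/filebeat.yml

# Check connectivity to the Logstash output defined above
filebeat test output -c /etc/filebeat/filebeat.yml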

Start Filebeat

systemctl start filebeat
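To confirm it is running and shipping events, check the service status and Filebeat's own log (paths shown are the defaults for an RPM/DEB install):

# Service status
systemctl status filebeat

# Follow Filebeat's log for connection or publish errors
journalctl -u filebeat -f
# or: tail -f /var/log/filebeat/filebeat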

At this point, however, Logstash fails to write to Elasticsearch with the error: failed to parse field [host] of type [text] in document with id 'E0lsjW4BTdp_eLcgfhbu'. The Elasticsearch log shows that host is now a JSON object, and it needs to be a string.
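The conflict comes from the index mapping: the earlier stdin events indexed host as a text field, while Filebeat (with add_host_metadata enabled) sends host as an object. The current mapping can be inspected to confirm this (a diagnostic sketch):

# Inspect how the host field is currently mapped in the logstash-* indices
curl -s 'http://172.16.90.24:9200/logstash-*/_mapping/field/host?pretty'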

 

Modify the configuration and add a filter that renames host.name to host

input {
    beats {
        port => 5044
    }
    #stdin {}
}

filter {
    mutate {
        rename => { "[host][name]" => "host" }
    }
}

output {
    elasticsearch {
        hosts => ["172.16.90.24:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}

Start Logstash with Docker again

docker run --rm -it -v /etc/logstash/conf.d/logstash.conf:/usr/share/logstash/pipeline/logstash.conf -v /etc/logstash/conf.d/logstash.yml:/usr/share/logstash/config/logstash.yml -p 5044:5044 docker.elastic.co/logstash/logstash:6.6.2

The output is now normal: the host field is a string rather than a JSON object, and the events are indexed into Elasticsearch without errors.

 

