ELK is easy to install and configure. Two points deserve extra attention when using it to manage OpenStack logs:
- Writing the Logstash configuration files
- Capacity planning for the Elasticsearch log storage
The ELKstack 中文指南 (a Chinese-language guide to the ELK stack) is also recommended.
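For the capacity-planning point, a back-of-envelope estimate is often enough to get started. A minimal sketch in Python; every number below (node count, per-node log volume, index overhead, replica count, retention) is an assumption for illustration, not a measurement from a real cluster:

```python
# Rough sizing of the Elasticsearch log store. All inputs are assumptions;
# measure your own cluster's daily log volume before committing to hardware.
nodes = 20                     # hosts shipping logs to the control node
raw_mb_per_node_per_day = 500  # raw OpenStack log volume per node (DEBUG on)
index_overhead = 1.5           # indexed size vs. raw size (stored fields + inverted index)
replicas = 1                   # one replica copy per primary shard
retention_days = 30            # how long indices are kept before deletion

daily_gb = nodes * raw_mb_per_node_per_day * index_overhead / 1024
total_gb = daily_gb * (1 + replicas) * retention_days
print(f"{daily_gb:.1f} GB/day, {total_gb:.0f} GB for {retention_days} days")
```

Doubling the raw size for index overhead and replicas, then multiplying by the retention window, gives the disk budget the control node must carry.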
ELK Overview
ELK is an excellent open-source stack for collecting, storing, and querying logs, and it is widely used in logging systems. Once an OpenStack cluster reaches a certain scale, log management and analysis become increasingly important, and a good, unified log management and analysis platform helps locate problems quickly. Both Mirantis Fuel and HPE Helion ship with ELK integrated.
- Logstash: log collection and shipping
- Elasticsearch: log storage and retrieval
- Kibana: log visualization through a polished web portal
Managing OpenStack logs with ELK brings the following benefits:
- Quickly query ERROR-level logs across the entire cluster
- Measure the request rate of a particular API
- Filter out the logs of an entire API call flow by its request ID
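The first and third use cases map directly onto Elasticsearch queries over the fields that the Logstash filter extracts. A sketch in Python that only builds the query bodies; the field names `loglevel` and `logmessage` come from the grok filter in the configuration, and the request ID value is a made-up example:

```python
import json

# Query body for all ERROR-level entries, newest first. "loglevel" is the
# field the grok filter extracts from each log line.
error_query = {
    "query": {"term": {"loglevel": "ERROR"}},
    "sort": [{"@timestamp": {"order": "desc"}}],
    "size": 50,
}

# Query body that pulls every line of one API call flow by its request ID
# (the ID below is invented). oslo logs embed the request ID in the message
# text, so a phrase match on "logmessage" is enough.
trace_query = {
    "query": {"match_phrase": {"logmessage": "req-a4e4e405-9b5b-4a3f-8f0e-0d9b0a6e2f11"}},
    "sort": [{"@timestamp": {"order": "asc"}}],
}

print(json.dumps(error_query, indent=2))
```

POST either body to the search endpoint on the control node, e.g. `http://controller:9200/logstash-*/_search` (index pattern assumed), to run it.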
Planning and Design
Deployment Architecture
The control node acts as the log server and stores all OpenStack and related logs. Logstash runs on every node, collects the required logs on that node, and ships them over the network (node/http protocol) to Elasticsearch on the control node; Kibana serves as the web portal that presents the log data.
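On the shipping side, the output section of each node's Logstash configuration might look like the following (a sketch in Logstash 1.x syntax; "controller" stands for the log server's host name):

```
output {
  elasticsearch {
    host => "controller"
    protocol => "http"   # or "node" to join the ES cluster as a client node
  }
}
```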

Log Format
To support fast and intuitive search, we want every OpenStack log entry to carry the following attributes for retrieval and filtering:
- Host: e.g. controller01, compute01
- Service name: e.g. nova-api, neutron-server
- Module: e.g. nova.filters
- Log level: e.g. DEBUG, INFO, ERROR
- Log date
- Request ID: the request ID of a particular request
Logstash can produce all of these attributes: it extracts the key fields from each log line to obtain them, and Elasticsearch then indexes them.
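To make the extraction concrete, the sketch below re-implements the oslofmt grok pattern from the configuration as a plain Python regular expression and runs it on one made-up oslo.log line (the request ID and message text are invented for illustration):

```python
import re

# Simplified Python equivalent of the grok pattern in the Logstash config,
# showing which fields it pulls out of an oslo.log line.
PATTERN = re.compile(
    r"^(?P<logdate>\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}(?:\.\d+)?)\s+"
    r"(?P<pid>\d+)?\s*"
    r"(?P<loglevel>AUDIT|CRITICAL|DEBUG|INFO|TRACE|WARNING|ERROR)\s+"
    r"(?P<module>\S+)\s*"
    r"(?P<logmessage>.*)$",
    re.DOTALL,
)

line = ("2015-04-01 12:00:00.123 2931 ERROR nova.filters "
        "[req-a4e4e405-9b5b-4a3f-8f0e-0d9b0a6e2f11] Filter RamFilter returned 0 hosts")
m = PATTERN.match(line)
print(m.group("loglevel"), m.group("module"))  # ERROR nova.filters
```

Host and service name need no parsing: the shipping Logstash instance knows its own host, and the `type` set on each file input identifies the service.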
Installation and Configuration
Installation
Installing ELK is straightforward; see the logstash-es-Kibana 安裝 guide, and search the web if you run into problems.
Configuration
Logstash configuration files use their own DSL, which takes some effort to learn. A good starting point is the openstack logstash config, which you can then adapt to your own needs:
input {
  file {
    path => ['/var/log/nova/nova-api.log']
    tags => ['nova', 'oslofmt']
    type => "nova-api"
  }
  file {
    path => ['/var/log/nova/nova-conductor.log']
    tags => ['nova-conductor', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/nova/nova-manage.log']
    tags => ['nova-manage', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/nova/nova-scheduler.log']
    tags => ['nova-scheduler', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/nova/nova-spicehtml5proxy.log']
    tags => ['nova-spice', 'oslofmt']
    type => "nova"
  }
  file {
    path => ['/var/log/keystone/keystone-all.log']
    tags => ['keystone', 'keystonefmt']
    type => "keystone"
  }
  file {
    path => ['/var/log/keystone/keystone-manage.log']
    tags => ['keystone', 'keystonefmt']
    type => "keystone"
  }
  file {
    path => ['/var/log/glance/api.log']
    tags => ['glance', 'oslofmt']
    type => "glance-api"
  }
  file {
    path => ['/var/log/glance/registry.log']
    tags => ['glance', 'oslofmt']
    type => "glance-registry"
  }
  file {
    path => ['/var/log/glance/scrubber.log']
    tags => ['glance', 'oslofmt']
    type => "glance-scrubber"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-agent-central.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-agent-central"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-alarm-notifier.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-alarm-notifier"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-api.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-api"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-alarm-evaluator.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-alarm-evaluator"
  }
  file {
    path => ['/var/log/ceilometer/ceilometer-collector.log']
    tags => ['ceilometer', 'oslofmt']
    type => "ceilometer-collector"
  }
  file {
    path => ['/var/log/heat/heat.log']
    tags => ['heat', 'oslofmt']
    type => "heat"
  }
  file {
    path => ['/var/log/neutron/neutron-server.log']
    tags => ['neutron', 'oslofmt']
    type => "neutron-server"
  }
  # Not collecting RabbitMQ logs for the moment
  # file {
  #   path => ['/var/log/rabbitmq/rabbit@<%= @hostname %>.log']
  #   tags => ['rabbitmq', 'oslofmt']
  #   type => "rabbitmq"
  # }
  file {
    path => ['/var/log/httpd/access_log']
    tags => ['horizon']
    type => "horizon"
  }
  file {
    path => ['/var/log/httpd/error_log']
    tags => ['horizon']
    type => "horizon"
  }
  file {
    path => ['/var/log/httpd/horizon_access_log']
    tags => ['horizon']
    type => "horizon"
  }
  file {
    path => ['/var/log/httpd/horizon_error_log']
    tags => ['horizon']
    type => "horizon"
  }
}
filter {
  if "oslofmt" in [tags] {
    multiline {
      negate => true
      pattern => "^%{TIMESTAMP_ISO8601} "
      what => "previous"
    }
    multiline {
      negate => false
      pattern => "^%{TIMESTAMP_ISO8601}%{SPACE}%{NUMBER}?%{SPACE}?TRACE"
      what => "previous"
    }
    grok {
      # Do multiline matching as the above multiline filter may add newlines
      # to the log messages.
      # TODO move the LOGLEVELs into a proper grok pattern.
      match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:logdate}%{SPACE}%{NUMBER:pid}?%{SPACE}?(?<loglevel>AUDIT|CRITICAL|DEBUG|INFO|TRACE|WARNING|ERROR) \[?\b%{NOTSPACE:module}\b\]?%{SPACE}?%{GREEDYDATA:logmessage}?" }
      add_field => { "received_at" => "%{@timestamp}" }
    }
  } else if "keystonefmt" in [tags] {
    grok {
      # Do multiline matching as the above multiline filter may add newlines
      # to the log messages.
      # TODO move the LOGLEVELs into a proper grok pattern.
      match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:logdate}%{SPACE}%{NUMBER:pid}?%{SPACE}?(?<loglevel>AUDIT|CRITICAL|DEBUG|INFO|TRACE|WARNING|ERROR) \[?\b%{NOTSPACE:module}\b\]?%{SPACE}?%{GREEDYDATA:logmessage}?" }
      add_field => { "received_at" => "%{@timestamp}" }
    }
    if [module] == "iso8601.iso8601" {
      # log message for each part of the date? Really?
      drop {}
    }
  } else if "libvirt" in [tags] {
    grok {
      match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:logdate}:%{SPACE}%{NUMBER:code}:?%{SPACE}\[?\b%{NOTSPACE:loglevel}\b\]?%{SPACE}?:?%{SPACE}\[?\b%{NOTSPACE:module}\b\]?%{SPACE}?%{GREEDYDATA:logmessage}?" }
      add_field => { "received_at" => "%{@timestamp}" }
    }
    mutate {
      uppercase => [ "loglevel" ]
    }
  } else if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:logmessage}" }
      add_field => [ "received_at", "%{@timestamp}" ]
    }
    syslog_pri {
      severity_labels => ["ERROR", "ERROR", "ERROR", "ERROR", "WARNING", "INFO", "INFO", "DEBUG" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_timestamp" ]
      add_field => [ "loglevel", "%{syslog_severity}" ]
      add_field => [ "module", "%{syslog_program}" ]
    }
  }
}
output {
  elasticsearch { host => "controller" }
}
openstack.org uses the same stack to monitor its infra:
http://logstash.openstack.org/#/dashboard/file/logstash.json
http://docs.openstack.org/infra/system-config/logstash.html
Sending Ceilometer data to ELK:
http://www.brownhat.org/wp/2014/09/18/openstack-logstash-and-elasticsearch-living-on-the-edge/
