ELK Stack
- Elasticsearch: a distributed search and analytics engine that is highly scalable, highly reliable, and easy to manage. Built on Apache Lucene, it can store, search, and analyze large volumes of data in near real time, and it is often used as the underlying search engine for applications that need complex search features;
- Logstash: a data collection engine. It dynamically gathers data from a variety of sources, filters, parses, enriches, and normalizes it, and then ships it to a destination of your choice;
- Kibana: a data analysis and visualization platform. It is usually paired with Elasticsearch to search and analyze the data stored there and present it as charts and dashboards;
- Filebeat: a newer member of the ELK family, a lightweight open-source log file shipper. It was developed from the Logstash-Forwarder codebase as its replacement. Install Filebeat on each server whose logs you want to collect and point it at the log directories or files; it then reads the data and quickly forwards it to Logstash for parsing, or directly to Elasticsearch for centralized storage and analysis.
A proven architecture for large volumes (hundreds of millions of log entries):
Filebeat * n + Redis + Logstash + Elasticsearch + Kibana
For small to medium deployments (what this article sets up):
Filebeat * n + Logstash + Elasticsearch + Kibana
Deploying Filebeat with Docker
docker-compose.yml
version: '3'
services:
  filebeat:
    build:
      context: .
      args:
        ELK_VERSION: 7.1.1
    user: root
    container_name: 'filebeat'
    volumes:
      - ./config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro # config file, mounted read-only
      - /var/lib/docker/containers:/var/lib/docker/containers:ro  # Docker container logs to collect
      - /var/run/docker.sock:/var/run/docker.sock:ro
Dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/beats/filebeat:${ELK_VERSION}
config/filebeat.yml
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - add_cloud_metadata: ~

filebeat.inputs:
  - type: container # "json-file" is a Docker logging driver, not a Filebeat input type; use the container input
    paths:
      - /var/lib/docker/containers/*/*.log

output.logstash:
  hosts: ["192.168.31.45:5000"] # change this to the address Logstash listens on
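The `paths` glob above determines which files Filebeat picks up: each container gets a directory named after its ID, containing an `<id>-json.log` file. As a rough sketch of how the pattern resolves (the container IDs below are invented, and Python's `pathlib` matching is only a stand-in for Filebeat's globbing):

```python
from pathlib import PurePosixPath

# The glob from filebeat.yml above.
pattern = "/var/lib/docker/containers/*/*.log"

examples = {
    "/var/lib/docker/containers/3f4a9b/3f4a9b-json.log": True,   # matched
    "/var/lib/docker/containers/3f4a9b/config.v2.json": False,   # not *.log
    "/var/lib/docker/containers/3f4a9b/deep/extra.log": False,   # * stays within one directory
}
for path, expected in examples.items():
    matched = PurePosixPath(path).match(pattern)
    assert matched == expected
```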
Notice, however, that the log files are named after the container ID, so there is no way to tell which image a given log belongs to. To fix this, we add labels in each container's docker-compose file.
version: "3"
services:
  nginx:
    image: nginx
    container_name: nginx
    labels:
      service: nginx
    ports:
      - 80:80
    logging:
      driver: json-file
      options:
        labels: "service"
The log output then looks like the following, which makes it possible to route different containers into different Elasticsearch indices:
{"log":"172.18.0.1 - - [05/Jul/2019:06:33:55 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.29.0\" \"-\"\n","stream":"stdout","attrs":{"service":"nginx"},"time":"2019-07-05T06:33:55.973727477Z"}
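Each line in a json-file log is one JSON object. A minimal sketch in Python (stdlib only) showing how the `service` label and the original log line can be recovered from such an entry:

```python
import json

# One raw line from /var/lib/docker/containers/<id>/<id>-json.log,
# as produced by the json-file logging driver with labels enabled.
raw = ('{"log":"172.18.0.1 - - [05/Jul/2019:06:33:55 +0000] '
       '\\"GET / HTTP/1.1\\" 200 612 \\"-\\" \\"curl/7.29.0\\" \\"-\\"\\n",'
       '"stream":"stdout",'
       '"attrs":{"service":"nginx"},'
       '"time":"2019-07-05T06:33:55.973727477Z"}')

entry = json.loads(raw)
service = entry["attrs"]["service"]   # the label set in docker-compose
message = entry["log"].rstrip("\n")   # the original log line

print(service)   # nginx
print(message)
```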
Deploying the ELK stack with Docker
Honestly, pages of pasted configuration are tedious to read, and it is impossible to explain every option here; for any option or docker-compose concept you do not recognize, you will need to read up on it yourself.
Let's start with the directory layout:
├── docker-compose.yml
├── elasticsearch
│ ├── config
│ │ └── elasticsearch.yml
│ └── Dockerfile
├── kibana
│ ├── config
│ │ └── kibana.yml
│ └── Dockerfile
└── logstash
├── config
│ └── logstash.yml
├── Dockerfile
└── pipeline
└── logstash.conf
docker-compose.yml
version: '2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms512m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  esdata:
elasticsearch
elasticsearch/config/elasticsearch.yml
cluster.name: docker-cluster
node.name: master
node.master: true
node.data: true
network.host: 0.0.0.0
network.publish_host: 192.168.31.45 # this is my LAN IP; change it to yours
cluster.initial_master_nodes:
  - master
http.cors.enabled: true
http.cors.allow-origin: "*"
elasticsearch/Dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
kibana
kibana/config/kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
kibana/Dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
logstash
logstash/config/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
logstash/pipeline/logstash.conf
input {
  beats {
    port => 5000
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
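The comment above marks where filters go. As a hedged sketch of the per-service index routing mentioned earlier, a mutate filter could copy the container label into its own field, which the elasticsearch output could then use via `index => "%{service}-%{+YYYY.MM.dd}"`. The `[attrs][service]` field name is an assumption based on the json-file sample shown earlier; inspect a real event in Kibana before relying on it:

```conf
filter {
  # Hypothetical: copy the docker json-file label into a top-level field.
  # [attrs][service] assumes the sample log format shown earlier.
  mutate {
    add_field => { "service" => "%{[attrs][service]}" }
  }
}
```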
logstash/Dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}
That completes the deployment. The only things left to change are the LAN IP in elasticsearch.yml and whatever filters you add to logstash.conf.