https://www.cnblogs.com/Dev0ps/p/11465673.html
System architecture:
1) Multiple Filebeat instances collect logs on each node and ship them to Logstash (see the sketch after this list).
2) Multiple Logstash nodes run in parallel (load-balanced, not clustered), filter and process the log records, and forward them to the Elasticsearch cluster.
3) Multiple Elasticsearch nodes form a cluster that provides log indexing and storage.
4) Kibana searches and analyzes the log data stored in Elasticsearch.
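In this layout Filebeat ships to Logstash rather than straight to Elasticsearch. A minimal Filebeat output section for that hand-off, as a sketch only (the logstash:5044 service address is an assumption about how the Logstash release is exposed in the logs namespace):

output.logstash:
  hosts: ["logstash:5044"]   # Logstash service reachable inside the cluster
  loadbalance: true          # spread events across the parallel Logstash nodes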
1. Elasticsearch Deployment
Official chart: https://github.com/elastic/helm-charts/tree/master/elasticsearch
Create the logs namespace
kubectl create ns logs
Add the elastic helm charts repo
helm repo add elastic https://helm.elastic.co
Install
helm install --name elasticsearch elastic/elasticsearch --namespace logs
Parameters:
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
esJavaOpts: "-Xmx1g -Xms1g"
resources:
  requests:
    cpu: "100m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 50Gi
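To apply these overrides, save them to a values file and pass it to helm install, then wait for the cluster to come up. A sketch only: the file name es-values.yaml is arbitrary, and the label/service name elasticsearch-master assumes the chart's defaults.

# install with the overrides above and watch the pods become ready
helm install --name elasticsearch elastic/elasticsearch --namespace logs -f es-values.yaml
kubectl get pods -n logs -l app=elasticsearch-master -w

# check cluster health once the pods are running
kubectl port-forward -n logs svc/elasticsearch-master 9200:9200 &
curl "http://localhost:9200/_cluster/health?pretty"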
2. Filebeat Deployment
Official chart: https://github.com/elastic/helm-charts/tree/master/filebeat
Add the elastic helm charts repo
helm repo add elastic https://helm.elastic.co
Install it
helm install --name filebeat elastic/filebeat --namespace logs
Parameters:
image: "docker.elastic.co/beats/filebeat"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
resources:
  requests:
    cpu: "100m"
    memory: "100Mi"
  limits:
    cpu: "1000m"
    memory: "200Mi"
One question comes up: by default Filebeat collects Docker container logs from the host path /var/lib/docker/containers. If Docker was installed with a different data directory, how do we collect those logs? Simply change the hostPath parameter in the chart's DaemonSet template:
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers   # change to the actual Docker data directory
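Only the host side of the mount needs to change; the mountPath inside the Filebeat pod can stay /var/lib/docker/containers so the docker input's default log path still resolves. A sketch of the relevant DaemonSet fragments, assuming a hypothetical Docker data-root of /data/docker:

volumes:
  - name: varlibdockercontainers
    hostPath:
      path: /data/docker/containers           # hypothetical custom Docker data-root
containers:
  - name: filebeat
    volumeMounts:
      - name: varlibdockercontainers
        mountPath: /var/lib/docker/containers  # unchanged inside the container
        readOnly: true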
To merge the multi-line stack traces that Java programs emit on errors into a single event, define a multiline regex pattern:
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: docker
      containers.ids:
      - '*'
      multiline.pattern: '^[0-9]'
      multiline.negate: true
      multiline.match: after
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
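With this pattern, any line that does not start with a digit (i.e. does not begin with a timestamp) is appended to the previous event, so a stack trace like the following illustrative excerpt is indexed as one document:

2019-09-05 10:15:00.123 ERROR [http-nio-8080-exec-1] c.e.DemoController : request failed
java.lang.NullPointerException: null
        at com.example.DemoController.handle(DemoController.java:42)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)
2019-09-05 10:15:01.456 INFO  [http-nio-8080-exec-2] c.e.DemoController : next request

The two lines beginning with a digit start new events; everything in between is merged into the first one.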
3. Kibana Deployment
Official chart: https://github.com/elastic/helm-charts/tree/master/kibana
Add the elastic helm charts repo
helm repo add elastic https://helm.elastic.co
Install it
helm install --name kibana elastic/kibana --namespace logs
Parameters:
elasticsearchHosts: "http://elasticsearch-master:9200"
replicas: 1
image: "docker.elastic.co/kibana/kibana"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
resources:
  requests:
    cpu: "100m"
    memory: "500Mi"
  limits:
    cpu: "1000m"
    memory: "1Gi"
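The chart exposes Kibana through a ClusterIP service; with the release name above that service is typically called kibana-kibana (an assumption based on the chart's naming convention). Before setting up an Ingress, a quick way to reach the UI is to port-forward:

kubectl port-forward -n logs svc/kibana-kibana 5601:5601
# then open http://localhost:5601 and create an index pattern for the filebeat-*/logstash-* indices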
4. Logstash Deployment
Official chart: https://github.com/helm/charts/tree/master/stable/logstash
Install
helm install --name logstash stable/logstash --namespace logs
Parameters:
image:
  repository: docker.elastic.co/logstash/logstash-oss
  tag: 7.2.0
  pullPolicy: IfNotPresent
persistence:
  enabled: true
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 2Gi
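The pipeline itself can also be supplied through the chart's values. The inputs/filters/outputs keys below follow the stable/logstash chart's values layout, and the grok filter is only an illustrative sketch of the "filter and process" step from the architecture overview:

inputs:
  main: |-
    input {
      beats {
        port => 5044                 # Filebeat ships here instead of directly to Elasticsearch
      }
    }
filters:
  main: |-
    filter {
      # illustrative: pull a timestamp and log level out of the message
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
      }
    }
outputs:
  main: |-
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-master:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }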
Match logs from pods labeled logFormat: json and parse them as JSON; pods without the label are collected normally.
filebeatConfig:
  filebeat.yml: |
    filebeat.autodiscover:
      providers:
      - type: kubernetes
        templates:
        - condition:
            equals:
              kubernetes.labels.logFormat: "json"
          config:
          - type: docker
            containers.ids:
            - "${data.kubernetes.container.id}"
            json.keys_under_root: true
            json.overwrite_keys: true
            json.add_error_key: true
        - config:
          - type: docker
            containers.ids:
            - "${data.kubernetes.container.id}"
    processors:
    - add_kubernetes_metadata:
        in_cluster: true
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
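For this to take effect a workload only needs the label on its pod template. The deployment below is an illustrative sketch: any pod that logs one JSON object per line and carries logFormat: json will have its JSON fields lifted to the top level of the event.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-json-logger              # hypothetical workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-json-logger
  template:
    metadata:
      labels:
        app: demo-json-logger
        logFormat: "json"             # matched by the autodiscover condition above
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "while true; do echo '{\"level\":\"info\",\"msg\":\"hello\"}'; sleep 5; done"]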
5. Elastalert Deployment
Official chart: https://github.com/helm/charts/tree/master/stable/elastalert
Install
helm install -n elastalert ./elastalert --namespace logs
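Alert rules are passed in through the chart's values. The rules map below follows the stable/elastalert chart's values layout; the error-frequency rule and the Slack webhook URL are illustrative placeholders:

elasticsearchHost: elasticsearch-master
elasticsearchPort: 9200
rules:
  error_frequency: |-
    ---
    name: Too many ERROR logs
    type: frequency
    index: filebeat-*
    num_events: 50
    timeframe:
      minutes: 5
    filter:
    - query:
        query_string:
          query: "message: ERROR"
    alert:
    - slack
    slack_webhook_url: "https://hooks.slack.com/services/XXX"   # placeholder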
Result screenshot: