https://www.cnblogs.com/Dev0ps/p/11465673.html
System architecture:
1) Multiple Filebeat instances collect logs on each node and ship them to Logstash.
2) Multiple Logstash nodes run in parallel (load-balanced, not clustered), filter and process the log records, and forward them to the Elasticsearch cluster (a minimal wiring sketch follows this list).
3) Multiple Elasticsearch nodes form a cluster that provides log indexing and storage.
4) Kibana searches and analyzes the log data stored in Elasticsearch.
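For the Filebeat-to-Logstash leg described above, the per-node Filebeat instances would point their output at the Logstash tier rather than at Elasticsearch. The snippet below is only a sketch of that wiring; the logstash host name is an assumption, and note that the chart defaults shown later in this post ship Filebeat straight to Elasticsearch instead.

# filebeat.yml excerpt: send events to the Logstash tier instead of Elasticsearch
output.logstash:
  hosts: ["logstash:5044"]   # assumed Logstash service name and beats port
  loadbalance: true          # spread events across several Logstash hosts if listed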
1. Elasticsearch deployment
Official chart: https://github.com/elastic/helm-charts/tree/master/elasticsearch
Create the logs namespace
kubectl create ns logs
Add the elastic helm charts repo
helm repo add elastic https://helm.elastic.co
Install
helm install --name elasticsearch elastic/elasticsearch --namespace logs
Parameter notes:
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
esJavaOpts: "-Xmx1g -Xms1g"
resources:
  requests:
    cpu: "100m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "nfs-client"
  resources:
    requests:
      storage: 50Gi
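To apply these overrides, they can be saved in a local values file and passed to helm install; the file name below is illustrative, while the elasticsearch-master service name comes from the chart defaults used throughout this post.

helm install --name elasticsearch elastic/elasticsearch --namespace logs -f es-values.yaml

# once the pods are Running, check cluster health through a local port-forward
kubectl get pods -n logs
kubectl port-forward -n logs svc/elasticsearch-master 9200:9200 &
curl -s http://localhost:9200/_cluster/health?pretty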
2. Filebeat deployment
Official chart: https://github.com/elastic/helm-charts/tree/master/filebeat
Add the elastic helm charts repo
helm repo add elastic https://helm.elastic.co
Install it
helm install --name filebeat elastic/filebeat --namespace logs
Parameter notes:
image: "docker.elastic.co/beats/filebeat"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
resources:
  requests:
    cpu: "100m"
    memory: "100Mi"
  limits:
    cpu: "1000m"
    memory: "200Mi"
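As with Elasticsearch, these overrides can be passed in via a values file; the file name is illustrative. Filebeat is deployed as a DaemonSet, so one pod per schedulable node is expected:

helm install --name filebeat elastic/filebeat --namespace logs -f filebeat-values.yaml

# expect one filebeat pod per node
kubectl get daemonset -n logs
kubectl get pods -n logs -o wide | grep filebeat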
One question comes up: by default Filebeat collects Docker container logs from the host path /var/lib/docker/containers. If Docker was installed with a different data root, how do we collect those logs? Simply change the hostPath parameter in the chart's DaemonSet template:
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers   # change this to Docker's actual data root
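If you are not sure where Docker keeps its data on a node, the daemon can report its data root directly (run this on the node itself); the example path is illustrative:

docker info --format '{{.DockerRootDir}}'
# e.g. /data/docker  ->  set hostPath.path to /data/docker/containers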
To merge multi-line Java exception stack traces into a single log event, define a regex with the multiline options:
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: docker
      containers.ids:
      - '*'
      multiline.pattern: '^[0-9]'
      multiline.negate: true
      multiline.match: after
    processors:
    - add_kubernetes_metadata:
        in_cluster: true
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
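As an illustration (the log lines below are made up), with pattern '^[0-9]', negate: true and match: after, any line that does not start with a digit is appended to the most recent line that does, so a timestamped stack trace is shipped as one event:

2019-09-05 12:00:01.123 ERROR [api] Unhandled exception        <- starts an event
java.lang.NullPointerException: null                           <- appended
    at com.example.UserService.load(UserService.java:42)       <- appended
    at com.example.UserController.get(UserController.java:17)  <- appended
2019-09-05 12:00:02.456 INFO  [api] next request               <- starts a new event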
3. Kibana deployment
Official chart: https://github.com/elastic/helm-charts/tree/master/kibana
Add the elastic helm charts repo
helm repo add elastic https://helm.elastic.co
Install it
helm install --name kibana elastic/kibana --namespace logs
Parameter notes:
elasticsearchHosts: "http://elasticsearch-master:9200"
replicas: 1
image: "docker.elastic.co/kibana/kibana"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
resources:
  requests:
    cpu: "100m"
    memory: "500Mi"
  limits:
    cpu: "1000m"
    memory: "1Gi"
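Once the Kibana pod is ready, a quick way to reach the UI is a local port-forward; the service name kibana-kibana assumes the chart's default naming for the release name used above:

kubectl port-forward -n logs svc/kibana-kibana 5601:5601
# then open http://localhost:5601 and create an index pattern such as filebeat-*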
4. Logstash deployment
Official chart: https://github.com/helm/charts/tree/master/stable/logstash
Install
$ helm install --name logstash stable/logstash --namespace logs
Parameter notes:
image:
  repository: docker.elastic.co/logstash/logstash-oss
  tag: 7.2.0
  pullPolicy: IfNotPresent
persistence:
  enabled: true
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 2Gi
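To wire Filebeat through Logstash as in the architecture diagram, the Logstash pipeline has to accept Beats input and forward to the Elasticsearch cluster. The sketch below assumes the stable/logstash chart exposes the pipeline through inputs/outputs values (check the chart's values.yaml for the exact keys); the index name is illustrative:

inputs:
  main: |-
    input {
      beats {
        port => 5044
      }
    }
outputs:
  main: |-
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-master:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }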
Match logs from pods labeled logFormat: "json" and decode them as JSON; pods without the label are collected as plain text.
filebeatConfig:
  filebeat.yml: |
    filebeat.autodiscover:
      providers:
      - type: kubernetes
        templates:
        - condition:
            equals:
              kubernetes.labels.logFormat: "json"
          config:
          - type: docker
            containers.ids:
            - "${data.kubernetes.container.id}"
            json.keys_under_root: true
            json.overwrite_keys: true
            json.add_error_key: true
        - config:
          - type: docker
            containers.ids:
            - "${data.kubernetes.container.id}"
    processors:
    - add_kubernetes_metadata:
        in_cluster: true
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
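A pod opts into the JSON branch simply by carrying the label the condition checks for. The manifest below is purely illustrative (name, image and log fields are made up); its stdout lines would be decoded as JSON by the template above:

apiVersion: v1
kind: Pod
metadata:
  name: json-logger-demo            # illustrative name
  labels:
    logFormat: "json"               # matches the autodiscover condition above
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do echo '{\"level\":\"info\",\"msg\":\"hello\"}'; sleep 5; done"]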
5. Elastalert deployment
Official chart: https://github.com/helm/charts/tree/master/stable/elastalert
Install
helm install -n elastalert ./elastalert --namespace logs
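ElastAlert rules are plain YAML files (the stable chart typically takes them through a rules value). A minimal frequency-rule sketch, where the index pattern, threshold and Slack webhook are illustrative assumptions:

name: too-many-errors
type: frequency
index: filebeat-*
num_events: 10
timeframe:
  minutes: 5
filter:
- query:
    query_string:
      query: 'message: "ERROR"'
alert:
- "slack"
slack_webhook_url: "https://hooks.slack.com/services/XXX/YYY/ZZZ"   # illustrative placeholder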
Result screenshots: