Deploying an Elasticsearch Cluster on Kubernetes
1. Preliminary preparation
1.1 Create the elastic namespace
The namespace manifest is as follows:
elastic.namespace.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: elastic
Create the elastic namespace:
$ kubectl apply -f elastic.namespace.yaml
namespace/elastic created
1.2 Generate the X-Pack certificate files
Elasticsearch ships with a certificate-generation tool, elasticsearch-certutil. We can generate the certificates in a temporary Docker container first, copy them out, and reuse them in the steps that follow.
1.2.1 Start a temporary ES container
$ docker run -it -d --name elastic-cret docker.elastic.co/elasticsearch/elasticsearch:7.8.0 /bin/bash
62acfabc85f220941fcaf08bc783c4e305813045683290fe7b15f95e37e70cd0
1.2.2 Generate the key files inside the container
$ docker exec -it elastic-cret /bin/bash
$ ./bin/elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificates when running in 'cert' mode.
Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority
By default the 'ca' mode produces a single PKCS#12 output file which holds:
* The CA certificate
* The CA's private key
If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key
Please enter the desired output file [elastic-stack-ca.p12]:
Enter password for elastic-stack-ca.p12 :
$ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
......
Enter password for CA (elastic-stack-ca.p12) :
Please enter the desired output file [elastic-certificates.p12]:
Enter password for elastic-certificates.p12 :
Certificates written to /usr/share/elasticsearch/elastic-certificates.p12
This file should be properly secured as it contains the private key for
your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.
For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
$ ls *.p12
elastic-certificates.p12 elastic-stack-ca.p12
Note: none of the prompts above need to be filled in; just press Enter to accept the defaults.
1.2.3 Copy the certificate file out of the container for later use
$ docker cp elastic-cret:/usr/share/elasticsearch/elastic-certificates.p12 .
$ docker rm -f elastic-cret
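Optionally you can sanity-check a PKCS#12 keystore with openssl before distributing it. The sketch below is self-contained: it builds a throwaway bundle and lists its certificates; the same inspection can be run against the elastic-certificates.p12 copied out above (older keystores may need the -legacy flag on OpenSSL 3):

```shell
# Build a throwaway CA key/cert and bundle it as PKCS#12 (empty password,
# matching the blank prompts above)
openssl req -x509 -newkey rsa:2048 -keyout demo-ca.key -out demo-ca.crt \
  -days 1 -nodes -subj "/CN=demo-ca" 2>/dev/null
openssl pkcs12 -export -inkey demo-ca.key -in demo-ca.crt \
  -out demo-certificates.p12 -passout pass:
# Count the certificates bundled in the keystore without extracting keys
openssl pkcs12 -in demo-certificates.p12 -nokeys -passin pass: 2>/dev/null \
  | grep -c 'BEGIN CERTIFICATE'
```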
2 Create the master node
The master node controls the whole cluster. The manifests are as follows:
2.1 Configure data persistence for the master node
# Create the manifest file
elasticsearch-master.pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-elasticsearch-master
  namespace: elastic
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client # the StorageClass to use
  resources:
    requests:
      storage: 10Gi
# Create the PVC
$ kubectl apply -f elasticsearch-master.pvc.yaml
$ kubectl get pvc -n elastic
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-elasticsearch-master Bound pvc-9ef037b7-c4b2-11ea-8237-ac1f6bd6d98e 10Gi RWX nfs-client-ssd 38d
Copy the previously generated certificate file into a certs directory under the PVC's backing directory (here ${MASTER_PVC_HOME} stands for the host directory backing the master PVC; the node configuration expects the certificate at /usr/share/elasticsearch/data/certs), for example:
$ mkdir ${MASTER_PVC_HOME}/certs
$ cp elastic-certificates.p12 ${MASTER_PVC_HOME}/certs/
Create the admin account for the ELK cluster, using elastic as the username:
$ kubectl -n elastic create secret generic elastic-credentials --from-literal=username=elastic --from-literal=password=your-password
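Secret values are stored base64-encoded; the snippet below demonstrates the encoding round-trip kubectl applies, and (commented out, since it assumes a live cluster) how to read the value back from the elastic-credentials Secret:

```shell
# Kubernetes stores Secret data base64-encoded; decoding recovers the value:
encoded=$(printf 'elastic' | base64)
printf '%s' "$encoded" | base64 -d    # prints: elastic
# Against the live cluster (assumes the elastic-credentials Secret above):
#   kubectl -n elastic get secret elastic-credentials \
#     -o jsonpath='{.data.username}' | base64 -d
```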
2.2 Create the master node ConfigMap manifest
The ConfigMap object stores the master node's configuration, to simplify Elasticsearch configuration and enable the X-Pack security features. The resource object is as follows:
elasticsearch-master.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-master-config
  labels:
    app: elasticsearch
    role: master
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}
    network.host: 0.0.0.0
    node:
      master: true
      data: false
      ingest: false
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
---
2.3 Create the master node Service manifest
The master node only needs port 9300 for inter-node communication. The manifest is as follows:
elasticsearch-master.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: master
---
2.4 Create the master node Deployment manifest
The Deployment defines the master node's application Pod. The manifest is as follows:
elasticsearch-master.deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      containers:
      - name: elasticsearch-master
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-master
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: ES_JAVA_OPTS
          value: "-Xms2048m -Xmx2048m"
        - name: ELASTIC_USERNAME
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: username
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: password
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-master-config
      - name: storage
        persistentVolumeClaim:
          claimName: pvc-elasticsearch-master
---
2.5 Create the three master resource objects
$ kubectl apply -f elasticsearch-master.configmap.yaml \
-f elasticsearch-master.service.yaml \
-f elasticsearch-master.deployment.yaml
configmap/elasticsearch-master-config created
service/elasticsearch-master created
deployment.apps/elasticsearch-master created
$ kubectl get pods -n elastic -l app=elasticsearch
NAME READY STATUS RESTARTS AGE
elasticsearch-master-7fc5cc8957-jfjmr 1/1 Running 0 23m
Once the Pod reaches the Running state, the master node has been installed successfully.
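Instead of polling kubectl get pods, you can block until the Deployment finishes rolling out with the standard rollout subcommand:

```shell
# Wait for the master Deployment to become available (gives up after 5 minutes)
kubectl -n elastic rollout status deployment/elasticsearch-master --timeout=5m
```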
3 Install the Elasticsearch data nodes
Next we install the ES data nodes, which host the cluster's data and execute queries.
3.1 Create the data node ConfigMap manifest
As with the master node, a ConfigMap holds the data nodes' ES configuration. The manifest is as follows:
elasticsearch-data.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-data-config
  labels:
    app: elasticsearch
    role: data
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}
    network.host: 0.0.0.0
    node:
      master: false
      data: true
      ingest: false
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
---
3.2 Create the data node Service manifest
Like the master, the data nodes only need port 9300 to communicate with the other nodes. The resource object is as follows:
elasticsearch-data.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: data
---
3.3 Create the data node StatefulSet controller
The data nodes use a StatefulSet controller because there are several of them and each holds different data that must be stored separately; volumeClaimTemplates defines a dedicated storage volume for each data node. The manifest is as follows:
elasticsearch-data.statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-data
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms4096m -Xmx4096m"
        - name: ELASTIC_USERNAME
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: username
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: password
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: elasticsearch-data-persistent-storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-data-config
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-client-ssd
      resources:
        requests:
          storage: 500Gi
---
3.4 Create the data node resource objects
$ kubectl apply -f elasticsearch-data.configmap.yaml \
-f elasticsearch-data.service.yaml \
-f elasticsearch-data.statefulset.yaml
configmap/elasticsearch-data-config created
service/elasticsearch-data created
statefulset.apps/elasticsearch-data created
Copy the previously prepared ES certificate file into each data node's PVC directory, just as for the master node (one copy per data node; ${DATA_PVC_HOME} stands for the host directory backing each data node's PVC):
$ mkdir ${DATA_PVC_HOME}/certs
$ cp elastic-certificates.p12 ${DATA_PVC_HOME}/certs/
Once the Pods reach the Running state, the data nodes have started successfully:
$ kubectl get pods -n elastic -l app=elasticsearch
NAME READY STATUS RESTARTS AGE
elasticsearch-data-0 1/1 Running 0 47m
elasticsearch-data-1 1/1 Running 0 47m
elasticsearch-master-7fc5cc8957-jfjmr 1/1 Running 0 100m
4 Install the Elasticsearch client node
The client node exposes an HTTP interface for querying data and passes data on to the data nodes.
4.1 Create the client node ConfigMap manifest
elasticsearch-client.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-client-config
  labels:
    app: elasticsearch
    role: client
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}
    network.host: 0.0.0.0
    node:
      master: false
      data: false
      ingest: true
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/data/certs/elastic-certificates.p12
---
4.2 Create the client node Service manifest
The client node exposes two ports: 9300 for communication with the other cluster nodes and 9200 for the HTTP API. The resource object is as follows:
elasticsearch-client.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  ports:
  - port: 9200
    name: client
    nodePort: 9200 # note: outside the default NodePort range (30000-32767); requires an extended service-node-port-range on the API server
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: client
  type: NodePort
---
4.3 Create the client node Deployment manifest
elasticsearch-client.deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  selector:
    matchLabels:
      app: elasticsearch
      role: client
  template:
    metadata:
      labels:
        app: elasticsearch
        role: client
    spec:
      containers:
      - name: elasticsearch-client
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-client
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: ES_JAVA_OPTS
          value: "-Xms2048m -Xmx2048m"
        - name: ELASTIC_USERNAME
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: username
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elastic-credentials
              key: password
        ports:
        - containerPort: 9200
          name: client
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-client-config
      - name: storage
        persistentVolumeClaim:
          claimName: pvc-elasticsearch-client
---
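Note that the Deployment above mounts a PVC named pvc-elasticsearch-client that is not created anywhere in this walkthrough; it must exist before the Pod can be scheduled. A minimal sketch of such a claim, mirroring the master PVC (the size and StorageClass here are assumptions):

```shell
# Hypothetical PVC for the client node; size and StorageClass mirror the master PVC
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-elasticsearch-client
  namespace: elastic
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 10Gi
EOF
```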
4.4 Create the client node resource objects
$ kubectl apply -f elasticsearch-client.configmap.yaml \
-f elasticsearch-client.service.yaml \
-f elasticsearch-client.deployment.yaml
configmap/elasticsearch-client-config created
service/elasticsearch-client created
deployment.apps/elasticsearch-client created
Once all the nodes are in the Running state, the installation has succeeded:
$ kubectl get pods -n elastic -l app=elasticsearch
NAME READY STATUS RESTARTS AGE
elasticsearch-client-f4d4ff794-6gxpz 1/1 Running 0 23m
elasticsearch-data-0 1/1 Running 0 47m
elasticsearch-data-1 1/1 Running 0 47m
elasticsearch-master-7fc5cc8957-jfjmr 1/1 Running 0 54m
While the client is deploying, you can watch the cluster health status change with the following command:
$ kubectl logs -f -n elastic \
> $(kubectl get pods -n elastic | grep elasticsearch-master | sed -n 1p | awk '{print $1}') \
> | grep "Cluster health status changed from"
{"type": "server", "timestamp": "2020-08-18T06:35:20,859Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana_1][0]]]).", "cluster.uuid": "Yy1ctnq7SjmRsuYfbJGSzA", "node.id": "z7vrjgYcTUiiB7tb0kXQ1Q" }
5 Generate the initial passwords
Because we enabled the X-Pack security module to protect the cluster, initial passwords are required. Use the bin/elasticsearch-setup-passwords tool inside the client node container to generate them, as shown below:
$ kubectl exec $(kubectl get pods -n elastic | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
-n elastic \
-- bin/elasticsearch-setup-passwords auto -b
Changed password for user apm_system
PASSWORD apm_system = 5wg8JbmKOKiLMNty90l1
Changed password for user kibana_system
PASSWORD kibana_system = 1bT0U5RbPX1e9zGNlWFL
Changed password for user kibana
PASSWORD kibana = 1bT0U5RbPX1e9zGNlWFL
Changed password for user logstash_system
PASSWORD logstash_system = 1ihEyA5yAPahNf9GuRJ9
Changed password for user beats_system
PASSWORD beats_system = WEWDpPndnGvgKY7ad0T9
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = MOCszTmzLmEXQrPIOW4T
Changed password for user elastic
PASSWORD elastic = bbkrgVrsE3UAfs2708aO
After the passwords are generated, store the elastic user's password in a Kubernetes Secret object:
$ kubectl create secret generic elasticsearch-pw-elastic \
-n elastic \
--from-literal password=bbkrgVrsE3UAfs2708aO
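With the passwords in place, you can sanity-check the cluster over the client node's HTTP port. The sketch below uses a port-forward (the password is the one generated for elastic above):

```shell
# Forward the client Service's HTTP port to localhost (assumes kubectl access)
kubectl -n elastic port-forward svc/elasticsearch-client 9200:9200 &
PF_PID=$!
sleep 2
# Query cluster health as the elastic superuser (password from setup-passwords)
curl -s -u elastic:bbkrgVrsE3UAfs2708aO 'http://localhost:9200/_cluster/health?pretty'
kill "$PF_PID"
```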
6 Deploy Kibana
With the Elasticsearch cluster installed, install Kibana to visualize the Elasticsearch data.
6.1 Create the Kibana ConfigMap manifest
Create a ConfigMap resource object for Kibana's configuration file; it defines the Elasticsearch address, user, and password. The manifest is as follows:
kibana.configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
---
6.2 Create the Kibana Service manifest
kibana.service.yaml
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  type: NodePort # required for nodePort below to take effect
  ports:
  - port: 5601
    name: webinterface
    nodePort: 5601 # note: outside the default NodePort range (30000-32767)
  selector:
    app: kibana
---
6.3 Create the Kibana Deployment manifest
kibana.deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
        - containerPort: 5601
          name: webinterface
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-client.elastic.svc.cluster.local:9200"
        - name: ELASTICSEARCH_USER
          value: "elastic"
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        - name: "I18N_LOCALE"
          value: "zh-CN"
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
---
6.4 Create the Kibana Ingress manifest
Here an Ingress exposes the Kibana service so it can be reached via a domain name. The manifest is as follows:
kibana.ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: elastic
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: kibana.demo.com
    http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: 5601
        path: /
6.5 Create the Kibana resource objects from the manifests
$ kubectl apply -f kibana.configmap.yaml \
-f kibana.service.yaml \
-f kibana.deployment.yaml \
-f kibana.ingress.yaml
configmap/kibana-config created
service/kibana created
deployment.apps/kibana created
ingress/kibana created
After the deployment completes, check the startup status via the Kibana logs:
$ kubectl logs -f -n elastic $(kubectl get pods -n elastic | grep kibana | sed -n 1p | awk '{print $1}') \
> | grep "Status changed from yellow to green"
{"type":"log","@timestamp":"2020-08-18T06:35:29Z","tags":["status","plugin:elasticsearch@7.8.0","info"],"pid":8,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
Once the status turns green, we can open the Kibana service in a browser via the Ingress domain name:
$ kubectl get ingress -n elastic
NAME HOSTS ADDRESS PORTS AGE
kibana kibana.demo.cn 80 40d
6.6 Log in to Kibana and configure it
Log in with the elastic user and the generated password stored in the Secret object created above, as shown in the figure:
Create a superuser for day-to-day access: go to Stack Management > Users > Create user and enter the details:
Once created, you can manage the cluster as the custom admin user.
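The same user can also be created without the UI, through Elasticsearch's security API; the user name, password, and role below are illustrative assumptions:

```shell
# Hypothetical example: create an "admin" user with the superuser role via the
# security API (assumes the port-forward and the elastic password from above)
curl -s -u elastic:bbkrgVrsE3UAfs2708aO \
  -X POST 'http://localhost:9200/_security/user/admin' \
  -H 'Content-Type: application/json' \
  -d '{"password":"your-admin-password","roles":["superuser"],"full_name":"Admin"}'
```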