Installing an Elasticsearch Cluster on Kubernetes


Official documentation: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html
YAML manifest: https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml

Prerequisites

1. Kubernetes 1.16-1.20 (a quick version check follows the list below)
2. Supported versions of the Elastic Stack components:

  • Elasticsearch, Kibana, APM Server: 6.8+, 7.1+
  • Enterprise Search: 7.7+
  • Beats: 7.0+
  • Elastic Agent: 7.10+
  • Elastic Maps Server: 7.11+
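
Before installing, it is worth confirming that the cluster meets the Kubernetes version requirement above. A minimal check, assuming kubectl is already pointed at the target cluster:

# The server version should fall within the supported 1.16-1.20 range
kubectl version --short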

Installation Steps

1. Install ECK in Kubernetes
2. Deploy an Elasticsearch cluster with ECK
3. Deploy Kibana
4. Upgrade the deployment
5. Storage
6. Verification

Installing ECK on Kubernetes

1. Install the custom resource definitions, the operator, and its RBAC rules:

kubectl create -f https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml

The command succeeds, with some deprecation warnings in the output:

namespace/elastic-system created
serviceaccount/elastic-operator created
secret/elastic-webhook-server-cert created
configmap/elastic-operator created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator-view created
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
service/elastic-webhook-server created
statefulset.apps/elastic-operator created
Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created

2. Monitor the operator logs:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

The logs also show the deprecation warnings listed above.
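
Before moving on, you can also make sure the operator pod itself is running; this extra check is not part of the official steps, but it is a quick sanity test:

# The operator runs as a one-replica StatefulSet in the elastic-system namespace
kubectl -n elastic-system get pods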

Deploying an Elasticsearch Cluster with ECK

Note: if none of your Kubernetes nodes has at least 2GiB of free memory, the pod will stay in a Pending state.

The following example creates a single-node cluster (one pod):

By default the image is pulled from docker.elastic.co/elasticsearch/elasticsearch:7.13.1. If that registry is unreachable, you can pull the image from Docker Hub first and then re-tag it:
docker pull elasticsearch:7.13.1
docker tag elasticsearch:7.13.1 docker.elastic.co/elasticsearch/elasticsearch:7.13.1

# cat es.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.13.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false   # disable mmap so the nodes do not require raising vm.max_map_count

# kubectl create -f es.yaml
elasticsearch.elasticsearch.k8s.elastic.co/quickstart created

Monitor cluster health and creation progress:

kubectl get elasticsearch

NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    1       7.12.1    Ready   5m33s

When the cluster is first created, the health is unset and the phase is empty. After a while, the phase becomes Ready and the health turns green.
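
If you want to follow that transition live, a simple watch on the resource (using the quickstart name from the manifest above) works:

kubectl get elasticsearch quickstart -w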

You can see that one pod is starting up:

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'

NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          5m54s

View the pod logs:

kubectl logs -f quickstart-es-default-0

Accessing Elasticsearch

Using the -k flag to disable certificate verification is not recommended; use it only for testing.
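
For reference, instead of -k you can verify against the cluster's self-signed CA. This is only a sketch: it assumes ECK publishes the certificate in a secret named quickstart-es-http-certs-public with a ca.crt key (check the secrets in your namespace if the name or key differs), and it reuses the $PASSWORD variable obtained below:

# Extract the CA certificate and let curl validate against it (run from inside the cluster)
kubectl get secret quickstart-es-http-certs-public -o go-template='{{index .data "ca.crt" | base64decode}}' > ca.crt
curl --cacert ca.crt -u "elastic:$PASSWORD" "https://quickstart-es-http:9200"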

kubectl get service quickstart-es-http

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
quickstart-es-http   ClusterIP   10.3.255.226   <none>        9200/TCP   2m52s

# Get the password; the default username is elastic
PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

# Access from inside the Kubernetes cluster
curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"

# Access from the host machine (forward the service port first)
kubectl port-forward service/quickstart-es-http 9200
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"
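
Once the connection works, the standard _cluster/health API is a quick way to confirm the overall cluster state:

curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_cluster/health?pretty"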

With three pods the cluster looks as follows; the pod count can be scaled freely (see the upgrade section below):

# curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_cat/nodes"
10.0.0.137 15 63 59 6.24 7.62 6.57 cdfhilmrstw - quickstart-es-default-2
10.0.1.219 30 64 46 4.80 3.21 2.53 cdfhilmrstw * quickstart-es-default-0
10.0.2.193 55 63 44 1.97 2.50 2.30 cdfhilmrstw - quickstart-es-default-1

Deploying Kibana

# cat kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.13.1
  count: 1
  elasticsearchRef:
    name: quickstart

# kubectl create -f kibana.yaml 
kibana.kibana.k8s.elastic.co/quickstart created

Monitor Kibana health and creation progress:

# kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-5f844868fb-lrn2f   1/1     Running   1          4m26s

Accessing Kibana

# kubectl get service quickstart-kb-http
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
quickstart-kb-http   ClusterIP   10.3.255.47   <none>        5601/TCP   5m11s

# Access from the local machine
# kubectl port-forward service/quickstart-kb-http 5601
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601

# Get the password for the default elastic user
# kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo

# Another way to access Kibana
A Service named quickstart-kb-http of type ClusterIP is created by default. You can change its type to NodePort to reach Kibana at <host-IP>:<NodePort>, as sketched below.
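
For example, a minimal sketch of switching the Service type with kubectl patch (the Service name quickstart-kb-http comes from the output above; check which NodePort gets allocated afterwards):

kubectl patch service quickstart-kb-http -p '{"spec":{"type":"NodePort"}}'
kubectl get service quickstart-kb-http   # note the node port mapped to 5601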

Upgrading the Deployment

First make sure the Kubernetes cluster has enough resources to accommodate the change (extra storage, and enough memory and CPU to temporarily start the new pods).
The following example increases the number of Elasticsearch nodes from 1 to 3; apply it as shown after the manifest.

# cat es.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.13.1
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
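
Applying the updated manifest is all that is needed; the operator handles adding the new nodes. A minimal sketch, reusing the es.yaml file above:

kubectl apply -f es.yaml
kubectl get elasticsearch quickstart -w   # watch NODES go from 1 to 3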

Storage

By default, the operator creates a PersistentVolumeClaim with a capacity of 1Gi for every pod in the Elasticsearch cluster, to prevent data loss if a pod is accidentally deleted. For production workloads you should define your own volume claim template with the required storage capacity and, optionally, the Kubernetes storage class to associate with the persistent volumes. The name of the volume claim must always be elasticsearch-data. A way to check the resulting claims is shown after the example below.

spec:
  nodeSets:
  - name: default
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: standard
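
To confirm what was provisioned, you can list the claims that belong to the cluster; a minimal check, assuming the same cluster-name label used to select the pods earlier is also set on the PVCs:

kubectl get pvc -l elasticsearch.k8s.elastic.co/cluster-name=quickstart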

If Elasticsearch nodes are scaled down, ECK automatically deletes the corresponding PersistentVolumeClaim resources. Depending on the configured storage class reclaim policy, the underlying persistent volumes may be retained.
In addition, the volumeClaimDeletePolicy attribute lets you control how ECK should handle PersistentVolumeClaims when you delete the Elasticsearch cluster entirely.

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es
spec:
  version: 7.13.1
  volumeClaimDeletePolicy: DeleteOnScaledownOnly
  nodeSets:
  - name: default
    count: 3

The possible values are DeleteOnScaledownAndClusterDeletion and DeleteOnScaledownOnly. The default is DeleteOnScaledownAndClusterDeletion, which means all PersistentVolumeClaims are deleted together with the Elasticsearch cluster. DeleteOnScaledownOnly, by contrast, keeps the PersistentVolumeClaims when the Elasticsearch cluster is deleted. If you recreate the deleted cluster with the same name and node sets, the new cluster adopts the existing PersistentVolumeClaims.
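
A quick way to observe the effect of DeleteOnScaledownOnly: delete the cluster and check that the claims are left behind (the resource name es matches the manifest above):

kubectl delete elasticsearch es
kubectl get pvc   # with DeleteOnScaledownOnly the claims remain and are reused if the cluster is recreated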

