Prometheus Monitoring of Kubernetes (13): Prometheus Operator Data Persistence


Prometheus Operator Data Persistence

https://www.qikqiak.com/k8s-book/docs/60.Prometheus%20Operator%E9%AB%98%E7%BA%A7%E9%85%8D%E7%BD%AE.html

 

Earlier, after modifying the permissions, we restarted the Prometheus Pod. If we look closely, we will notice that the data collected before the restart is gone. This is because the Prometheus instance created through the prometheus CRD does not persist its data. We can confirm this by inspecting the volume mounts of the generated Prometheus Pod:

kubectl get pod prometheus-k8s-0 -n monitoring -o yaml
......
    volumeMounts:
    - mountPath: /etc/prometheus/config_out
      name: config-out
      readOnly: true
    - mountPath: /prometheus
      name: prometheus-k8s-db
......
  volumes:
......
  - emptyDir: {}
    name: prometheus-k8s-db
......

 

As we can see, Prometheus's data directory /prometheus is actually mounted through an emptyDir volume. The lifecycle of data in an emptyDir is tied to the lifecycle of the Pod, so when the Pod goes away, the data goes with it. That is why the old data disappeared after we recreated the Pod. For production monitoring, the data obviously needs to be persisted, and the prometheus CRD resource provides a configuration mechanism for this. Since our Prometheus is ultimately deployed through a StatefulSet controller, we will use a StorageClass for data persistence. First, create a StorageClass object:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus-data-db
provisioner: fuseim.pri/ifs


Here we declare a StorageClass object with provisioner=fuseim.pri/ifs, because our cluster uses NFS as the storage backend, and the nfs-client-provisioner we created earlier in this course sets PROVISIONER_NAME to fuseim.pri/ifs. This name must match exactly and cannot be changed arbitrarily. Save the file as prometheus-storageclass.yaml:

 

[root@k8s-master manifests]# vim prometheus-storageclass.yaml
[root@k8s-master manifests]# kubectl apply -f prometheus-storageclass.yaml
storageclass.storage.k8s.io/prometheus-data-db created
[root@k8s-master manifests]#
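For reference, the provisioner name comes from the environment of the nfs-client-provisioner Deployment created in the earlier course. The fragment below is a sketch of the relevant part; the image tag, NFS server address, and export path are illustrative and not from this article, but the PROVISIONER_NAME value must equal the StorageClass's provisioner field exactly:

```yaml
# Sketch of the nfs-client-provisioner container spec that defines
# PROVISIONER_NAME. The StorageClass's `provisioner` field must match
# this value exactly; the NFS server/path values below are placeholders.
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs        # must equal StorageClass.provisioner
        - name: NFS_SERVER
          value: 10.0.0.10             # illustrative NFS server address
        - name: NFS_PATH
          value: /data/nfs             # illustrative export path
```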

 

Then add the following configuration to the prometheus CRD resource object:

storage:
  volumeClaimTemplate:
    spec:
      storageClassName: prometheus-data-db
      resources:
        requests:
          storage: 10Gi

 

[root@k8s-master manifests]# kubectl get crd -n monitoring
NAME                                    CREATED AT
alertmanagers.monitoring.coreos.com     2019-10-08T08:02:15Z
podmonitors.monitoring.coreos.com       2019-10-08T08:02:15Z
prometheuses.monitoring.coreos.com      2019-10-08T08:02:15Z
prometheusrules.monitoring.coreos.com   2019-10-08T08:02:15Z
servicemonitors.monitoring.coreos.com   2019-10-08T08:02:16Z
[root@k8s-master manifests]#
[root@k8s-master manifests]# cat prometheus-prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: prometheus-data-db
        resources:
          requests:
            storage: 10Gi
  baseImage: quay.io/prometheus/prometheus
  nodeSelector:
    beta.kubernetes.io/os: linux
  replicas: 2
  secrets:
  - etcd-certs
  additionalScrapeConfigs:
    name: additional-configs
    key: prometheus-additional.yaml
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.11.0
[root@k8s-master manifests]# kubectl apply -f prometheus-prometheus.yaml
prometheus.monitoring.coreos.com/k8s unchanged
[root@k8s-master manifests]#
[root@k8s-master manifests]# kubectl get pv -n monitoring | grep prometheus-k8s-db
monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-f318725c-a645-40a6-ba9f-01c274c0e603   10Gi   RWO   Delete   Bound   monitoring/prometheus-k8s-db-prometheus-k8s-0   prometheus-data-db   36s
monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-e6824b03-0bc9-4ad3-84e3-ec143002d0e4   10Gi   RWO   Delete   Bound   monitoring/prometheus-k8s-db-prometheus-k8s-1   prometheus-data-db   36s
[root@k8s-master manifests]# kubectl get pvc -n monitoring
NAME                                 STATUS   VOLUME                                                                                   CAPACITY   ACCESS MODES   STORAGECLASS         AGE
prometheus-k8s-db-prometheus-k8s-0   Bound    monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-f318725c-a645-40a6-ba9f-01c274c0e603   10Gi       RWO            prometheus-data-db   42s
prometheus-k8s-db-prometheus-k8s-1   Bound    monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-e6824b03-0bc9-4ad3-84e3-ec143002d0e4   10Gi       RWO            prometheus-data-db   42s
[root@k8s-master manifests]#


Now if we look at the Prometheus Pod's data directory again, we can see that it is bound to a PVC object:

kubectl get pod prometheus-k8s-0 -n monitoring -o yaml
......
    volumeMounts:
    - mountPath: /etc/prometheus/config_out
      name: config-out
      readOnly: true
    - mountPath: /prometheus
      name: prometheus-k8s-db
......
  volumes:
......
  - name: prometheus-k8s-db
    persistentVolumeClaim:
      claimName: prometheus-k8s-db-prometheus-k8s-0
......

 

Now, even if the Pod is deleted, the data will no longer be lost.
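A quick way to verify this (a sketch against the setup above; the marker file name is arbitrary and the main container in the Pod is assumed to be named prometheus) is to write a marker into the data directory, delete the Pod, and check that the marker survives in the recreated Pod:

```shell
# Write a marker file into the persisted data directory.
kubectl exec -n monitoring prometheus-k8s-0 -c prometheus -- touch /prometheus/persistence-test

# Delete the Pod; the StatefulSet recreates it and re-attaches the same PVC.
kubectl delete pod prometheus-k8s-0 -n monitoring
kubectl wait --for=condition=Ready pod/prometheus-k8s-0 -n monitoring --timeout=120s

# The marker file should still exist in the new Pod.
kubectl exec -n monitoring prometheus-k8s-0 -c prometheus -- ls /prometheus/persistence-test
```

If the final command lists the file, the data directory is indeed backed by the PVC rather than an emptyDir.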

