Kubernetes Series (6): Deploying Prometheus Monitoring on Kubernetes


1. Create a Namespace

Create a new yaml file named monitor-namespace.yaml with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring

Run the following command to create the monitoring namespace:

kubectl create -f monitor-namespace.yaml
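
Optionally, confirm that the namespace exists (you could also create it directly with kubectl create namespace monitoring instead of applying the yaml file):

kubectl get namespace monitoring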

2. Create a ClusterRole

You need to grant cluster-wide read permissions to the default ServiceAccount in the namespace created above, so that Prometheus can discover scrape targets through the Kubernetes API.

Create a new yaml file named cluster-role.yaml with the following content (note: the rbac.authorization.k8s.io/v1beta1 API was removed in Kubernetes 1.22; on newer clusters change apiVersion to rbac.authorization.k8s.io/v1, the rest of the manifest stays the same):

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring

Run the following command to create them:

kubectl create -f cluster-role.yaml
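
As an optional sanity check, you can impersonate the default ServiceAccount in the monitoring namespace and ask the API server whether the permissions are in place; both commands should answer yes:

kubectl auth can-i list pods --as=system:serviceaccount:monitoring:default
kubectl auth can-i get /metrics --as=system:serviceaccount:monitoring:default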

 

3. Create a ConfigMap

We need to create a ConfigMap to hold the configuration that the Prometheus container will use later. This configuration dynamically discovers pods and running services in the Kubernetes cluster.
Create a new yaml file named config-map.yaml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https

      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics
      
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name

      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
      
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_name

Run the following command to create it:

kubectl create -f config-map.yaml -n monitoring
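
The kubernetes-pods and kubernetes-service-endpoints jobs above only keep targets whose pod or Service carries the prometheus.io/scrape annotation. As a hypothetical example (my-app, port 8080 and the /metrics path are placeholders, not part of this article's manifests), a Service you want scraped would be annotated like this:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080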

 

4. Create Prometheus as a Deployment

Create a new yaml file named prometheus-deployment.yaml with the following content (note: the extensions/v1beta1 Deployment API was removed in Kubernetes 1.16; on newer clusters change apiVersion to apps/v1 and add a spec.selector that matches the pod template labels):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:v2.3.2
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf  
        - name: prometheus-storage-volume
          emptyDir: {}

Deploy it with the following command:

kubectl create -f prometheus-deployment.yaml --namespace=monitoring

After the deployment completes, you can see the Prometheus Deployment and its pod in the Kubernetes Dashboard.
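
If you are not using the Kubernetes Dashboard, the rollout can also be checked from the command line (an optional check):

kubectl rollout status deployment/prometheus-deployment -n monitoring
kubectl get pods -n monitoring -l app=prometheus-server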

5. Connect to Prometheus

There are two ways to do this:

1. Port-forward to the Prometheus pod with the kubectl command (a minimal sketch follows this list).

2. Expose a Service for the Prometheus pod; this is the recommended approach.
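
For the first option, a quick sketch (the pod name is a placeholder; substitute the name reported by kubectl get pods):

kubectl get pods -n monitoring
kubectl port-forward <prometheus-pod-name> 9090:9090 -n monitoring

Prometheus is then reachable at http://localhost:9090 on the machine running kubectl.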
For the second, recommended option, first create a new yaml file named prometheus-service.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector: 
    app: prometheus-server
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090 
      nodePort: 30909

Run the following command to create the Service:

kubectl create -f prometheus-service.yaml --namespace=monitoring

Check the status of the Service with the following command; you can see that the exposed NodePort is 30909:

kubectl get svc -n monitoring
NAME                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
prometheus-service   NodePort   10.101.186.82    <none>        9090:30909/TCP   100m

You can now open http://<VM IP>:30909 in a browser to reach the Prometheus web UI. Click Status –> Targets and you will see that all the Endpoints in the Kubernetes cluster have been discovered automatically and connected to Prometheus.

We can also view memory usage through the Graph view, for example:
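
Under the Graph tab you can query the cAdvisor metrics collected by the kubernetes-cadvisor job. Metric and label names vary with the Kubernetes/cAdvisor version, so treat the query below as a sketch rather than a guaranteed expression:

sum(container_memory_usage_bytes{image!=""}) by (pod_name)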

OK, at this point the Prometheus deployment is complete, but the raw data is not very intuitive to read, so we will use Grafana to build a friendlier monitoring dashboard.

 

6. Set up Grafana

Create the following yaml file: grafana-dashboard-provider.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-provider
  namespace: monitoring
data:
  default-dashboard.yaml: |
    - name: 'default'
      org_id: 1
      folder: ''
      type: file
      options:
        folder: /var/lib/grafana/dashboards

grafana.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
        - image: grafana/grafana:5.0.0
          name: grafana
          ports:
          - containerPort: 3000
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
          - name: grafana-persistent-storage
            mountPath: /var
          - name: grafana-dashboard-provider
            mountPath: /etc/grafana/provisioning/dashboards
      volumes:
      - name: grafana-dashboard-provider
        configMap:
          name: grafana-dashboard-provider
      - name: grafana-persistent-storage
        emptyDir: {}

grafana-service.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    name: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30300

Run the following commands to create them:

kubectl apply -f grafana-dashboard-provider.yaml 
kubectl apply -f grafana.yaml 
kubectl apply -f grafana-service.yaml

After the deployment completes, you can see the Grafana resources in the Kubernetes Dashboard.
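
If you prefer the command line, the same check can be done with kubectl (optional):

kubectl get pods -n monitoring -l app=grafana
kubectl get svc grafana -n monitoring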

 

Using the NodePort 30300 exposed by the Service, open http://<VM IP>:30300 in a browser and you will see the Grafana login page.

Log in with the default username and password (admin/admin).

Next, configure the data source:
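
In the Add data source form, choose the Prometheus type and point it at the Service created earlier. Because Grafana runs in the same monitoring namespace, the in-cluster DNS name of the Service works; the values below are a sketch based on the manifests in this article:

Name:   Prometheus
Type:   Prometheus
URL:    http://prometheus-service.monitoring.svc:9090
Access: proxy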

 

 然后導入Dashboards:

Upload the JSON file:

grafana-dashboard.json (Baidu Cloud link: https://pan.baidu.com/s/1YtfD3s1U_d6Yon67qjihmw, password: n25f)

然后點擊導入:

然后就可以看到Kubernetes集群的監控數據了:

 

There is also a dashboard for resource usage statistics:

kubernetes-resources-usage-dashboard.json

 

OK, that completes the Prometheus monitoring setup.

 

Reference: https://www.jianshu.com/p/c2e549480c50

