3 -- Prometheus monitoring of etcd


I. Prometheus monitoring of etcd

Workflow

  • 1. Use an Endpoints object to record the addresses of the etcd members to be monitored

  • 2. Create a Service for the in-cluster ServiceMonitor to use

  • 3. Create a ServiceMonitor; scraping needs the etcd client certificates, which prometheus-k8s-0 will use

  • 4. Restart the Prometheus pods (prometheus-k8s-0) to load the new scrape target

1. Test that the etcd metrics endpoint is reachable

[root@k8s-master-01 ~]# curl -k --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key https://127.0.0.1:2379/metrics
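If the client certificate and key are accepted, etcd replies with plain-text Prometheus metrics. A heavily trimmed, illustrative example of the response (the exact metric set and values depend on your etcd version):

etcd_server_has_leader 1
etcd_server_leader_changes_seen_total 1
etcd_mvcc_db_total_size_in_bytes 3.2e+06
...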

2. Use an Endpoints object to record the etcd addresses to be monitored

Endpoints is a Kubernetes resource object, stored in etcd, that records the access addresses of all Pods backing a Service. The endpoint controller only creates an Endpoints object automatically when the Service defines a selector; without a selector no Endpoints object is generated, so we create one by hand.

Preparation

# Create a working directory
[root@k8s-master-01 /]# mkdir etcd-monitor
[root@k8s-master-01 /]# cd etcd-monitor/
[root@k8s-master-01 /etcd-monitor]# cat 1_etcd-endpoints.yaml 
kind: Endpoints
apiVersion: v1
metadata:
  namespace: kube-system
  name: etcd-moniter
  labels:
    k8s: etcd
subsets:
  - addresses:
      - ip: "192.168.15.31"
    ports:
      - port: 2379
        protocol: TCP
        name: etcd

Explanation

# An Endpoints object and a Service are associated by name (as long as the Service and the Endpoints share the same name and namespace, they are linked automatically)
# Create an Endpoints resource that records the etcd address(es)
kind: Endpoints
apiVersion: v1
metadata:
  # Namespace etcd lives in
  namespace: kube-system
  # Name of the Endpoints resource
  name: etcd-moniter
  # Labels
  labels:
    k8s: etcd
# Backend addresses
subsets:
  # Ready IP addresses
  - addresses:
      - ip: "192.168.15.31"
      - ip: "192.168.15.11"  # with multiple masters, list each one like this
    # Ports reachable on these IPs
    ports:
      - port: 2379
        # Port protocol
        protocol: TCP
        name: etcd

Result

[root@k8s-master-01 /etcd-monitor]# kubectl apply -f ./
endpoints/etcd-moniter created

[root@k8s-master-01 ~]# kubectl get endpoints -n kube-system 
NAME           ENDPOINTS                                             AGE
etcd-moniter   192.168.15.31:2379                                    5s
...
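As an optional sanity check, describing the object shows the registered address and the named port (output abbreviated; formatting may vary slightly with the kubectl version):

[root@k8s-master-01 ~]# kubectl describe endpoints etcd-moniter -n kube-system
Name:         etcd-moniter
Namespace:    kube-system
Labels:       k8s=etcd
Subsets:
  Addresses:  192.168.15.31
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    etcd  2379  TCP
...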

3. Create a Service for the in-cluster ServiceMonitor to use

A ServiceMonitor is a way of collecting metrics by going through a Service:

1. prometheus-operator can use a ServiceMonitor to automatically recognise Services that carry certain labels and scrape data from those Services.
2. ServiceMonitors themselves are also discovered automatically by prometheus-operator.
[root@k8s-master-01 /etcd-monitor]# vi 2_etcd-service.yaml
kind: Service
apiVersion: v1
metadata:
  namespace: kube-system
  name: etcd-moniter
  labels:
    k8s: etcd
spec:
  ports:
    - port: 2379
      targetPort: 2379
      name: etcd
      protocol: TCP
      
[root@k8s-master-01 /etcd-monitor]# kubectl apply -f 2_etcd-service.yaml 
service/etcd-moniter created

Explanation

kind: Service
apiVersion: v1
metadata:
  # Namespace
  namespace: kube-system
  # Must be identical to the Endpoints name above so the two are linked
  name: etcd-moniter
  # Labels
  labels:
    k8s: etcd
spec:
  ports:
    # Port the Service exposes
    - port: 2379
      # Port on the backend (etcd)
      targetPort: 2379
      name: etcd
      # Port protocol
      protocol: TCP

Check

[root@k8s-master-01 ~]# kubectl get svc -n kube-system 
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
etcd-moniter   ClusterIP   10.108.221.188   <none>        2379/TCP                       36s
...
[root@k8s-m-01 etcd-monitor]# kubectl describe svc -n kube-system etcd-moniter
Name:              etcd-moniter
Namespace:         kube-system
Labels:            k8s=etcd
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.110.236.118
IPs:               10.110.236.118
Port:              etcd  2379/TCP
TargetPort:        2379/TCP
Endpoints:         192.168.15.31:2379
Session Affinity:  None
Events:            <none>

[root@k8s-m-01 etcd-monitor]# kubectl get servicemonitors.monitoring.coreos.com -n monitoring  # the ServiceMonitors Prometheus currently knows about
NAME                      AGE
alertmanager              2d21h
coredns                   2d21h
grafana                   2d21h
kube-apiserver            2d21h
kube-controller-manager   2d21h
kube-scheduler            2d21h
kube-state-metrics        2d21h
kubelet                   2d21h
node-exporter             2d21h
prometheus                2d21h
prometheus-adapter        2d21h
prometheus-operator       2d21h

Test

[root@k8s-master-01 ~]# curl -k --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key https://10.110.236.118:2379/metrics
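Because this Service defines no selector, traffic sent to its ClusterIP is forwarded straight to the manually created Endpoints, which is why the same etcd client certificate works here. A quick, optional way to print the backend IP(s) the Service resolves to:

[root@k8s-master-01 ~]# kubectl get endpoints etcd-moniter -n kube-system -o jsonpath='{.subsets[*].addresses[*].ip}'
192.168.15.31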

4. Create the ServiceMonitor (scraping requires the etcd client certificates)

(Run step 5.1 first to create the certificate Secret.)

[root@k8s-master-01 /etcd-monitor]# vim 3_etcd-Moniter.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  labels:
    k8s: etcd
  name: etcd-monitor
  namespace: monitoring
spec:
  endpoints:
  - interval: 3s
    port: etcd
    scheme: https
    tlsConfig:
      caFile: /etc/prometheus/secrets/etcd-certs/ca.crt
      certFile: /etc/prometheus/secrets/etcd-certs/peer.crt
      keyFile: /etc/prometheus/secrets/etcd-certs/peer.key
      insecureSkipVerify: true
  selector:
    matchLabels:
      k8s: etcd
  namespaceSelector:
    matchNames:
      - "kube-system"
      
[root@k8s-master-01 /etcd-monitor]# kubectl apply -f 3_etcd-Moniter.yaml
servicemonitor.monitoring.coreos.com/etcd-monitor created

Explanation

kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  # Labels
  labels:
    k8s: etcd
  name: etcd-monitor
  # Placed in the same namespace as Prometheus so prometheus-operator can read and auto-discover it
  namespace: monitoring
spec:
  endpoints:
  # Scrape interval (3s is quite aggressive; 30s is a more typical value)
  - interval: 3s
    # Must match the port *name* defined on the Service/Endpoints (this field takes a name, not a number)
    port: etcd
    scheme: https
    # TLS client certificates used to scrape etcd
    tlsConfig:
      caFile: /etc/prometheus/secrets/etcd-certs/ca.crt # path inside the Prometheus container
      certFile: /etc/prometheus/secrets/etcd-certs/peer.crt
      keyFile: /etc/prometheus/secrets/etcd-certs/peer.key
      # Skip verification of the target's certificate
      insecureSkipVerify: true
  # Select the Service to scrape
  selector:
    # Labels on the Service
    matchLabels:  # must match the Service's labels exactly
      k8s: etcd
  # Namespace the Service lives in
  namespaceSelector:
    matchNames:
      - "kube-system"

Check

[root@k8s-master-01 ~]# kubectl get ServiceMonitor -n monitoring 
NAME                      AGE
etcd-monitor              3m3s

[root@k8s-m-01 etcd-monitor]# kubectl describe servicemonitors.monitoring.coreos.com -n monitoring etcd-monitor
... ...
  Selector:
    Match Labels:
      k8s:  etcd  # note: label keys are case-sensitive, the k must stay lowercase
Events:     <none>

5. Restart the Prometheus pods (prometheus-k8s-0) to load the new scrape target

1) Create a Secret holding the etcd certificates that Prometheus needs for scraping

[root@k8s-master-01 ~]# kubectl create secret generic etcd-certs -n monitoring --from-file=/etc/kubernetes/pki/etcd/ca.crt --from-file=/etc/kubernetes/pki/etcd/peer.crt --from-file=/etc/kubernetes/pki/etcd/peer.key

Check

[root@k8s-master-01 ~]# kubectl get secrets -n monitoring 
NAME                              TYPE                                  DATA   AGE
...
etcd-certs                        Opaque                                3      32s
...
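To confirm that all three files landed in the Secret, describing it should list ca.crt, peer.crt and peer.key under Data (output abbreviated; the byte sizes below are placeholders and will differ per cluster):

[root@k8s-master-01 ~]# kubectl describe secret etcd-certs -n monitoring
Name:         etcd-certs
Namespace:    monitoring
Type:         Opaque

Data
====
ca.crt:    <n> bytes
peer.crt:  <n> bytes
peer.key:  <n> bytes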

[root@k8s-m-01 etcd-monitor]# kubectl get svc -l k8s=etcd -n kube-system
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
etcd-moniter   ClusterIP   10.110.236.118   <none>        2379/TCP   24m

2) Modify the Prometheus custom resource YAML

[root@k8s-master-01 ~]# cd kube-prometheus-0.7.0/manifests/

Edit the manifest

[root@k8s-master-01 manifests]# vim prometheus-prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
    - name: alertmanager-main-etcd
      namespace: kube-system
      port: etcd
  image: quay.io/prometheus/prometheus:v2.22.1
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  probeNamespaceSelector: {}
  probeSelector: {}
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.22.1
  secrets:    # added: mount the etcd-certs Secret into the Prometheus pods
    - etcd-certs

Explanation

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  # Labels
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  # Alerting configuration (which Alertmanagers to send alerts to)
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
    - name: alertmanager-main-etcd
      namespace: kube-system
      port: etcd
  image: quay.io/prometheus/prometheus:v2.22.1
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  probeNamespaceSelector: {}
  probeSelector: {}
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.22.1
  # Secrets to mount; the Operator mounts each listed Secret at /etc/prometheus/secrets/<secret-name>/ inside the Prometheus pods, which is exactly where the ServiceMonitor's tlsConfig paths point
  secrets:
    - etcd-certs

Restart

[root@k8s-master-01 manifests]# kubectl apply -f prometheus-prometheus.yaml 
prometheus.monitoring.coreos.com/k8s unchanged
 
## Check
[root@k8s-master-01 manifests]# kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
...
prometheus-k8s-0                       2/2     Running   5          47h
prometheus-k8s-1                       2/2     Running   5          47h
...

# Exec into the container and confirm the certificates are mounted
[root@k8s-m-01 manifests]# kubectl exec -it -n monitoring prometheus-k8s-0 -- sh
Defaulted container "prometheus"  out of: prometheus, config-reloader
/prometheus $ cd /etc/prometheus/
/etc/prometheus $ ls
certs              console_libraries  prometheus.yml     secrets
config_out         consoles           rules
/etc/prometheus $ cd secrets/
/etc/prometheus/secrets $ ls
etcd-certs
/etc/prometheus/secrets $ cd etcd-certs/
/etc/prometheus/secrets/etcd-certs $ ls
ca.crt    peer.crt  peer.key
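Once the certificates are visible inside the pod, you can also ask Prometheus itself whether the etcd target was picked up, for example through a port-forward to the prometheus-k8s Service and the targets API (the grep pattern and the pool name shown are illustrative; prometheus-operator names the scrape pool after the ServiceMonitor):

[root@k8s-m-01 manifests]# kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090 &
[root@k8s-m-01 manifests]# curl -s http://127.0.0.1:9090/api/v1/targets | grep -o '"scrapePool":"[^"]*etcd[^"]*"'
"scrapePool":"serviceMonitor/monitoring/etcd-monitor/0"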

Check

1. Open the Prometheus web UI:
    http://linux.prometheus.com:31197/
2. Query a metric to confirm data is being scraped (see the example queries below):
    promhttp_metric_handler_requests_total{code="200"}
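Beyond the generic handler metric above, a few etcd-specific expressions are handy for confirming that real etcd data is flowing in (illustrative PromQL; the metric names are standard etcd metrics, adjust the time ranges to taste):

# 1 when this member currently sees a leader
etcd_server_has_leader

# leader changes over the last hour (normally 0)
increase(etcd_server_leader_changes_seen_total[1h])

# on-disk database size per member, in bytes
etcd_mvcc_db_total_size_in_bytes

# 99th percentile WAL fsync latency
histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))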

II. Grafana dashboards

1. Pick a dashboard (for example, browse the public etcd dashboards on grafana.com)

2. In Grafana, import the dashboard by its ID

Import

