Deploying Prometheus on Kubernetes 1.11.1


Prerequisite: a working Kubernetes cluster, version 1.11.1, deployed with kubeadm.

Two virtual machines: master: 172.17.1.36, node1: 172.17.1.40

Prometheus is the monitoring system for Kubernetes used here; it can collect both the core k8s metrics and custom metrics.

Official address: https://github.com/kubernetes/kubernetes/tree/release-1.11/cluster/addons/prometheus

Step 1:

Download all of the official YAML files:

for i in alertmanager-configmap.yaml alertmanager-deployment.yaml alertmanager-pvc.yaml alertmanager-service.yaml kube-state-metrics-deployment.yaml kube-state-metrics-rbac.yaml kube-state-metrics-service.yaml node-exporter-ds.yml node-exporter-service.yaml prometheus-configmap.yaml prometheus-rbac.yaml prometheus-service.yaml prometheus-statefulset.yaml ;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.11/cluster/addons/prometheus/$i;done

The files fall into 4 components: alertmanager, kube-state-metrics, node-exporter, and prometheus. It is easiest to create four folders and sort the corresponding YAML files into them.

I install all of Prometheus into a single namespace, so first create the namespace prom: kubectl create ns prom

Install node-exporter first. It collects node-level data, which Prometheus then scrapes.

The official node-exporter YAML files install into the kube-system namespace, so they need a small modification.

node-exporter-ds.yml is as follows:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter 
  namespace: prom   # changed to prom
  labels:
    k8s-app: node-exporter 
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.15.2
spec:
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        k8s-app: node-exporter 
        version: v0.15.2
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
#      priorityClassName: system-node-critical    # commented out; creation fails otherwise -- I have not found the exact cause yet
      tolerations:                    # add a toleration for the master taint, otherwise no pod is scheduled on the master node
      - key: node-role.kubernetes.io/master
      containers:
        - name: prometheus-node-exporter
          image: "prom/node-exporter:v0.15.2"
          imagePullPolicy: "IfNotPresent"
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
          ports:
            - name: metrics
              containerPort: 9100
              hostPort: 9100
          volumeMounts:
            - name: proc
              mountPath: /host/proc
              readOnly:  true
            - name: sys
              mountPath: /host/sys
              readOnly: true
          resources:
            limits:
              cpu: 10m
              memory: 50Mi
            requests:
              cpu: 10m
              memory: 50Mi
      hostNetwork: true
      hostPID: true
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys

node-exporter-service.yaml only needs its namespace changed. Then apply both files: kubectl apply -f node-exporter-ds.yml -f node-exporter-service.yaml

As shown above, the Pod and Service are created.
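To confirm node-exporter is up, a quick check can be run on the master (a sketch; 172.17.1.40 is the node IP from the setup above, and since the DaemonSet uses hostNetwork/hostPort, the metrics port is reachable directly on each node):

```shell
# DESIRED in the DaemonSet output should equal the number of nodes
kubectl get ds,pods -n prom -o wide

# node-exporter exposes metrics on each node's port 9100
curl -s http://172.17.1.40:9100/metrics | head -5
```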

Step 2: deploy Prometheus

Prometheus needs to persist its data. The official YAML files call for a PV of about 18Gi; here I use NFS-backed storage and set the PV size to 20Gi.

Install NFS on the master node:

yum install nfs-utils

vim /etc/exports

Create the export directory: mkdir /data

systemctl start nfs && systemctl enable nfs

Note: the node(s) must also run yum install nfs-utils, otherwise mounting will fail because the NFS filesystem type is missing.
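For reference, a minimal /etc/exports entry for the setup above (the network range is an assumption; adjust it to the subnet your nodes actually live on):

```
# /etc/exports -- export /data to the nodes' network
/data 172.17.1.0/24(rw,no_root_squash)
```

After editing, run exportfs -arv on the master to re-export, and showmount -e 172.17.1.36 from a node to confirm the export is visible.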

Create the PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  # note: PersistentVolumes are cluster-scoped, so they do not take a namespace
  labels:
    name: pv01
spec:
  nfs:
    path: /data/
    server: 172.17.1.36
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 20Gi

kubectl apply -f pro_pv.yaml

After applying, the PV shows as Bound (the original post included a screenshot here).

Installing Prometheus itself involves the following 4 files:

prometheus-configmap.yaml and prometheus-rbac.yaml only need their namespace changed. In prometheus-service.yaml I added a service type so Prometheus can be reached from outside the cluster, as follows:

kind: Service
apiVersion: v1
metadata:
  name: prometheus
  namespace: prom  # changed here
  labels:
    kubernetes.io/name: "Prometheus"
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort  # added this type
  ports:
    - name: http
      port: 9090
      protocol: TCP
      targetPort: 9090
  selector:
    k8s-app: prometheus

prometheus-statefulset.yaml is as follows:

 

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: prom    # changed to prom
  labels:
    k8s-app: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v2.2.1
spec:
  serviceName: "prometheus"
  replicas: 1
  podManagementPolicy: "Parallel"
  updateStrategy:
    type: "RollingUpdate"
  selector:
    matchLabels:
      k8s-app: prometheus
  template:
    metadata:
      labels:
        k8s-app: prometheus
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
#      priorityClassName: system-cluster-critical    # commented out, same as before
      serviceAccountName: prometheus
      initContainers:
      - name: "init-chown-data"
        image: "busybox:latest"
        imagePullPolicy: "IfNotPresent"
        command: ["chown", "-R", "65534:65534", "/data"]
        volumeMounts:
        - name: prometheus-data
          mountPath: /data
          subPath: ""
      containers:
        - name: prometheus-server-configmap-reload
          image: "jimmidyson/configmap-reload:v0.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - --volume-dir=/etc/config
            - --webhook-url=http://localhost:9090/-/reload
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
              readOnly: true
          resources:
            limits:
              cpu: 10m
              memory: 10Mi
            requests:
              cpu: 10m
              memory: 10Mi
        - name: prometheus-server
          image: "prom/prometheus:v2.2.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - --config.file=/etc/config/prometheus.yml
            - --storage.tsdb.path=/data
            - --web.console.libraries=/etc/prometheus/console_libraries
            - --web.console.templates=/etc/prometheus/consoles
            - --web.enable-lifecycle
          ports:
            - containerPort: 9090
          readinessProbe:
            httpGet:
              path: /-/ready
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /-/healthy
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
          # based on 10 running nodes with 30 pods each
          resources:
            limits:
              cpu: 200m
              memory: 1000Mi
            requests:
              cpu: 200m
              memory: 1000Mi
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: prometheus-data
              mountPath: /data
              subPath: ""
      terminationGracePeriodSeconds: 300
      volumes:
        - name: config-volume
          configMap:
            name: prometheus-config
  volumeClaimTemplates:
    - metadata:
        name: prometheus-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: "16Gi"

Apply these 4 files: kubectl apply -f .

Prometheus can now be accessed from outside the cluster at 172.17.1.40:30793 (the NodePort assigned to the service).
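The NodePort (30793 above) is assigned randomly; a quick way to look it up on the master (sketch, requires the running cluster):

```shell
kubectl get svc prometheus -n prom
# the PORT(S) column shows something like 9090:30793/TCP --
# the second number is the NodePort to use from outside
```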

 

Prometheus ships with its own web UI, which also offers many built-in query expressions.
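A couple of example expressions to try in the UI. These assume node-exporter v0.15.x metric names (the CPU metric was renamed node_cpu_seconds_total in later releases):

```
# rough CPU usage per node, 5-minute rate
100 - avg by (instance) (irate(node_cpu{mode="idle"}[5m])) * 100

# free memory per node, in MiB
node_memory_MemFree / 1024 / 1024
```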

Step 3: deploy kube-state-metrics. This component listens to the Kubernetes API server and exposes the state of cluster objects (Deployments, Pods, nodes, and so on) as metrics that Prometheus can scrape.

 

Apply these 3 files. The change needed is again the namespace, and one line in kube-state-metrics-deployment.yaml has to be commented out (the original post showed it in a screenshot).

Check the pod status; once the pods are Running, the installation succeeded.

Step 4: install prometheus-adapter. This component exposes the metrics collected by Prometheus through the Kubernetes custom metrics API.

GitHub address: https://github.com/DirectXMan12/k8s-prometheus-adapter

Download these files and change the namespace in them where needed.

Before applying them, a secret has to be created that the manifests reference; the certificates in it must be signed by the Kubernetes cluster CA.

Create the certificates in the /etc/kubernetes/pki directory:


(umask 077; openssl genrsa -out serving.key 2048)                         # create the private key
openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"    # generate the certificate signing request
openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650    # sign the certificate
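If you want to rehearse this flow outside the cluster first, the same three commands work against a throwaway CA. The throwaway CA below is only a stand-in for the cluster's ca.crt/ca.key; on the real master you would skip that step:

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"

# stand-in for the cluster CA -- skip this step on the real master
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=kubernetes"

# the same three steps as above
(umask 077; openssl genrsa -out serving.key 2048)
openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"
openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key \
  -CAcreateserial -out serving.crt -days 3650

openssl verify -CAfile ca.crt serving.crt   # prints: serving.crt: OK
```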


Create the secret (in the prom namespace, since the adapter manifests were changed to use it):
kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom

Apply these files: kubectl apply -f .

If the custom.metrics.k8s.io API group shows up, the installation succeeded.

You can start a proxy to test it, as follows:

kubectl proxy --port=8080

curl http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1

 

Step 5: deploy Grafana

The YAML file is as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: prom
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
#        - name: INFLUXDB_HOST
#          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: prom
spec:
  type: NodePort
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

This uses the image k8s.gcr.io/heapster-grafana-amd64:v5.0.4, which may require a proxy to pull from behind the firewall.

Apply the Grafana file.

Once the pod is Running, the installation succeeded and the Grafana UI is reachable (the original post showed screenshots here).
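Once Grafana is up, point it at the in-cluster Prometheus service. You can add the data source in the UI, or, since Grafana 5 supports provisioning, drop a file like the sketch below under /etc/grafana/provisioning/datasources/ inside the container. The file location and mounting are assumptions; only the URL comes from the services created above:

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # in-cluster DNS name of the prometheus service in the prom namespace
    url: http://prometheus.prom.svc.cluster.local:9090
    isDefault: true
```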

Summary: deploying Prometheus takes quite a few steps. I followed the files provided on GitHub and ran into all kinds of problems along the way, enough to make me want to spit blood. When an error occurs, check the logs first; most issues can be resolved that way, and the rest are usually version mismatches. I plan to write up a troubleshooting post later so I don't forget how this was done, and to tidy up the formatting of this one.

 

