Chapter 10. Kubernetes Advanced: k8s Cluster Resource Monitoring

 

  References:
      https://www.jianshu.com/p/91f9d9ec374f
      https://www.cnblogs.com/zealousness/p/11174365.html

  1. Kubernetes monitoring metrics

  Cluster monitoring

  • Node resource utilization
  • Node count
  • Running Pods

  Pod monitoring

  • Kubernetes metrics
  • Container metrics
  • Application metrics

  2. Kubernetes monitoring solutions

Solution                               Alerting  Characteristics                            Typical use
Zabbix                                 Y         Requires heavy customization               Most internet companies
open-falcon                            Y         Fine-grained modules, relatively complex   System and application monitoring
cAdvisor+Heapster+InfluxDB+Grafana     Y         Simple and easy to use                     Container monitoring
cAdvisor/exporter+Prometheus+Grafana   Y         Highly extensible                          Full coverage: containers, applications, hosts

 

  3. Heapster + InfluxDB + Grafana

  Monitoring architecture

   1. cAdvisor is Google's open-source service dedicated to monitoring containers, and it is already built into k8s. (data collection agent)

Kubernetes ships with a well-known monitoring agent, cAdvisor. It runs on every Kubernetes node and collects monitoring data for the host and its containers (CPU, memory, filesystem, network, uptime). In newer versions, cAdvisor's functionality has been merged into the kubelet component, and each node's data can be reached directly over the web.
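As a sketch of that direct access: on older kubelets the cAdvisor web UI was exposed on port 4194 (configurable via the kubelet's `--cadvisor-port` flag, and removed in later releases). The node IP below is a placeholder.

```shell
NODE_IP=192.168.1.10                      # hypothetical node address; substitute one of your nodes
CADVISOR_URL="http://${NODE_IP}:4194/"    # cAdvisor UI on older kubelets (--cadvisor-port, default 4194)
echo "$CADVISOR_URL"                      # open this in a browser to see per-container stats
```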
  2. Heapster is a container cluster monitoring and performance analysis tool with native support for Kubernetes and CoreOS. Note, however, that Heapster has since been retired! (data aggregation)

Heapster is a collector: it gathers the cAdvisor data from every node and aggregates it, and it can also group metrics by Kubernetes resource type, such as Pod or Namespace, reporting CPU, memory, network, and disk metrics for each. The default aggregation interval is one minute. The data can also be exported to a third-party backend such as InfluxDB.
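To make the storage side concrete, each metric Heapster aggregates ends up in InfluxDB as one line-protocol point. The sketch below builds such a point by hand; the measurement and tag names mirror the Heapster sink's conventions (`cpu/usage_rate`, `namespace_name`, `pod_name`, default database `k8s`), but the pod name, value, and timestamp are made up.

```shell
# One InfluxDB 1.x line-protocol point: measurement,tags fields timestamp
MEASUREMENT="cpu/usage_rate"
TAGS="namespace_name=default,pod_name=web-0"   # hypothetical pod
VALUE=42                                       # millicores, illustrative
TIMESTAMP=1500000000000000000                  # nanoseconds since epoch
LINE="${MEASUREMENT},${TAGS} value=${VALUE} ${TIMESTAMP}"
echo "$LINE"
# On a real cluster you could write such a point yourself through the NodePort:
#   curl -XPOST "http://<node-ip>:31001/write?db=k8s" --data-binary "$LINE"
```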

  3. InfluxDB is an open-source time-series database. (data storage)

  4. Grafana is an open-source data visualization tool. (data presentation)

  Deploying the monitoring stack

  Since cAdvisor is already built into k8s, deploy the remaining components in this order: InfluxDB -> Heapster -> Grafana.
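That ordering can be scripted. The sketch below only prints the kubectl commands (drop the `echo` to execute them) and assumes the three manifest file names used in this chapter.

```shell
# Apply the manifests in dependency order: the sink first, then the
# collector that writes to it, then the dashboard that reads from it.
for f in influxdb.yaml heapster.yaml grafana.yaml; do
  echo "kubectl apply -f $f"
done
```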

  1. Deploy InfluxDB

# cat influxdb.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: influxdb
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        #image: k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
        image: statemood/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - nodePort: 31001
    port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
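Once the Service is up, InfluxDB is reachable through the NodePort on any node. A quick health check sketch (the node IP is a placeholder; `/ping` is InfluxDB 1.x's health endpoint, which answers HTTP 204 when the database is healthy):

```shell
NODE_IP=192.168.1.10                           # hypothetical; any node address works for a NodePort
PING_URL="http://${NODE_IP}:31001/ping"        # 31001 is the nodePort declared above
echo "curl -i $PING_URL"                       # drop the echo to run it; expect HTTP 204
```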

   2. Deploy Heapster

  Heapster first obtains the list of all nodes in the cluster from the apiserver, then pulls usage data from the kubelet on each node; the kubelet in turn gets its data from cAdvisor. Everything Heapster collects is pushed to its configured backend storage, and it also supports data visualization.
  Because Heapster reads from the apiserver, it must be authorized to do so. Here it is bound to cluster-admin, the cluster administrator role.
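Binding to cluster-admin works but grants far more than Heapster needs. As a least-privilege alternative, Kubernetes ships a built-in read-only `system:heapster` ClusterRole that can be bound instead. A sketch (verify that this role covers the stats endpoints your kubelet version exposes before relying on it):

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  kind: ClusterRole
  name: system:heapster   # built-in role scoped to roughly what Heapster needs
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
```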

# cat heapster.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

---

#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: heapster
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: registry.cn-hangzhou.aliyuncs.com/google-containers/heapster-amd64:v1.4.2
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086

---

apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
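To confirm Heapster is running and collecting, check its pod and try `kubectl top`, which on Kubernetes releases of this era was backed by Heapster. The sketch only prints the commands (drop the `echo`s on a real cluster):

```shell
CHECK_POD="kubectl get pods -n kube-system -l k8s-app=heapster"   # pod should be Running
CHECK_TOP="kubectl top node"                                      # served by Heapster on these releases
echo "$CHECK_POD"
echo "$CHECK_TOP"
```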

   3. Deploy Grafana

# cat grafana.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        #image: k8s.gcr.io/heapster-grafana-amd64:v4.4.3
        image: pupudaye/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  type: NodePort
  ports:
  - nodePort: 30108
    port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

   The NodePort is customized to 30108.

  Access Grafana through port 30108.

  Grafana ships with built-in cluster and pods dashboard templates.

 

   Pods

