Monitoring Tools
cAdvisor:
The recommended tool for monitoring containers, open-sourced by Google. In Kubernetes there is no need to install it separately: cAdvisor is built into the kubelet and can be used directly. It mainly covers container CPU, memory, disk, network, load, and similar metrics.
node-exporter:
A host-level monitoring tool that covers the host's CPU, memory, disk, network, availability, and other metrics.
kube-state-metrics:
It listens to the API Server and generates state metrics about resource objects, for example: how many Pod replicas a Deployment has scheduled, how many are currently available, how many Pods are in the Running, stopped, or terminated state, how many times a Pod has restarted, and so on.
Note that kube-state-metrics only exposes metrics data; it does not store it, so a backend database is needed for storage. Also, the names and labels of the metrics kube-state-metrics collects are not fixed and may change, so configure them flexibly according to your actual environment.
metrics-server: one of the core monitoring components
metrics-server is an aggregator of cluster-wide resource usage data. It implements the Resource Metrics API and collects metrics from the Summary API exposed by the kubelet. As of Kubernetes 1.16, Heapster, the former cluster resource monitoring component, has been deprecated; metrics-server is used instead.
It collects resource metrics from the kubelet, aggregates them (relying on kube-aggregator), and exposes them through the Kubernetes API server via the Metrics API (/apis/metrics.k8s.io/). metrics-server only stores the latest metric values (CPU/Memory) and does not forward metrics to any third-party sink. Using metrics-server requires some special cluster configuration that is not set up by default at install time, so your cluster must meet the following requirements (a quick sanity check follows the list):
- The kube-apiserver must be able to reach metrics-server.
- The aggregation layer must be enabled on the kube-apiserver.
- Components must have authentication configured (kubectl-style) and be bound to metrics-server.
- Pod/Node metrics must be exposed by the kubelet through the Summary API.
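A quick way to sanity-check the first two requirements (a sketch; assumes a kubeadm-style control plane where the apiserver flags are visible in the process list):

# The requestheader-* flags indicate the aggregation layer is configured
ps aux | grep kube-apiserver | tr ' ' '\n' | grep -- --requestheader
# Once metrics-server is deployed, its APIService registration appears here
kubectl get apiservices | grep metrics.k8s.io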
To enable the aggregation layer on the kube-apiserver, a few kube-apiserver flags need to be set; see the official documentation on enabling the aggregation layer:
--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
Several Kubernetes components depend on the Resource Metrics API, such as kubectl top, HPA, and VPA. Without the Resource Metrics API endpoint, these components cannot function.
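For example, once metrics-server is running, the Resource Metrics API can be queried directly; this is exactly the data that kubectl top renders:

# Raw Metrics API queries through the API server
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"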
Deploying metrics-server in a Kubernetes cluster
# mkdir ./metrics-server
# cd $_
# for file in aggregated-metrics-reader.yaml auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml; do wget https://raw.githubusercontent.com/kubernetes-sigs/metrics-server/master/deploy/kubernetes/$file; done
Edit the metrics-server-deployment.yaml manifest:
containers:
- name: metrics-server
  image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
  command:
  - /metrics-server
  - --v=4    # verbose logging for debugging; 2 is fine for normal use
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  imagePullPolicy: Always
Apply the modified metrics-server manifests:
# kubectl apply -f .
Verification
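A minimal check, assuming the Pod came up cleanly (the pitfalls section later in this article covers the common failures):

# Is the metrics-server Pod running?
kubectl get pods -n kube-system | grep metrics-server
# Does the Metrics API answer? kubectl top consumes exactly this data
kubectl top node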
Third-party dedicated exporters:
There are also many purpose-built exporters, such as the MySQL exporter, the Redis exporter, and many other Prometheus exporters.
cAdvisor
Overview
Prometheus offers several ways to monitor Docker containers, including a number of custom exporters. In practice those exporters are rarely used; the recommended tool is Google's cAdvisor, an open-source project dedicated to monitoring and analyzing container resource usage and performance. cAdvisor can be deployed as a standalone container to collect metrics, but in a Kubernetes cluster there is no need to install it separately: it is built into the kubelet and collects all container-related metrics out of the box. The data can be scraped either through the API server proxy at /api/v1/nodes/[node name]/proxy/metrics/cadvisor or directly from the kubelet at https://127.0.0.1:10250/metrics/cadvisor.
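For instance, the API server proxy path above can be queried without touching the node directly; master01 is simply the node name from this demo cluster:

# Fetch cAdvisor metrics for one node through the API server proxy
kubectl get --raw "/api/v1/nodes/master01/proxy/metrics/cadvisor" | head -n 20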
Below we demonstrate usage in Kubernetes. Because the kubelet serves over HTTPS, an authenticated identity is needed to access it, so we create a ServiceAccount:
# Create a dedicated monitoring namespace: monitor
[root@master01 ~]# kubectl create ns monitor
namespace/monitor created
# Create a ServiceAccount
[root@master01 ~]# kubectl create serviceaccount monitor -n monitor
serviceaccount/monitor created
# Look at the secret generated for the new ServiceAccount
[root@master01 ~]# kubectl get secret -n monitor
NAME                  TYPE                                  DATA   AGE
default-token-kdrzm   kubernetes.io/service-account-token   3      34s
monitor-token-2ktr2   kubernetes.io/service-account-token   3      18s
[root@master01 ~]#
# Bind the SA monitor to the highest cluster role
[root@master01 ~]# kubectl create clusterrolebinding monitor-cluster -n monitor --clusterrole=cluster-admin --serviceaccount=monitor:monitor
clusterrolebinding.rbac.authorization.k8s.io/monitor-cluster created
Verification
Use the token of the ServiceAccount monitor created above to access the kubelet on port 10250:
[root@master01 ~]# kubectl describe secret monitor-token-2ktr2 -n monitor
Name:         monitor-token-2ktr2
Namespace:    monitor
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: monitor
              kubernetes.io/service-account.uid: 718326e6-57ec-490c-9fcb-60698acca518
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlZ2bGJjaEN2MjFwazRmLUNWdkxBYVoxUHBleTBCUFBzWW0xU25uMGM1Y3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im1vbml0b3ItdG9rZW4tMmt0cjIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibW9uaXRvciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjcxODMyNmU2LTU3ZWMtNDkwYy05ZmNiLTYwNjk4YWNjYTUxOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptb25pdG9yOm1vbml0b3IifQ.cVml5Of1fZxyv-hRUKnqWWNK_52_btbdISvmP1Fw6Um-D9kqq5CieymC4f5KHVdxdJnA_-54ih3No5VUfetefBryh06yX_Qr01k0TGKKU_MwXcTgKgKs1Ydet7cS3VTBgZHNERdvHmK_phSnwEA87zJUkQNIMWPjTzsAUVlk0nve60MF-EohI_RqxILntlSKRpI5X5WG1p_IT7NebA5UYeKDYoabI9-YqoEPQd6XQ6Lfc5nf_tC1gUMExyaczVZTrsxjnpsZl5cFpAGg1b4NNixTLRbqWdeuu1uV5i_WJTlYMsfPNCvb2eP8KC9d0DE8UMSDNMwrehYyrmviAGqKVQ
[root@master01 ~]#
Access port 10250 exposed by the kubelet:
[root@master01 ~]# curl https://127.0.0.1:10250/metrics/cadvisor -k -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlZ2bGJjaEN2MjFwazRmLUNWdkxBYVoxUHBleTBCUFBzWW0xU25uMGM1Y3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im1vbml0b3ItdG9rZW4tMmt0cjIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibW9uaXRvciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjcxODMyNmU2LTU3ZWMtNDkwYy05ZmNiLTYwNjk4YWNjYTUxOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptb25pdG9yOm1vbml0b3IifQ.cVml5Of1fZxyv-hRUKnqWWNK_52_btbdISvmP1Fw6Um-D9kqq5CieymC4f5KHVdxdJnA_-54ih3No5VUfetefBryh06yX_Qr01k0TGKKU_MwXcTgKgKs1Ydet7cS3VTBgZHNERdvHmK_phSnwEA87zJUkQNIMWPjTzsAUVlk0nve60MF-EohI_RqxILntlSKRpI5X5WG1p_IT7NebA5UYeKDYoabI9-YqoEPQd6XQ6Lfc5nf_tC1gUMExyaczVZTrsxjnpsZl5cFpAGg1b4NNixTLRbqWdeuu1uV5i_WJTlYMsfPNCvb2eP8KC9d0DE8UMSDNMwrehYyrmviAGqKVQ" | more
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
# HELP cadvisor_version_info A metric with a constant '1' value labeled by kernel version, OS version, docker version, cadvisor version & cadvisor revision.
# TYPE cadvisor_version_info gauge
cadvisor_version_info{cadvisorRevision="",cadvisorVersion="",dockerVersion="19.03.8",kernelVersion="3.10.0-1062.12.1.el7.x86_64",osVersion="CentOS Linux 7 (Core)"} 1
# HELP container_cpu_load_average_10s Value of container cpu load average over the last 10 seconds.
# TYPE container_cpu_load_average_10s gauge
container_cpu_load_average_10s{container="",id="/",image="",name="",namespace="",pod=""} 0 1585634068599
container_cpu_load_average_10s{container="",id="/kubepods",image="",name="",namespace="",pod=""} 0 1585634068611
container_cpu_load_average_10s{container="",id="/kubepods/besteffort",image="",name="",namespace="",pod=""} 0 1585634073752
...
The operations above show that container metrics are now accessible. There are many of them, and each metric is preceded by two comment lines, for example:
# HELP container_cpu_load_average_10s Value of container cpu load average over the last 10 seconds.
# TYPE container_cpu_load_average_10s gauge
The first line explains what the metric measures;
The second line gives the metric type: gauge, histogram, summary, counter, and so on.
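As a quick illustration, the # TYPE lines can be used to tally how many metrics of each type the endpoint exposes (a sketch; the secret name monitor-token-2ktr2 is the one generated above and will differ in your cluster):

# Pull the ServiceAccount token and count metric types exposed by cAdvisor
TOKEN=$(kubectl -n monitor get secret monitor-token-2ktr2 -o jsonpath='{.data.token}' | base64 -d)
curl -sk -H "Authorization: Bearer $TOKEN" https://127.0.0.1:10250/metrics/cadvisor \
  | awk '/^# TYPE/ {print $4}' | sort | uniq -c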
node-exporter
Installation
Here node-exporter is deployed as a Pod using a DaemonSet, which is easy to maintain and guarantees one instance on every Kubernetes node. The manifest is as follows:
[root@master01 monitor]# cat node-exporter.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:latest
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '"^/(sys|proc|dev|host|etc)($|/)"'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /
hostNetwork is set to true here, so the Pod uses the host's network namespace and node-exporter listens on port 9100 on the host itself.
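Because of hostNetwork, you can confirm on any node that 9100 is bound by the host itself (a quick check once the DaemonSet below is running; assumes ss is available):

# node_exporter should show up as the listener on host port 9100
ss -tlnp | grep 9100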
Verification
# Create the node-exporter DaemonSet
[root@master01 monitor]# kubectl apply -f node-exporter.yaml
daemonset.apps/node-exporter created
[root@master01 monitor]#
# Verify
[root@master01 monitor]# curl http://127.0.0.1:9100/metrics | more
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 6
Check the Pods
[root@master01 monitor]# kubectl get pods -n monitor -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-c67rd   1/1     Running   0          11m   172.31.117.228   node01     <none>           <none>
node-exporter-jrzfx   1/1     Running   0          11m   172.31.117.227   master03   <none>           <none>
node-exporter-mqsw5   1/1     Running   0          11m   172.31.117.225   master01   <none>           <none>
node-exporter-zhnl4   1/1     Running   0          11m   172.31.117.226   master02   <none>           <none>
As shown above, host CPU, memory, load, network traffic, file system, and other metrics are now collected on every node, ready for Prometheus to scrape later.
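As a sketch of the Prometheus side (covered later), the four nodes above could be scraped with a static job like the one below; the job name is illustrative, the target IPs come from the kubectl output above, and a file-based Prometheus configuration is assumed:

# Append a static node-exporter job under the scrape_configs: section of prometheus.yml
cat >> prometheus.yml <<'EOF'
  - job_name: 'node-exporter'
    static_configs:
    - targets: ['172.31.117.225:9100', '172.31.117.226:9100', '172.31.117.227:9100', '172.31.117.228:9100']
EOF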
kube-state-metrics
kube-state-metrics generates metrics about resource objects by listening to the kube-apiserver, mainly for Node, Pod, Service, Endpoint, Namespace, and similar resources. Note that kube-state-metrics only exposes the metrics and does not store them; Prometheus can later scrape and persist this data. Its focus is the metadata of workload resources.
Here, too, a ServiceAccount is needed, together with the RBAC bindings that authorize it.
[root@master01 monitor]# cat kube-state-metrics-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "services", "resourcequotas", "replicationcontrollers", "limitranges", "persistentvolumeclaims", "persistentvolumes", "namespaces", "endpoints"]
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources: ["daemonsets", "deployments", "replicasets"]
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources: ["cronjobs", "jobs"]
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system
Create the Deployment and Service manifests:
[root@master01 monitor]# cat kube-state-metrics-deployment-svc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: quay.io/coreos/kube-state-metrics:v1.9.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: kube-state-metrics
  namespace: kube-system
  labels:
    app: kube-state-metrics
spec:
  ports:
  - name: kube-state-metrics
    port: 8080
    protocol: TCP
  selector:
    app: kube-state-metrics
Deploy and check:
[root@master01 monitor]# kubectl apply -f kube-state-metrics-rbac.yaml
serviceaccount/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
[root@master01 monitor]# kubectl apply -f kube-state-metrics-deployment-svc.yaml
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
[root@master01 monitor]#
# Check the deployment
[root@master01 monitor]# kubectl get clusterrolebinding | grep kube-state
kube-state-metrics    ClusterRole/kube-state-metrics    4m4s
[root@master01 monitor]#
[root@master01 monitor]# kubectl get pods -n kube-system | grep kube-state-metrics
kube-state-metrics-84b8477f75-65gcg   1/1     Running   0          4m26s
Verification
After Prometheus is installed later on, many of the metric names it scrapes will start with kube_; all of those are generated by kube-state-metrics.
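You do not have to wait for Prometheus to see them, though; a port-forward to the Service is enough (a sketch; kube_deployment_status_replicas is one of the standard kube-state-metrics metric families):

# Forward the kube-state-metrics Service to localhost and sample a kube_ metric
kubectl -n kube-system port-forward svc/kube-state-metrics 8080:8080 &
sleep 2    # give the port-forward a moment to establish
curl -s http://127.0.0.1:8080/metrics | grep '^kube_deployment_status_replicas' | head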
metrics-server
Prerequisites
In earlier versions, cluster monitoring was handled by Heapster, whose metrics drove HPA, VPA, kubectl top, and so on; in newer versions it has been replaced by metrics-server (the reasons are easy to Google). metrics-server is one of the core components of the Kubernetes monitoring stack: it collects Pod/Node resource metrics from the kubelet, aggregates them, and exposes them through the kube-apiserver via the Metrics API (/apis/metrics.k8s.io/). metrics-server only keeps the latest metric values (CPU/Memory) and does not forward them to third-party sinks. To use metrics-server, the cluster needs some configuration that is not applied by default:
1. the kube-apiserver must be able to reach metrics-server;
2. the aggregation layer must be enabled via kube-apiserver flags;
3. components must have authentication configured and be bound to metrics-server;
4. Pod/Node metrics must be exposed by the kubelet through the Summary API.
[root@master01 ~]# cd /etc/kubernetes/manifests/
[root@master01 manifests]# ls
kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
[root@master01 manifests]# pwd
/etc/kubernetes/manifests
On a cluster whose control plane runs as static Pods (as in a kubeadm install), go to the directory above and edit kube-apiserver.yaml. The key addition is --enable-aggregator-routing=true; the other flags should already be present by default. The relevant configuration looks like this:
...
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --enable-aggregator-routing=true
...
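Because kube-apiserver.yaml is a static Pod manifest, the kubelet restarts the apiserver automatically once the file is saved; you can watch it come back (a sketch, assuming kubeadm's standard component label):

# Wait for the restarted apiserver Pod to become Ready again
kubectl -n kube-system get pods -l component=kube-apiserver -w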
Download the source
git clone https://github.com/kubernetes-sigs/metrics-server.git
Installation
# The deploy directory contains the following files; have a look through them
[root@master01 kubernetes]# pwd
/root/monitor/metrics-server/deploy/kubernetes
[root@master01 kubernetes]# ll
total 28
-rw-r--r-- 1 root root  397 Mar 31 14:23 aggregated-metrics-reader.yaml
-rw-r--r-- 1 root root  303 Mar 31 14:23 auth-delegator.yaml
-rw-r--r-- 1 root root  324 Mar 31 14:23 auth-reader.yaml
-rw-r--r-- 1 root root  298 Mar 31 14:23 metrics-apiservice.yaml
-rw-r--r-- 1 root root 1184 Mar 31 14:23 metrics-server-deployment.yaml
-rw-r--r-- 1 root root  297 Mar 31 14:23 metrics-server-service.yaml
-rw-r--r-- 1 root root  532 Mar 31 14:23 resource-reader.yaml
[root@master01 kubernetes]#
# Deploy
[root@master01 kubernetes]# kubectl apply -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master01 kubernetes]#
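Before troubleshooting any kubectl top errors, it is worth confirming that the v1beta1.metrics.k8s.io APIService registered above actually points at a live Service (a quick check):

# AVAILABLE should eventually become True; False usually means one of the pitfalls below
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system get svc metrics-server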
Pitfall 1
[root@master01 kubernetes]# kubectl top node
error: metrics not available yet
[root@master01 kubernetes]#
# The metrics-server error log
unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:master01: unable to fetch metrics from Kubelet master01 (master01): Get https://master01:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup master01 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:master03: unable to fetch metrics from Kubelet master03 (master03): Get https://master03:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup master03 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:master02: unable to fetch metrics from Kubelet master02 (master02): Get https://master02:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup master02 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:node01: unable to fetch metrics from Kubelet node01 (node01): Get https://node01:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup node01 on 10.96.0.10:53: no such host]
The fix for this pitfall is the --kubelet-insecure-tls flag: add it to metrics-server-deployment.yaml, then delete and re-create the Deployment.
Pitfall 2
[root@master01 kubernetes]# kubectl top node
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
[root@master01 kubernetes]#
[root@master01 kubernetes]# kubectl logs -f metrics-server-64b57fd654-bt6fx -n kube-system
E0331 07:03:59.658787       1 reststorage.go:135] unable to fetch node metrics for node "master03": no metrics known for node
E0331 07:03:59.658793       1 reststorage.go:135] unable to fetch node metrics for node "node01": no metrics known for node
...
[root@master01 kubernetes]#
The fix is to add the --kubelet-preferred-address-types=InternalIP startup flag to metrics-server-deployment.yaml, so that the arguments end up as shown below, then delete and re-create the Deployment.
...
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
...
Verification
Note that errors may appear right after startup; wait a moment and try again. If an error persists, check the logs promptly.
[root@master01 kubernetes]# kubectl top pods
W0331 15:07:34.977285   30613 top_pod.go:274] Metrics not available for pod default/default-deployment-nginx-fffdfd45-vh8sc, age: 3h45m6.977273348s
error: Metrics not available for pod default/default-deployment-nginx-fffdfd45-vh8sc, age: 3h45m6.977273348s
[root@master01 kubernetes]#
[root@master01 ~]# kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01   179m         8%     2263Mi          61%
master02   139m         6%     2184Mi          59%
master03   146m         7%     2280Mi          61%
node01     107m         5%     1825Mi          49%
[root@master01 ~]# kubectl top pods
NAME                                      CPU(cores)   MEMORY(bytes)
default-deployment-nginx-fffdfd45-vh8sc   0m           1Mi
[root@master01 ~]#