0x00 Overview
Prometheus is an open-source, community-driven project for monitoring, alerting and time-series storage, with roots in Google's BorgMon project. In Kubernetes, today's most common container-management system, Prometheus is the monitoring stack it is usually paired with. It mainly covers:
- Nodes: host-level metrics such as CPU, memory, network throughput and bandwidth usage, disk I/O and disk usage, collected by node-exporter.
- Key container metrics: detailed CPU and memory usage of containers in the cluster, plus Network, FileSystem and Subcontainer data, collected by cAdvisor.
- Applications deployed on the Kubernetes cluster: mainly Pods, Services, Ingresses and Endpoints, collected via the Blackbox Exporter and the kube-apiserver API.
Prometheus itself ships with service discovery for a number of target sources; below is a screenshot from the official GitHub repository listing the discovery mechanisms currently provided:
As the screenshot shows, Prometheus can automatically discover Kubernetes monitoring targets. The project also provides an official example configuration file for this, and that file is what we will walk through in this post.
0x01 Configuration file walkthrough
Let's start with the official configuration file itself:
# A scrape configuration for running Prometheus on a Kubernetes cluster.
# This uses separate scrape configs for cluster components (i.e. API server, node)
# and services to allow each to use different authentication configs.
#
# Kubernetes labels will be added as Prometheus labels on metrics via the
# `labelmap` relabeling action.
#
# If you are using Kubernetes 1.7.2 or earlier, please take note of the comments
# for the kubernetes-cadvisor job; you will need to edit or remove this job.

# Scrape config for API servers.
#
# Kubernetes exposes API servers as endpoints to the default/kubernetes
# service so this uses `endpoints` role and uses relabelling to only keep
# the endpoints associated with the default/kubernetes service using the
# default named port `https`. This works for single API server deployments as
# well as HA API server deployments.
scrape_configs:
- job_name: 'kubernetes-apiservers'

  kubernetes_sd_configs:
  - role: endpoints

  # Default to scraping over https. If required, just disable this or change to
  # `http`.
  scheme: https

  # This TLS & bearer token file config is used to connect to the actual scrape
  # endpoints for cluster components. This is separate to discovery auth
  # configuration because discovery & scraping are two separate concerns in
  # Prometheus. The discovery auth config is automatic if Prometheus runs inside
  # the cluster. Otherwise, more config options have to be provided within the
  # <kubernetes_sd_config>.
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    # If your node certificates are self-signed or use a different CA to the
    # master CA, then disable certificate verification below. Note that
    # certificate verification is an integral part of a secure infrastructure
    # so this should only be disabled in a controlled environment. You can
    # disable certificate verification by uncommenting the line below.
    #
    # insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

  # Keep only the default/kubernetes service endpoints for the https port. This
  # will add targets for each API server which Kubernetes adds an endpoint to
  # the default/kubernetes service.
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https

# Scrape config for nodes (kubelet).
#
# Rather than connecting directly to the node, the scrape is proxied though the
# Kubernetes apiserver. This means it will work if Prometheus is running out of
# cluster, or can't connect to nodes for some other reason (e.g. because of
# firewalling).
- job_name: 'kubernetes-nodes'

  # Default to scraping over https. If required, just disable this or change to
  # `http`.
  scheme: https

  # This TLS & bearer token file config is used to connect to the actual scrape
  # endpoints for cluster components. This is separate to discovery auth
  # configuration because discovery & scraping are two separate concerns in
  # Prometheus. The discovery auth config is automatic if Prometheus runs inside
  # the cluster. Otherwise, more config options have to be provided within the
  # <kubernetes_sd_config>.
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

  kubernetes_sd_configs:
  - role: node

  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics

# Scrape config for Kubelet cAdvisor.
#
# This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
# (those whose names begin with 'container_') have been removed from the
# Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
# retrieve those metrics.
#
# In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
# HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
# in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
# the --cadvisor-port=0 Kubelet flag).
#
# This job is not necessary and should be removed in Kubernetes 1.6 and
# earlier versions, or it will cause the metrics to be scraped twice.
- job_name: 'kubernetes-cadvisor'

  # Default to scraping over https. If required, just disable this or change to
  # `http`.
  scheme: https

  # This TLS & bearer token file config is used to connect to the actual scrape
  # endpoints for cluster components. This is separate to discovery auth
  # configuration because discovery & scraping are two separate concerns in
  # Prometheus. The discovery auth config is automatic if Prometheus runs inside
  # the cluster. Otherwise, more config options have to be provided within the
  # <kubernetes_sd_config>.
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

  kubernetes_sd_configs:
  - role: node

  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

# Scrape config for service endpoints.
#
# The relabeling allows the actual service scrape endpoint to be configured
# via the following annotations:
#
# * `prometheus.io/scrape`: Only scrape services that have a value of `true`
# * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
#   to set this to `https` & most likely set the `tls_config` of the scrape config.
# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
# * `prometheus.io/port`: If the metrics are exposed on a different port to the
#   service then set this appropriately.
- job_name: 'kubernetes-service-endpoints'

  kubernetes_sd_configs:
  - role: endpoints

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name

# Example scrape config for probing services via the Blackbox Exporter.
#
# The relabeling allows the actual service scrape endpoint to be configured
# via the following annotations:
#
# * `prometheus.io/probe`: Only probe services that have a value of `true`
- job_name: 'kubernetes-services'

  metrics_path: /probe
  params:
    module: [http_2xx]

  kubernetes_sd_configs:
  - role: service

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
    action: keep
    regex: true
  - source_labels: [__address__]
    target_label: __param_target
  - target_label: __address__
    replacement: blackbox-exporter.example.com:9115
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    target_label: kubernetes_name

# Example scrape config for probing ingresses via the Blackbox Exporter.
#
# The relabeling allows the actual ingress scrape endpoint to be configured
# via the following annotations:
#
# * `prometheus.io/probe`: Only probe services that have a value of `true`
- job_name: 'kubernetes-ingresses'

  metrics_path: /probe
  params:
    module: [http_2xx]

  kubernetes_sd_configs:
  - role: ingress

  relabel_configs:
  - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
    regex: (.+);(.+);(.+)
    replacement: ${1}://${2}${3}
    target_label: __param_target
  - target_label: __address__
    replacement: blackbox-exporter.example.com:9115
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_ingress_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_ingress_name]
    target_label: kubernetes_name

# Example scrape config for pods
#
# The relabeling allows the actual pod scrape endpoint to be configured via the
# following annotations:
#
# * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
# * `prometheus.io/port`: Scrape the pod on the indicated port instead of the
#   pod's declared ports (default is a port-free target if none are declared).
- job_name: 'kubernetes-pods'

  kubernetes_sd_configs:
  - role: pod

  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
Note that this configuration only works as written when Prometheus is deployed inside the Kubernetes cluster, i.e. in-cluster mode; discovery then authenticates automatically via the mounted service-account credentials. Running out of cluster requires extra options, as sketched below.
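For contrast, here is a minimal sketch (not part of the official file) of what the apiservers job could look like when Prometheus runs outside the cluster. The apiserver address and the credential paths are placeholders; as the comments in the official file hint, discovery itself now needs explicit options inside <kubernetes_sd_config>:

scrape_configs:
# Hypothetical out-of-cluster variant of the apiservers job.
- job_name: 'kubernetes-apiservers'
  scheme: https
  tls_config:
    ca_file: /etc/prometheus/k8s-ca.crt            # placeholder path to the cluster CA
  bearer_token_file: /etc/prometheus/k8s-token     # placeholder path to a token with read access
  kubernetes_sd_configs:
  - role: endpoints
    api_server: https://kube-apiserver.example.com:6443   # placeholder apiserver address
    tls_config:
      ca_file: /etc/prometheus/k8s-ca.crt          # same CA, now needed for discovery as well
    bearer_token_file: /etc/prometheus/k8s-token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https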
0x02 kubernetes-apiservers
This job mainly ensures that Prometheus can reach the kube-apiserver, which is what all of the service discovery goes through. A look at the discovery code shows the supported roles: node, service, ingress, pod and endpoints:
switch d.role {
case "endpoints":
    var wg sync.WaitGroup

    for _, namespace := range namespaces {
        elw := cache.NewListWatchFromClient(rclient, "endpoints", namespace, nil)
        slw := cache.NewListWatchFromClient(rclient, "services", namespace, nil)
        plw := cache.NewListWatchFromClient(rclient, "pods", namespace, nil)
        eps := NewEndpoints(
            log.With(d.logger, "role", "endpoint"),
            cache.NewSharedInformer(slw, &apiv1.Service{}, resyncPeriod),
            cache.NewSharedInformer(elw, &apiv1.Endpoints{}, resyncPeriod),
            cache.NewSharedInformer(plw, &apiv1.Pod{}, resyncPeriod),
        )
        go eps.endpointsInf.Run(ctx.Done())
        go eps.serviceInf.Run(ctx.Done())
        go eps.podInf.Run(ctx.Done())

        for !eps.serviceInf.HasSynced() {
            time.Sleep(100 * time.Millisecond)
        }
        for !eps.endpointsInf.HasSynced() {
            time.Sleep(100 * time.Millisecond)
        }
        for !eps.podInf.HasSynced() {
            time.Sleep(100 * time.Millisecond)
        }
        wg.Add(1)
        go func() {
            defer wg.Done()
            eps.Run(ctx, ch)
        }()
    }
    wg.Wait()
case "pod":
    var wg sync.WaitGroup
    for _, namespace := range namespaces {
        plw := cache.NewListWatchFromClient(rclient, "pods", namespace, nil)
        pod := NewPod(
            log.With(d.logger, "role", "pod"),
            cache.NewSharedInformer(plw, &apiv1.Pod{}, resyncPeriod),
        )
        go pod.informer.Run(ctx.Done())

        for !pod.informer.HasSynced() {
            time.Sleep(100 * time.Millisecond)
        }
        wg.Add(1)
        go func() {
            defer wg.Done()
            pod.Run(ctx, ch)
        }()
    }
    wg.Wait()
case "service":
    var wg sync.WaitGroup
    for _, namespace := range namespaces {
        slw := cache.NewListWatchFromClient(rclient, "services", namespace, nil)
        svc := NewService(
            log.With(d.logger, "role", "service"),
            cache.NewSharedInformer(slw, &apiv1.Service{}, resyncPeriod),
        )
        go svc.informer.Run(ctx.Done())

        for !svc.informer.HasSynced() {
            time.Sleep(100 * time.Millisecond)
        }
        wg.Add(1)
        go func() {
            defer wg.Done()
            svc.Run(ctx, ch)
        }()
    }
    wg.Wait()
case "ingress":
    var wg sync.WaitGroup
    for _, namespace := range namespaces {
        ilw := cache.NewListWatchFromClient(reclient, "ingresses", namespace, nil)
        ingress := NewIngress(
            log.With(d.logger, "role", "ingress"),
            cache.NewSharedInformer(ilw, &extensionsv1beta1.Ingress{}, resyncPeriod),
        )
        go ingress.informer.Run(ctx.Done())

        for !ingress.informer.HasSynced() {
            time.Sleep(100 * time.Millisecond)
        }
        wg.Add(1)
        go func() {
            defer wg.Done()
            ingress.Run(ctx, ch)
        }()
    }
    wg.Wait()
case "node":
    nlw := cache.NewListWatchFromClient(rclient, "nodes", api.NamespaceAll, nil)
    node := NewNode(
        log.With(d.logger, "role", "node"),
        cache.NewSharedInformer(nlw, &apiv1.Node{}, resyncPeriod),
    )
    go node.informer.Run(ctx.Done())

    for !node.informer.HasSynced() {
        time.Sleep(100 * time.Millisecond)
    }
    node.Run(ctx, ch)
default:
    level.Error(d.logger).Log("msg", "unknown Kubernetes discovery kind", "role", d.role)
}
0x03 kubernetes-nodes
Once a node has been discovered, its metrics are fetched through /api/v1/nodes/${1}/proxy/metrics. Concretely, for a node named node-1 the relabel rules set __address__ to kubernetes.default.svc:443 and __metrics_path__ to /api/v1/nodes/node-1/proxy/metrics, so the kubelet is scraped through the apiserver proxy rather than contacted directly.
0x04 kubernetes-cadvisor
cAdvisor is already embedded in the kubelet, so discovering a node also discovers its cAdvisor. Container metrics are scraped through /api/v1/nodes/${1}/proxy/metrics/cadvisor (on Kubernetes 1.7.0-1.7.2 the official comments point the job at cAdvisor's own port instead, as sketched below).
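For those older versions, here is a sketch of the same job with only the final path replacement changed, following the comment in the official file; everything else is copied from the kubernetes-cadvisor job above:

# Variant of the kubernetes-cadvisor job for Kubernetes 1.7.0-1.7.2, per the
# comment in the official file: scrape cAdvisor's own HTTP endpoint on :4194.
- job_name: 'kubernetes-cadvisor'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}:4194/proxy/metrics   # cAdvisor port instead of /metrics/cadvisor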
0x05 kubernetes-services and kubernetes-ingresses
These two resources are monitored in much the same way: both require the Blackbox Exporter to be installed, which then acts like a probe, periodically hitting the target and judging the availability of the Service or Ingress from the returned HTTP status code.
PS: my own setup differs slightly from the official one here. The official config points at the exporter like this:
- target_label: __address__
  replacement: blackbox-exporter.example.com:9115
which roughly assumes you create an Ingress for the Blackbox Exporter and reach it from outside the cluster; that is not ideal for either efficiency or security. I usually point straight at the in-cluster DNS name instead (see the Service sketch below):
- target_label: __address__
  replacement: blackbox-exporter.kube-system:9115
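For that in-cluster address to resolve there has to be a Service named blackbox-exporter in the kube-system namespace listening on 9115. A minimal sketch, assuming the exporter pods carry the label app: blackbox-exporter; adjust the selector to match your actual Blackbox Exporter deployment:

apiVersion: v1
kind: Service
metadata:
  name: blackbox-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9115            # default blackbox-exporter port
    targetPort: 9115
    protocol: TCP
  selector:
    app: blackbox-exporter   # assumed label on the exporter pods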
Of course, the source also makes clear that not every Service or Ingress gets health-checked: to have a workload probed you must add annotations to the YAML you deploy it with. For Services and Ingresses, the probe jobs above key off the annotation prometheus.io/probe: 'true' (not prometheus.io/scrape). The manifest below shows where such annotations live; it uses prometheus.io/scrape: 'true', which is the annotation consumed by the metrics-scraping jobs covered in the next sections:
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: prometheus-node-exporter
  namespace: kube-system
  labels:
    app: prometheus
    component: node-exporter
spec:
  clusterIP: None
  ports:
  - name: prometheus-node-exporter
    port: 9100
    protocol: TCP
  selector:
    app: prometheus
    component: node-exporter
  type: ClusterIP
0x06 kubernetes-pods
Pods likewise need annotations to be monitored:
- prometheus.io/scrape: set to true to make the pod a scrape target.
- prometheus.io/path: the metrics path, /metrics by default.
- prometheus.io/port: the port to scrape.
Notice what this implies: the job is not about the pod's resource metrics, which cAdvisor already collects as described above; it monitors the application running inside the pod. Anyone who has written an exporter will recognise the idea: if the application in your pod exposes Prometheus metrics, add the annotations and those metrics will be scraped on schedule. A minimal annotated Deployment is sketched below.
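In this sketch the application name, image, port and path are made-up examples, not anything from the official config; only the annotation keys matter:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: 'true'   # opt the pod in to scraping
        prometheus.io/path: /metrics   # can be omitted, /metrics is the default
        prometheus.io/port: '8080'     # port where the app exposes metrics
    spec:
      containers:
      - name: my-app
        image: my-app:latest           # placeholder image
        ports:
        - containerPort: 8080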
0x07 kubernetes-service-endpoints
Service endpoints also need annotations:
- prometheus.io/scrape: set to true to make the endpoints scrape targets.
- prometheus.io/path: the metrics path, /metrics by default.
- prometheus.io/port: the port to scrape.
- prometheus.io/scheme: http by default; change to https if the endpoint is only served over TLS.
This works essentially the same way as the pods job, except that the metrics are collected from Service endpoints; a minimal annotated Service is sketched below.
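In this sketch the service name, namespace, port and path are made-up examples; only the annotation keys come from the config above:

apiVersion: v1
kind: Service
metadata:
  name: my-app                          # hypothetical application service
  namespace: default
  annotations:
    prometheus.io/scrape: 'true'        # opt the endpoints in to scraping
    prometheus.io/scheme: 'http'        # switch to 'https' for TLS-only endpoints
    prometheus.io/path: '/metrics'      # can be omitted, /metrics is the default
    prometheus.io/port: '8080'          # port where the app exposes metrics
spec:
  ports:
  - name: metrics
    port: 8080
    protocol: TCP
  selector:
    app: my-app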
My own view: if an application is deployed only as pods, with no Service, annotating the pods and collecting through kubernetes-pods is the only option. If there is a Service, annotate the Service instead and skip the pod annotations; service endpoints ultimately resolve to the pods anyway.
0x08 Summary
Scrape config summary
- kubernetes-service-endpoints and kubernetes-pods collect metrics from the applications themselves; of course, not every application exposes a metrics endpoint.
- kubernetes-ingresses and kubernetes-services health-check Services and Ingresses through the Blackbox Exporter.
- kubernetes-cadvisor and kubernetes-nodes discover the nodes and monitor node and container metrics such as CPU.
Service-discovery source code
See client-go and Prometheus's Kubernetes auto-discovery: watching for resource changes in the cluster is implemented with informers; do not poll the kube-apiserver API yourself.
This configuration also requires deploying some supporting components for Prometheus to monitor Kubernetes, such as the blackbox-exporter. And because auto-discovery has to read cluster information, RBAC authorization must be set up as well. For details, see:
github