Cloud-Native Monitoring with Prometheus: Prometheus Query Language


Prometheus Query Language

  Prometheus ships with its own functional expression query language, PromQL (Prometheus Query Language). It lets users select and aggregate time series data in real time, making it easy to query and retrieve data from Prometheus. The result of an expression can be rendered as a graph in the browser, shown as a table, or consumed by external systems through the HTTP API. Although the name PromQL ends in QL, it is not a SQL-like language, because SQL tends to lack the expressive power needed for computations over time series.

  PromQL is highly expressive: besides the usual operators, it provides a large set of built-in functions for advanced processing, letting the monitoring data speak for itself. The three main feature areas of day-to-day work, namely data querying, visualization, and alert configuration, are all built on PromQL.

  PromQL is the core of hands-on Prometheus work, the foundation of most Prometheus use cases, and required knowledge for anyone using Prometheus.

1. A First Look at PromQL

  Let's start with a few examples to get a feel for PromQL and see how it helps users understand system performance through metrics.

  Case 1: get the amount of free memory on the current host, in MB.

node_memory_free_bytes_total / (1024 * 1024)

  Explanation: node_memory_free_bytes_total is an instant vector expression, and it returns an instant vector. It reports the free memory on the current host; the sample value is in bytes by default, so we divide to convert the unit to MB.
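
  As a hedged aside: on recent node_exporter versions the free-memory metric is typically exposed as node_memory_MemFree_bytes (an assumption about the exporter version in use); under that assumption the equivalent query would be:

node_memory_MemFree_bytes / (1024 * 1024)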

  Case 2: based on 2 hours of sample data, predict whether the disk will fill up within the next 24 hours.

predict_linear(node_filesystem_free[2h], 24*3600) < 0

  Explanation: the predict_linear(v range-vector, t scalar) function predicts the value of the time series v at t seconds in the future. It fits a simple linear regression over the samples in the time window and extrapolates the trend. The query above takes the free disk space over the past 2 hours and computes whether it will drop below 0 within the next 24 hours. To turn this linear prediction into an alert, it can be extended as follows.

ALERT DiskWillFullIn24Hours
    IF predict_linear(node_filesystem_free[2h], 24*3600) < 0

  Case 3: nine common PromQL queries for http_requests_total (the total number of HTTP requests).

# 1. Query the total number of HTTP requests.
http_requests_total
# 2. Return all time series of the metric http_requests_total with the given job and handler labels.
http_requests_total{job="apiserver", handler="/api/comments"}
# 3. Conditional query: the total number of requests with status code 200.
http_requests_total{code="200"}
# 4. Range query: the request totals over the last 5 minutes.
http_requests_total{}[5m]
# 5. Built-in functions: the sum of all HTTP requests in the system.
sum(http_requests_total)
# 6. Regular expressions: select series whose job matches a pattern (e.g. jobs ending in "server").
http_requests_total{job=~".*server"}
# 7. Select data for all HTTP status codes except 4xx.
http_requests_total{status!~"4.."}
# 8. Subquery: evaluate the 5-minute rate at a 1-per-minute resolution over the last 30 minutes.
rate(http_requests_total[5m])[30m:1m]
# 9. The rate function: the per-second rate of requests, averaged over the last 5 minutes, returned as a time series.
rate(http_requests_total[5m])

  As shown above, we wrote nine different representative monitoring queries against the single metric http_requests_total, which gives a sense of how flexible PromQL is.

1.1 The Four Data Types of PromQL

  In the examples above we already met the instant vector and the range vector; together with scalars and strings they form the four data types of the Prometheus expression language. A short example of each type follows the list below.

1. Instant vector: a set of time series, each containing a single sample, all sharing the same timestamp. In other words, the expression returns only the most recent sample value of each matching time series. An expression that returns an instant vector is called an instant vector expression.

2. Range vector: a set of time series, each containing a range of samples over a period of time.

3. Scalar: a single floating-point value.

4. String: a simple string value.
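
  As a brief, hedged illustration of the four types, each of the following expressions evaluates to one of them (the metric names are only examples):

http_requests_total      # instant vector: the latest sample of every matching series
http_requests_total[5m]  # range vector: the last 5 minutes of samples for every matching series
3.14 * 2                 # scalar: a plain floating-point value
"this is a string"       # string: a literal string value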

1.2 Time Series

  Unlike a relational database such as MySQL, a time series database produces data points at regular intervals, and these points are stored in the order of their timestamps and values; this is the vector mentioned above. With time on the horizontal axis and the series on the vertical axis, connecting these data points forms a matrix.

  Every point in the matrix is called a sample, and a sample consists of three parts:

    • Metric: the metric name plus a set of labels, for example request_total{path="/status",method="GET"}.
    • Timestamp: by default accurate to the millisecond.
    • Value: by default a Float64 floating-point number.

  Prometheus periodically collects new data points for all series.

1.3 Metrics

  The metrics of a time series can be stored as key-value pairs, following the design of Bigtable (the Google paper), as shown in the figure below.

  Take http_requests_total{status="401",method="GET"}  @1434317560938  94358 from the figure as an example. In the key-value model, 94358 is the Value (the sample value), and everything before it, http_requests_total{status="401",method="GET"}  @1434317560938, is the Key. The Key in turn consists of three parts: the metric name (http_requests_total in this example), the labels ({status="401",method="GET"}), and the timestamp (@1434317560938).

  In the Prometheus world, all numeric values are 64-bit. Each time series stores 64-bit timestamps and 64-bit sample values.

  As the figure shows, Prometheus metrics can be written in two forms. The first is the classic form.

<Metric Name>{<Label name>=<label value>, ...}

  Here, the metric name reflects what the monitored sample means. Metric names may contain only ASCII letters, digits, underscores, and colons, and must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*.

  Labels capture the different dimensions of a sample. Using these dimensions, Prometheus can filter, aggregate, and otherwise process the sample data to derive new time series. Label names may contain only ASCII letters, digits, and underscores, and must match the regular expression [a-zA-Z_][a-zA-Z0-9_]*.

  Running the query go_gc_duration_seconds{quantile="0"} in the Prometheus Graph console produces the graph shown in the figure.

  The second form comes from Prometheus internals.

  {__name__="<metric name>", <label name>=<label value>, ...}

  The two forms denote the same time series. The second form is Prometheus's internal representation; __name__ is a reserved keyword, and the official recommendation is to use it only internally. In Prometheus's underlying implementation the metric name is actually stored as a label of the form __name__=<metric name>: __name__ is the special label that carries the metric name. Label values may contain any Unicode characters.

  Running the query {__name__="go_gc_duration_seconds",quantile="0"} in the Prometheus Graph console produces the same result:
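
  As a small sketch of this equivalence, the following two selectors return the same series; the second merely spells out the internal __name__ label explicitly:

go_gc_duration_seconds{quantile="0"}
{__name__="go_gc_duration_seconds", quantile="0"}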

2. The Four Selectors in PromQL

  When a metric comes from many different kinds of servers or applications, engineers usually need to narrow things down, for example to look only at the series carrying a particular instance or handler label out of countless series. This is what label filtering is for, and label filtering is performed by selectors.

http_requests_total{job="Helloworld",status="200",method="POST",handler="/api/comments"}

  This is a selector: it returns the http_requests_total series whose job is HelloWorld, whose status is 200, whose method is POST, and whose handler label is "/api/comments". It is an instant vector selector over the HTTP request total.

  例子中的 job="HelloWorld"是一個匹配器(Matcher),一個選擇器中可以有多個匹配器,它們組合在一起使用。

  The rest of this chapter introduces PromQL from four angles: matchers, instant vector selectors, range vector selectors, and the offset modifier.

2.1 Matchers

  Matchers operate on labels: label matchers filter time series, and Prometheus supports both exact matching and regular-expression matching.

2.1.1. Equality Matcher (=)

  The equality matcher selects labels whose value is exactly the provided string. The example below chains several equality matchers to filter on multiple conditions.

http_requests_total{job="Helloworld",status="200",method="POST",handler="/api/comments"}

  Note that an empty or missing label can also be matched with label="". For a label that does not exist, such as a demo label, go_gc_duration_seconds_count and go_gc_duration_seconds_count{demo=""} behave identically, as the comparison below shows:
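
  As a hedged sketch, assuming no scraped series actually carries a demo label, the following two queries return identical results:

go_gc_duration_seconds_count
go_gc_duration_seconds_count{demo=""}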

2.1.2. Inequality Matcher (!=)

  The inequality matcher (negative equality matcher) selects labels whose value is not equal to the provided string; it is the exact opposite of the equality matcher. For example, to see the total number of HTTP requests whose job is not HelloWorld, use the following inequality matcher.

http_requests_total{job!="HelloWorld"}

2.1.3. Regular Expression Matcher (=~)

  The regular expression matcher selects labels that match the provided regular expression. Prometheus's regular expressions are fully anchored: the pattern a matches only the string a, not ab, ba, or abc. If you do not want this anchoring behavior, add ".*" before or after the pattern. For example, the query below returns the total number of HTTP requests whose job starts with Hello.

http_requests_total{job=~"Hello.*"}

  http_requests_total is exactly equivalent to {__name__="http_requests_total"}, and the __name__ label accepts the same four matchers (=, !=, =~, !~) as any other label. For example, the query below selects every metric whose name starts with Hello.

{__name_-=~"Hello.*"}

  To look at the total number of HTTP requests whose job starts with Hello, whose environment is production (prod), test (test), or pre-release (pre), and whose response code is not 200, a query like this can be used.

http_requests_total{job="Hello.*",env=~"prod|test|pre",code!="200"}

  Because every PromQL vector selector must contain either a metric name or at least one label matcher that does not match the empty string, the following selectors, drawing on the Prometheus documentation, are invalid:

{job=~".*"}    # invalid! .* also matches the empty string
{job=""}       # invalid! the only matcher matches the empty string

  By contrast, the following expressions are valid:

{job!=""}                  # valid: the matcher does not match the empty string
{job=~".+"}                # valid: .+ requires at least one character
{job=~".*",method="get"}   # valid: method="get" does not match the empty string
{job=~"",method="post"}    # valid: method="post" does not match the empty string
{job=~".+",method="post"}  # valid: both matchers are non-empty

2.1.4. Negative Regular Expression Matcher (!~)

  The negative regular expression matcher selects labels that do not match the provided regular expression. PromQL's regular expressions use RE2 syntax, and RE2 does not support negative lookahead, so !~ exists as the alternative for excluding label values by regular expression. Within a single selector you can apply several matchers to the same label. The example below finds all filesystems, and their sizes, for the job named node that are mounted under /prometheus but not under /prometheus/user.

node_filesystem_size_bytes{job="node",mountpoint=~"/prometheus/.*",mountpoint!~"/prometheus/user/.*"}

  PromQL uses the RE2 regular-expression engine. RE2 comes from Go and is designed to run in linear time, which suits PromQL's time-series workloads well. As mentioned above, however, RE2 does not support negative lookahead assertions or backreferences, and it lacks a number of other advanced features.


 Going further:

  The four matchers =, !=, =~, and !~ are extremely useful in practice, but applying regular-expression matchers label value by label value quickly gets unwieldy. HTTP status codes come in 1xx, 2xx, 3xx, 4xx, and 5xx families, and counting every 5xx response this way would turn the query into http_requests_total{job="HelloWorld",status=~"500",status=~"501",status=~"502",status=~"503",status=~"504",status=~"505"…}

  As we know, 5xx means a server error: the server ran into an internal problem while trying to handle the request. These errors usually originate from the server itself, not from the request.

  500 (Internal Server Error): the server encountered an error and could not complete the request.
  501 (Not Implemented): the server does not support the functionality required to fulfil the request, for example when it does not recognize the request method.
  502 (Bad Gateway): the server, acting as a gateway or proxy, received an invalid response from the upstream server.
  503 (Service Unavailable): the server is currently unavailable (overloaded or down for maintenance); usually a temporary state.
  504 (Gateway Timeout): the server, acting as a gateway or proxy, did not receive a timely response from the upstream server.
  505 (HTTP Version Not Supported): the server does not support the HTTP protocol version used in the request.

    ……

  To keep such queries manageable, the following optimizations can be applied:

  Optimization 1: separate the alternatives with "|" inside a single regular expression: http_requests_total{job="HelloWorld",status=~"500|501|502|503|504|505"}.

  Optimization 2: roll these return codes up into a single 5xx label value when they are collected, so the selector simplifies to http_requests_total{job="HelloWorld",status=~"5xx"}.

  Optimization 3: to select all HTTP status codes that do not start with 4, use http_requests_total{status!~"4.."}.

2.2 Instant Vector Selectors

  An instant vector selector returns an instant vector containing, for each matching time series, the most recent sample at the query's evaluation timestamp; the result is a list of zero or more time series. In its simplest form, you specify only a metric name, such as http_requests_total, which yields an instant vector with one element per time series carrying that name. These series can be filtered further by adding a set of label matchers in curly braces {}, for example:

http_requests_total{job="HelloWorld",group="middlueware"}
http_requests_total{}    選擇當前最新的數據

  Instant vectors are not meant to return stale data, and here the behavior differs between Prometheus 1.x and 2.x.

  Prometheus 1.x returns any series whose latest sample is no more than 5 minutes older than the query time, which covers most situations. But if a label is added within that 5-minute window, say the series first queried as http_requests_total{job="HelloWorld"} becomes http_requests_total{job="HelloWorld",group="middleware"}, a subsequent instant query will return both the old and the new series and the data is counted twice. That is a problem.

  Prometheus 2.x handles this with staleness markers. If a time series disappears from one scrape to the next, or Prometheus's service discovery can no longer find the target, a staleness marker is appended to the series. When an instant vector selector is evaluated, besides finding the series that match, Prometheus examines the most recent sample within the 5 minutes before the evaluation time: if it is a normal sample, the series is returned in the instant vector; if it is a staleness marker, the series is left out. Note that if an exporter exposes its own timestamps, staleness markers and the 2.x staleness handling do not apply, and the affected series fall back to the old 5-minute behavior.

2.3 Range Vector Selectors

  A range vector selector returns a set of time series, each containing a range of samples over a period of time. Unlike the instant vector selector, it selects samples reaching back a certain distance from the current time. The range is defined in square brackets [] at the end of the selector and specifies how far back in time samples should be fetched for each returned series. For example, the query below selects all HTTP request samples recorded within the last 5 minutes; the [5m] turns the instant vector selector into a range vector selector.

http_requests_total{}[5m]

  The time range is written as an integer followed by one of these units: seconds (s), minutes (m), hours (h), days (d), weeks (w), or years (y). The number must be an integer: 38m is valid, but 2h 15m and 1.5h are not. Note that a year here ignores leap years and is always 60*60*24*365 seconds.
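
  A few hedged examples of the duration syntax described above:

http_requests_total[38m]    # valid: an integer with a single unit
http_requests_total[2h]     # valid
http_requests_total[1.5h]   # invalid: fractional durations are not accepted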

  One more point about range vector selectors: they return every raw sample within the range, and even though the scrape interval is the same, the timestamps of different time series usually do not line up, as shown below:

http_requests_total{code="200",job="HelloWorld",method="get"}=[
1@1518096812.678
1@1518096817.678
1@1518096822.678
1@1518096827.678
1@1518096832.678
1@1518096837.678
]
http_requests_total{code="200",job="HelloWorld",method="get"}=[
4@1518096813.233
4@1518096818.233
4@1518096823.233
4@1518096828.233
4@1518096833.233
4@1518096838.233
]

  This is because a range vector keeps the original timestamps of the samples, and scrapes of different targets are deliberately spread out to even the load. We can control the scrape and rule-evaluation frequency, say once every 5 seconds (the first group at 12, 17, 22, 27, 32, 37; the second at 13, 18, 23, 28, 33, 38), but we cannot force the timestamps to align exactly (1@1518096812.678 versus 4@1518096813.233): with hundreds or thousands of targets, each 5-second scrape cycle processes them at slightly different offsets, so the series will always sit at slightly different points in time. In practice this rarely matters (the occasional harmless jitter has no impact on the system), because metric monitoring such as Prometheus is positioned to be accurate about trends rather than exact in the way log-based monitoring is.

  Finally, let's put this section's material to work with a few hands-on CPU-related PromQL cases to consolidate the theory.

  Case 1: compute the CPU usage of system processes over the last 2 minutes.

  rate is a built-in PromQL function that returns the average per-second increase over a time window: here, the total increase within the 2-minute window divided by its length.

rate(node_cpu_seconds_total{}[2m])

  Case 2: compute overall CPU utilization, obtained by subtracting the idle CPU share (without removes the listed labels from the result and keeps all the others).

  without removes the listed labels from the aggregation result and keeps the rest; by does the opposite, keeping only the listed labels and dropping the others. With without and by, samples can be aggregated along whichever label dimensions you need; a by-based counterpart of this query is sketched after the example below.

  avg without(cpu) averages after dropping the cpu label, i.e. the result is not broken down per CPU core.

1 - avg without(cpu) (rate(node_cpu_seconds_total{mode="idle"}[2m]))
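
  As the hedged counterpart mentioned above, the same idle-CPU average can be written with by instead of without, keeping only the instance label (assuming the series carry an instance label):

1 - avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[2m]))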

  Case 3: node_cpu_seconds_total exposes all of the CPU information for the host. After aggregating with avg, use by to split the result per instance, so that each instance's data can be queried separately.

  irate with [5m]: computes the rate from the two most recent data points within the given time range, which suits fast-moving counters.

avg(irate(node_cpu_seconds_total{job="node-exporter"}[5m])) by (instance)


Going further:

1) Range vector selectors are often combined with the rate function. For example, the subquery below evaluates the 5-minute rate of http_requests_total at a 1-per-minute resolution over the past 30 minutes:

rate(http_requests_total{}[5m])[30m:1m]

2) A range vector expression cannot be plotted directly in the Graph view, but it can be displayed in the Console view.

2.4 The offset Modifier

  The offset modifier shifts instant vector selectors and range vector selectors in time: it changes the query evaluation time on a per-selector basis, moving it back by the given amount.

  Both instant and range vector selectors are evaluated relative to the current query time. To look at the HTTP request totals as they were 5 minutes before the query evaluation time, write:

http_requests_total{} offset 5m

  The offset keyword must come right after the selector's {}:

sum(http_requests_total{method="GET"} offset 5m)   #正確
sum(http_requests_total{method="GET"}) offset 5m   #錯誤

  The same rule applies to range vector selectors: offset must come right after the []:

rate(http_requests_total[5m] offset 5m)   # correct
rate(http_requests_total[5m]) offset 5m   # invalid

  The offset modifier is mostly useful for debugging individual series; it is used less often when analyzing trends.
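
  One hedged illustrative use is a day-over-day comparison, dividing the current request rate by the rate at the same time yesterday:

rate(http_requests_total[5m]) / rate(http_requests_total[5m] offset 1d)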

3. The Four Metric Types in Prometheus

  Prometheus has four metric types: Counter, Gauge, Histogram, and Summary. These are the four core metric types provided by the Prometheus client libraries (currently available for Go, Java, Python, Ruby, and other languages). The Prometheus server, however, does not distinguish between them; it simply treats them all as untyped time series.

3.1 Counter

  A Counter represents a monotonically increasing sample value: barring a reset (such as a service or application restart), it only goes up. Counters are used for things like the number of requests served, tasks completed, or errors that occurred. A Counter mainly exposes two methods:

1) Inc()          // increment the Counter by 1
2) Add(float64)   // add the given value to the Counter; a negative value triggers a Go panic, which can crash the process

  The raw total of a counter, however, is rarely useful on its own. And never use the Counter type for values that can go down, such as the number of currently running processes or currently logged-in users.

  To make the change in the samples visible, you usually compute the growth rate, typically with PromQL functions such as rate, topk, increase, and irate:

rate(http_requests_total[5m])   # the per-second growth rate of HTTP requests, averaged over the last 5 minutes
topk(10, http_requests_total)   # the 10 HTTP request series with the highest totals in the system

Going further:

  With Prometheus you must apply rate() first and sum() afterwards; you cannot sum() first and then take rate().

  This follows from how rate() is implemented: it assumes its input is a counter, which only ever increments or resets to zero. Once you apply sum() or another aggregation, the result is no longer a counter, so rate() can no longer detect and correct for resets.
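
  A hedged sketch of the right and the wrong ordering; the second form hides counter resets inside the sum and can produce misleading results:

sum(rate(http_requests_total[5m]))     # correct: rate per series first, then aggregate
rate(sum(http_requests_total)[5m:])    # avoid: the summed series is no longer a counter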


  The increase(v range-vector) function takes a range vector, looks at the first and last samples in the range, and returns the increase between them. The query below estimates the growth rate of a Counter: it yields the average per-second increase of http_requests_total over the last 5 minutes, where 300 stands for 300 seconds.

increase(http_requests_total[5m]) / 300
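
  As a quick cross-check, this is essentially what rate computes directly; over the same 5-minute window the following expression yields the same result as increase(...) / 300:

rate(http_requests_total[5m])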

 Going further:

  The growth computed by rate and increase is prone to the long-tail effect. For example, when a traffic spike or some other problem drives CPU usage to 100%, the average growth rate over the time window may fail to reveal the problem at all.

  Why do monitoring and performance testing focus on the p95/p99 percentiles? Because of the long tail. When a few requests take 1 second or longer, a traditional average response time hides those spikes; dealing with such spikes is an important part of processing the collected data, and this is what the long-tail effect refers to. p95/p99 draw the dividing line of the long tail: for example, 99% of requests fall within some latency bound, and the remaining 1% fall outside it.
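
  As a hedged illustration, using the prometheus_http_request_duration_seconds histogram shown later in this article, a 95th-percentile latency can be estimated with histogram_quantile:

histogram_quantile(0.95, sum by (le) (rate(prometheus_http_request_duration_seconds_bucket[5m])))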


  irate(v range-vector) is the more sensitive function PromQL provides specifically for the long-tail problem. Like rate, it computes the growth rate of a range vector, but it reflects the instantaneous rate: irate uses only the last two samples in the range. This avoids the long-tail problem within the time window and responds faster to change, so graphs drawn with irate show the instantaneous behavior of the samples more clearly. irate is invoked like this:

irate(http_requests_total[5m])

Going further:

  irate is more sensitive than rate, but when analyzing long-term trends or writing alerting rules that sensitivity tends to introduce noise. For long-term trend analysis and for alerting, rate is therefore the better choice.


 3.2 Gauge

  A Gauge represents a sample value that can go up and down arbitrarily; think of it as a snapshot of a state. Gauges typically represent things like temperature or memory usage, and can also represent a "total" that rises and falls, such as the number of in-flight requests, node_memory_MemFree (the host's currently free memory), or node_memory_MemAvailable (the available memory). With Gauges, you usually want sums, averages, minimums, and maximums.

  Take node_filesystem_size_bytes from the classic Node Exporter as an example: it reports the size of each filesystem it collects and carries labels such as device, fstype, and mountpoint. To sum up the total filesystem size on each machine, use the following PromQL:

sum without(device,fstype,mountpoint)(node_filesystem_size_bytes)

  Minimum and average work the same way as sum and maximum for Gauges. Beyond these basic aggregations, Gauges are often used together with the PromQL functions predict_linear and delta.
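
  A hedged sketch of those two functions applied to gauges; cpu_temp_celsius is a hypothetical gauge, and node_filesystem_free_bytes is assumed to be the free-space metric exposed by your node_exporter version:

predict_linear(node_filesystem_free_bytes[1h], 4*3600) < 0   # will the filesystem run out of space within 4 hours?
delta(cpu_temp_celsius{host="zeus"}[2h])                     # how much the temperature changed over the last 2 hours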

3.3 Histogram

  Most of the time people reach for the average of some quantity, such as average CPU usage or average page response time. The drawback of presenting results this way is obvious: take the average response time of a system API, for example. If most requests are served within 100ms but a few take 5s, the long-tail problem is at work.

  Slow responses can be caused by a genuinely high average or by the long-tail effect, and the simplest way to tell the two apart is to partition requests by latency range: for example, count how many requests fall between 0 and 10ms, how many between 10 and 20ms, and so on. Presenting metrics as a Histogram lets us see the distribution of the samples at a glance.

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 5.1373e-05
go_gc_duration_seconds{quantile="0.25"} 9.5224e-05
go_gc_duration_seconds{quantile="0.5"} 0.000133418
go_gc_duration_seconds{quantile="0.75"} 0.000273065
go_gc_duration_seconds{quantile="1"} 1.565256115
go_gc_duration_seconds_sum 2.600561302
go_gc_duration_seconds_count 473
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 269
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.16.2"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.00471752e+08
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 8.9008069072e+10
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 5.190072e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.138419718e+09
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 0.0005366479170588193
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 2.8316152e+07
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 3.00471752e+08
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 2.73113088e+08
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 3.25197824e+08
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 2.138627e+06
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 1.33824512e+08
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 5.98310912e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.6262509307495074e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.140558345e+09
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 9600
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 3.778488e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 6.504448e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.16926496e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 2.062552e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 5.668864e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 5.668864e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 6.46069384e+08
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 14
# HELP net_conntrack_dialer_conn_attempted_total Total number of connections attempted by the given dialer a given name.
# TYPE net_conntrack_dialer_conn_attempted_total counter
net_conntrack_dialer_conn_attempted_total{dialer_name="alertmanager"} 69
net_conntrack_dialer_conn_attempted_total{dialer_name="default"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/alertmanager/0"} 44
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0"} 12
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/coredns/0"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/grafana/0"} 17
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0"} 31
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0"} 4
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1"} 8
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kubelet/0"} 33
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kubelet/1"} 30
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kubelet/2"} 28
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/node-exporter/0"} 59
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0"} 9
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0"} 14
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0"} 15
# HELP net_conntrack_dialer_conn_closed_total Total number of connections closed which originated from the dialer of a given name.
# TYPE net_conntrack_dialer_conn_closed_total counter
net_conntrack_dialer_conn_closed_total{dialer_name="alertmanager"} 30
net_conntrack_dialer_conn_closed_total{dialer_name="default"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0"} 20
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0"} 5
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/coredns/0"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/grafana/0"} 6
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0"} 16
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0"} 3
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1"} 6
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kubelet/0"} 18
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kubelet/1"} 17
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kubelet/2"} 14
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0"} 27
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0"} 7
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0"} 7
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0"} 8
# HELP net_conntrack_dialer_conn_established_total Total number of connections successfully established by the given dialer a given name.
# TYPE net_conntrack_dialer_conn_established_total counter
net_conntrack_dialer_conn_established_total{dialer_name="alertmanager"} 33
net_conntrack_dialer_conn_established_total{dialer_name="default"} 0
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/alertmanager/0"} 23
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0"} 6
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/coredns/0"} 0
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/grafana/0"} 7
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0"} 19
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0"} 0
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0"} 0
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0"} 4
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1"} 7
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kubelet/0"} 21
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kubelet/1"} 20
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kubelet/2"} 17
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/node-exporter/0"} 30
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0"} 9
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0"} 9
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0"} 9
# HELP net_conntrack_dialer_conn_failed_total Total number of connections failed to dial by the dialer a given name.
# TYPE net_conntrack_dialer_conn_failed_total counter
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="refused"} 3
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="timeout"} 33
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="unknown"} 36
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0",reason="refused"} 3
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0",reason="timeout"} 18
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0",reason="unknown"} 21
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0",reason="timeout"} 6
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0",reason="unknown"} 6
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/coredns/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/coredns/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/coredns/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/coredns/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/grafana/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/grafana/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/grafana/0",reason="timeout"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/grafana/0",reason="unknown"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0",reason="timeout"} 11
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0",reason="unknown"} 12
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1",reason="refused"} 1
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1",reason="unknown"} 1
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/0",reason="timeout"} 11
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/0",reason="unknown"} 12
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/1",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/1",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/1",reason="timeout"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/1",reason="unknown"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/2",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/2",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/2",reason="timeout"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/2",reason="unknown"} 11
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0",reason="refused"} 1
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0",reason="timeout"} 26
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0",reason="unknown"} 29
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0",reason="timeout"} 5
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0",reason="unknown"} 5
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0",reason="timeout"} 6
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0",reason="unknown"} 6
# HELP net_conntrack_listener_conn_accepted_total Total number of connections opened to the listener of a given name.
# TYPE net_conntrack_listener_conn_accepted_total counter
net_conntrack_listener_conn_accepted_total{listener_name="http"} 4231
# HELP net_conntrack_listener_conn_closed_total Total number of connections closed that were made to the listener of a given name.
# TYPE net_conntrack_listener_conn_closed_total counter
net_conntrack_listener_conn_closed_total{listener_name="http"} 4227
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 934.63
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 61
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 6.1779968e+08
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.6262298738e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.802285056e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP prometheus_api_remote_read_queries The current number of remote read queries being executed or waiting.
# TYPE prometheus_api_remote_read_queries gauge
prometheus_api_remote_read_queries 0
# HELP prometheus_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which prometheus was built.
# TYPE prometheus_build_info gauge
prometheus_build_info{branch="HEAD",goversion="go1.16.2",revision="3cafc58827d1ebd1a67749f88be4218f0bab3d8d",version="2.26.0"} 1
# HELP prometheus_config_last_reload_success_timestamp_seconds Timestamp of the last successful configuration reload.
# TYPE prometheus_config_last_reload_success_timestamp_seconds gauge
prometheus_config_last_reload_success_timestamp_seconds 1.6262257000479589e+09
# HELP prometheus_config_last_reload_successful Whether the last configuration reload attempt was successful.
# TYPE prometheus_config_last_reload_successful gauge
prometheus_config_last_reload_successful 1
# HELP prometheus_engine_queries The current number of queries being executed or waiting.
# TYPE prometheus_engine_queries gauge
prometheus_engine_queries 0
# HELP prometheus_engine_queries_concurrent_max The max number of concurrent queries.
# TYPE prometheus_engine_queries_concurrent_max gauge
prometheus_engine_queries_concurrent_max 20
# HELP prometheus_engine_query_duration_seconds Query timings
# TYPE prometheus_engine_query_duration_seconds summary
prometheus_engine_query_duration_seconds{slice="inner_eval",quantile="0.5"} 0.000172349
prometheus_engine_query_duration_seconds{slice="inner_eval",quantile="0.9"} 0.006378077
prometheus_engine_query_duration_seconds{slice="inner_eval",quantile="0.99"} 0.092900003
prometheus_engine_query_duration_seconds_sum{slice="inner_eval"} 508.9111094559962
prometheus_engine_query_duration_seconds_count{slice="inner_eval"} 122911
prometheus_engine_query_duration_seconds{slice="prepare_time",quantile="0.5"} 8.8421e-05
prometheus_engine_query_duration_seconds{slice="prepare_time",quantile="0.9"} 0.001274996
prometheus_engine_query_duration_seconds{slice="prepare_time",quantile="0.99"} 0.005844206
prometheus_engine_query_duration_seconds_sum{slice="prepare_time"} 142.2880246389999
prometheus_engine_query_duration_seconds_count{slice="prepare_time"} 122911
prometheus_engine_query_duration_seconds{slice="queue_time",quantile="0.5"} 4.857e-06
prometheus_engine_query_duration_seconds{slice="queue_time",quantile="0.9"} 1.4419e-05
prometheus_engine_query_duration_seconds{slice="queue_time",quantile="0.99"} 5.3215e-05
prometheus_engine_query_duration_seconds_sum{slice="queue_time"} 20.567446440999838
prometheus_engine_query_duration_seconds_count{slice="queue_time"} 122911
prometheus_engine_query_duration_seconds{slice="result_sort",quantile="0.5"} NaN
prometheus_engine_query_duration_seconds{slice="result_sort",quantile="0.9"} NaN
prometheus_engine_query_duration_seconds{slice="result_sort",quantile="0.99"} NaN
prometheus_engine_query_duration_seconds_sum{slice="result_sort"} 0.000177348
prometheus_engine_query_duration_seconds_count{slice="result_sort"} 35
# HELP prometheus_engine_query_log_enabled State of the query log.
# TYPE prometheus_engine_query_log_enabled gauge
prometheus_engine_query_log_enabled 0
# HELP prometheus_engine_query_log_failures_total The number of query log failures.
# TYPE prometheus_engine_query_log_failures_total counter
prometheus_engine_query_log_failures_total 0
# HELP prometheus_http_request_duration_seconds Histogram of latencies for HTTP requests.
# TYPE prometheus_http_request_duration_seconds histogram
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.2"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.4"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="3"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="8"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="20"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="60"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="120"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="+Inf"} 1
prometheus_http_request_duration_seconds_sum{handler="/"} 2.3757e-05
prometheus_http_request_duration_seconds_count{handler="/"} 1
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="0.1"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="0.2"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="0.4"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="1"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="3"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="8"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="20"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="60"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="120"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="+Inf"} 4205
prometheus_http_request_duration_seconds_sum{handler="/-/ready"} 0.044763871999999934
prometheus_http_request_duration_seconds_count{handler="/-/ready"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="0.1"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="0.2"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="0.4"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="1"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="3"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="8"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="20"} 1
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="60"} 1
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="120"} 1
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="+Inf"} 1
prometheus_http_request_duration_seconds_sum{handler="/-/reload"} 12.747278755
prometheus_http_request_duration_seconds_count{handler="/-/reload"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="0.1"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="0.2"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="0.4"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="1"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="3"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="8"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="20"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="60"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="120"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="+Inf"} 7
prometheus_http_request_duration_seconds_sum{handler="/api/v1/label/:name/values"} 0.056257193999999996
prometheus_http_request_duration_seconds_count{handler="/api/v1/label/:name/values"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="0.1"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="0.2"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="0.4"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="1"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="3"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="8"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="20"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="60"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="120"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="+Inf"} 76
prometheus_http_request_duration_seconds_sum{handler="/api/v1/query"} 0.12029475700000002
prometheus_http_request_duration_seconds_count{handler="/api/v1/query"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="0.1"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="0.2"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="0.4"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="1"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="3"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="8"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="20"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="60"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="120"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="+Inf"} 38
prometheus_http_request_duration_seconds_sum{handler="/api/v1/query_range"} 0.06583801699999998
prometheus_http_request_duration_seconds_count{handler="/api/v1/query_range"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="0.1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="0.2"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="0.4"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="3"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="8"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="20"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="60"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="120"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="+Inf"} 1
prometheus_http_request_duration_seconds_sum{handler="/api/v1/targets"} 0.006790512
prometheus_http_request_duration_seconds_count{handler="/api/v1/targets"} 1
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="0.1"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="0.2"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="0.4"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="1"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="3"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="8"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="20"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="60"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="120"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="+Inf"} 4
prometheus_http_request_duration_seconds_sum{handler="/favicon.ico"} 0.003068569
prometheus_http_request_duration_seconds_count{handler="/favicon.ico"} 4
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="0.1"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="0.2"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="0.4"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="1"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="3"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="8"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="20"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="60"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="120"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="+Inf"} 6
prometheus_http_request_duration_seconds_sum{handler="/graph"} 0.001303871
prometheus_http_request_duration_seconds_count{handler="/graph"} 6
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="0.1"} 1395
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="0.2"} 1396
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="0.4"} 1396
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="1"} 1396
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="3"} 1396
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="8"} 1397
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="20"} 1397
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="60"} 1398
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="120"} 1398
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="+Inf"} 1398
prometheus_http_request_duration_seconds_sum{handler="/metrics"} 41.05895542500007
prometheus_http_request_duration_seconds_count{handler="/metrics"} 1398
# HELP prometheus_http_requests_total Counter of HTTP requests.
# TYPE prometheus_http_requests_total counter
prometheus_http_requests_total{code="200",handler="/-/ready"} 4202
prometheus_http_requests_total{code="200",handler="/-/reload"} 1
prometheus_http_requests_total{code="200",handler="/api/v1/label/:name/values"} 7
prometheus_http_requests_total{code="200",handler="/api/v1/query"} 73
prometheus_http_requests_total{code="200",handler="/api/v1/query_range"} 35
prometheus_http_requests_total{code="200",handler="/api/v1/targets"} 1
prometheus_http_requests_total{code="200",handler="/favicon.ico"} 4
prometheus_http_requests_total{code="200",handler="/graph"} 6
prometheus_http_requests_total{code="200",handler="/metrics"} 1398
prometheus_http_requests_total{code="302",handler="/"} 1
prometheus_http_requests_total{code="400",handler="/api/v1/query"} 3
prometheus_http_requests_total{code="400",handler="/api/v1/query_range"} 3
prometheus_http_requests_total{code="503",handler="/-/ready"} 3
# HELP prometheus_http_response_size_bytes Histogram of response size for HTTP requests.
# TYPE prometheus_http_response_size_bytes histogram
prometheus_http_response_size_bytes_bucket{handler="/",le="100"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1000"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="10000"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="100000"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+06"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+07"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+08"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+09"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="+Inf"} 1
prometheus_http_response_size_bytes_sum{handler="/"} 29
prometheus_http_response_size_bytes_count{handler="/"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="100"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1000"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="10000"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="100000"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1e+06"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1e+07"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1e+08"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1e+09"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="+Inf"} 4205
prometheus_http_response_size_bytes_sum{handler="/-/ready"} 88299
prometheus_http_response_size_bytes_count{handler="/-/ready"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="100"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1000"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="10000"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="100000"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1e+06"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1e+07"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1e+08"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1e+09"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="+Inf"} 1
prometheus_http_response_size_bytes_sum{handler="/-/reload"} 0
prometheus_http_response_size_bytes_count{handler="/-/reload"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="10000"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="100000"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1e+06"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1e+07"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1e+08"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1e+09"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="+Inf"} 7
prometheus_http_response_size_bytes_sum{handler="/api/v1/label/:name/values"} 49810
prometheus_http_response_size_bytes_count{handler="/api/v1/label/:name/values"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="100"} 21
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1000"} 69
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="10000"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="100000"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1e+06"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1e+07"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1e+08"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1e+09"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="+Inf"} 76
prometheus_http_response_size_bytes_sum{handler="/api/v1/query"} 31427
prometheus_http_response_size_bytes_count{handler="/api/v1/query"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="100"} 31
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1000"} 35
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="10000"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="100000"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1e+06"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1e+07"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1e+08"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1e+09"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="+Inf"} 38
prometheus_http_response_size_bytes_sum{handler="/api/v1/query_range"} 14573
prometheus_http_response_size_bytes_count{handler="/api/v1/query_range"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="10000"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="100000"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1e+06"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1e+07"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1e+08"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1e+09"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="+Inf"} 1
prometheus_http_response_size_bytes_sum{handler="/api/v1/targets"} 5249
prometheus_http_response_size_bytes_count{handler="/api/v1/targets"} 1
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="10000"} 0
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="100000"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1e+06"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1e+07"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1e+08"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1e+09"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="+Inf"} 4
prometheus_http_response_size_bytes_sum{handler="/favicon.ico"} 60344
prometheus_http_response_size_bytes_count{handler="/favicon.ico"} 4
prometheus_http_response_size_bytes_bucket{handler="/graph",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/graph",le="10000"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="100000"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1e+06"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1e+07"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1e+08"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1e+09"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="+Inf"} 6
prometheus_http_response_size_bytes_sum{handler="/graph"} 13818
prometheus_http_response_size_bytes_count{handler="/graph"} 6
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="10000"} 1
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="100000"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1e+06"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1e+07"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1e+08"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1e+09"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="+Inf"} 1398
prometheus_http_response_size_bytes_sum{handler="/metrics"} 1.8615e+07
prometheus_http_response_size_bytes_count{handler="/metrics"} 1398
# HELP prometheus_notifications_alertmanagers_discovered The number of alertmanagers discovered and active.
# TYPE prometheus_notifications_alertmanagers_discovered gauge
prometheus_notifications_alertmanagers_discovered 3
# HELP prometheus_notifications_dropped_total Total number of alerts dropped due to errors when sending to Alertmanager.
# TYPE prometheus_notifications_dropped_total counter
prometheus_notifications_dropped_total 33
# HELP prometheus_notifications_errors_total Total number of errors sending alert notifications.
# TYPE prometheus_notifications_errors_total counter
prometheus_notifications_errors_total{alertmanager="http://172.162.195.35:9093/api/v2/alerts"} 23
prometheus_notifications_errors_total{alertmanager="http://172.162.195.36:9093/api/v2/alerts"} 19
prometheus_notifications_errors_total{alertmanager="http://172.162.195.37:9093/api/v2/alerts"} 20
# HELP prometheus_notifications_latency_seconds Latency quantiles for sending alert notifications.
# TYPE prometheus_notifications_latency_seconds summary
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.35:9093/api/v2/alerts",quantile="0.5"} 0.001683286
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.35:9093/api/v2/alerts",quantile="0.9"} 0.002870164
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.35:9093/api/v2/alerts",quantile="0.99"} 0.00966798
prometheus_notifications_latency_seconds_sum{alertmanager="http://172.162.195.35:9093/api/v2/alerts"} 284.101079177001
prometheus_notifications_latency_seconds_count{alertmanager="http://172.162.195.35:9093/api/v2/alerts"} 1547
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.36:9093/api/v2/alerts",quantile="0.5"} 0.001656781
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.36:9093/api/v2/alerts",quantile="0.9"} 0.002943216
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.36:9093/api/v2/alerts",quantile="0.99"} 0.010048782
prometheus_notifications_latency_seconds_sum{alertmanager="http://172.162.195.36:9093/api/v2/alerts"} 257.0053810620001
prometheus_notifications_latency_seconds_count{alertmanager="http://172.162.195.36:9093/api/v2/alerts"} 1547
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.37:9093/api/v2/alerts",quantile="0.5"} 0.001654836
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.37:9093/api/v2/alerts",quantile="0.9"} 0.002892869
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.37:9093/api/v2/alerts",quantile="0.99"} 0.010021074
prometheus_notifications_latency_seconds_sum{alertmanager="http://172.162.195.37:9093/api/v2/alerts"} 259.53750336499985
prometheus_notifications_latency_seconds_count{alertmanager="http://172.162.195.37:9093/api/v2/alerts"} 1547
# HELP prometheus_notifications_queue_capacity The capacity of the alert notifications queue.
# TYPE prometheus_notifications_queue_capacity gauge
prometheus_notifications_queue_capacity 10000
# HELP prometheus_notifications_queue_length The number of alert notifications in the queue.
# TYPE prometheus_notifications_queue_length gauge
prometheus_notifications_queue_length 0
# HELP prometheus_notifications_sent_total Total number of alerts sent.
# TYPE prometheus_notifications_sent_total counter
prometheus_notifications_sent_total{alertmanager="http://172.162.195.35:9093/api/v2/alerts"} 3286
prometheus_notifications_sent_total{alertmanager="http://172.162.195.36:9093/api/v2/alerts"} 3286
prometheus_notifications_sent_total{alertmanager="http://172.162.195.37:9093/api/v2/alerts"} 3286
# HELP prometheus_remote_storage_highest_timestamp_in_seconds Highest timestamp that has come into the remote storage via the Appender interface, in seconds since epoch.
# TYPE prometheus_remote_storage_highest_timestamp_in_seconds gauge
prometheus_remote_storage_highest_timestamp_in_seconds 1.626250946e+09
# HELP prometheus_remote_storage_samples_in_total Samples in to remote storage, compare to samples out for queue managers.
# TYPE prometheus_remote_storage_samples_in_total counter
prometheus_remote_storage_samples_in_total 4.855752e+07
# HELP prometheus_remote_storage_string_interner_zero_reference_releases_total The number of times release has been called for strings that are not interned.
# TYPE prometheus_remote_storage_string_interner_zero_reference_releases_total counter
prometheus_remote_storage_string_interner_zero_reference_releases_total 0
# HELP prometheus_rule_evaluation_duration_seconds The duration for a rule to execute.
# TYPE prometheus_rule_evaluation_duration_seconds summary
prometheus_rule_evaluation_duration_seconds{quantile="0.5"} 0.000405493
prometheus_rule_evaluation_duration_seconds{quantile="0.9"} 0.008879674
prometheus_rule_evaluation_duration_seconds{quantile="0.99"} 0.096290033
prometheus_rule_evaluation_duration_seconds_sum 1056.0493569379776
prometheus_rule_evaluation_duration_seconds_count 122803
# HELP prometheus_rule_evaluation_failures_total The total number of rule evaluation failures.
# TYPE prometheus_rule_evaluation_failures_total counter
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 0
# HELP prometheus_rule_evaluations_total The total number of rule evaluations.
# TYPE prometheus_rule_evaluations_total counter
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 5608
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 1404
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 1404
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 4212
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 701
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 1404
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 7020
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 3510
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 2804
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 14742
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 6309
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 2106
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 10530
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 5608
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 2106
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 1402
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 4212
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 702
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 9126
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 701
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 2103
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 11232
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 7711
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 11232
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 4914
# HELP prometheus_rule_group_duration_seconds The duration of rule group evaluations.
# TYPE prometheus_rule_group_duration_seconds summary
prometheus_rule_group_duration_seconds{quantile="0.01"} 0.000344771
prometheus_rule_group_duration_seconds{quantile="0.05"} 0.000446823
prometheus_rule_group_duration_seconds{quantile="0.5"} 0.002459279
prometheus_rule_group_duration_seconds{quantile="0.9"} 0.016124292
prometheus_rule_group_duration_seconds{quantile="0.99"} 0.781796405
prometheus_rule_group_duration_seconds_sum 1066.3853881880052
prometheus_rule_group_duration_seconds_count 16956
# HELP prometheus_rule_group_interval_seconds The interval of a rule group.
# TYPE prometheus_rule_group_interval_seconds gauge
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 180
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 30
# HELP prometheus_rule_group_iterations_missed_total The total number of rule group evaluations missed due to slow rule group evaluation.
# TYPE prometheus_rule_group_iterations_missed_total counter
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 23
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 139
# HELP prometheus_rule_group_iterations_total The total number of scheduled rule group evaluations, whether executed or missed.
# TYPE prometheus_rule_group_iterations_total counter
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 140
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 841
# HELP prometheus_rule_group_last_duration_seconds The duration of the last rule group evaluation.
# TYPE prometheus_rule_group_last_duration_seconds gauge
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 0.002209278
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 0.005564863
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 0.0008587
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 0.008148938
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 0.000951374
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 0.001208625
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 0.014632362
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 0.199707258
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 0.001216636
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 0.715759288
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 0.000844075
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 0.001544574
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 0.00338814
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 0.00318991
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 0.002515068
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 0.002272828
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 0.01248441
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 0.000350055
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 0.004622683
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 0.000407063
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 0.003120466
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 0.017802478
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 0.004361154
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 0.003392752
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 0.001903696
# HELP prometheus_rule_group_last_evaluation_samples The number of samples returned during the last rule group evaluation.
# TYPE prometheus_rule_group_last_evaluation_samples gauge
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 13
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 13
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 423
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 59
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 457
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 9
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 16
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 36
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 6
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 57
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 0
# HELP prometheus_rule_group_last_evaluation_timestamp_seconds The timestamp of the last rule group evaluation in seconds.
# TYPE prometheus_rule_group_last_evaluation_timestamp_seconds gauge
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 1.626250918174826e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 1.6262509390158134e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 1.6262509428751833e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 1.6262509336427326e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 1.6262509204856255e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 1.6262509298138967e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 1.626250937404332e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 1.6262508789433932e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 1.626250920584692e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 1.626250930246908e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 1.6262509245451539e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 1.6262509473252597e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 1.6262509436568923e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 1.6262509189774363e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 1.6262509344838426e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 1.6262509236114223e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 1.626250928798736e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 1.6262509335774076e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 1.6262509447650206e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 1.6262509202492197e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 1.6262509268543773e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 1.6262509465485084e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 1.6262509234589508e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 1.6262509281589828e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 1.626250931304166e+09
# HELP prometheus_rule_group_rules The number of rules.
# TYPE prometheus_rule_group_rules gauge
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 8
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 2
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 2
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 6
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 1
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 2
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 10
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 30
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 4
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 21
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 9
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 3
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 15
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 8
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 3
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 2
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 6
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 1
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 13
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 1
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 3
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 16
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 11
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 16
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 7
# HELP prometheus_sd_consul_rpc_duration_seconds The duration of a Consul RPC call in seconds.
# TYPE prometheus_sd_consul_rpc_duration_seconds summary
prometheus_sd_consul_rpc_duration_seconds{call="service",endpoint="catalog",quantile="0.5"} NaN
prometheus_sd_consul_rpc_duration_seconds{call="service",endpoint="catalog",quantile="0.9"} NaN
prometheus_sd_consul_rpc_duration_seconds{call="service",endpoint="catalog",quantile="0.99"} NaN
prometheus_sd_consul_rpc_duration_seconds_sum{call="service",endpoint="catalog"} 0
prometheus_sd_consul_rpc_duration_seconds_count{call="service",endpoint="catalog"} 0
prometheus_sd_consul_rpc_duration_seconds{call="services",endpoint="catalog",quantile="0.5"} NaN
prometheus_sd_consul_rpc_duration_seconds{call="services",endpoint="catalog",quantile="0.9"} NaN
prometheus_sd_consul_rpc_duration_seconds{call="services",endpoint="catalog",quantile="0.99"} NaN
prometheus_sd_consul_rpc_duration_seconds_sum{call="services",endpoint="catalog"} 0
prometheus_sd_consul_rpc_duration_seconds_count{call="services",endpoint="catalog"} 0
# HELP prometheus_sd_consul_rpc_failures_total The number of Consul RPC call failures.
# TYPE prometheus_sd_consul_rpc_failures_total counter
prometheus_sd_consul_rpc_failures_total 0
# HELP prometheus_sd_discovered_targets Current number of discovered targets.
# TYPE prometheus_sd_discovered_targets gauge
prometheus_sd_discovered_targets{config="config-0",name="notify"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/alertmanager/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/blackbox-exporter/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/coredns/0",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/grafana/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-apiserver/0",name="scrape"} 3
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-controller-manager/0",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-scheduler/0",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-state-metrics/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-state-metrics/1",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kubelet/0",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kubelet/1",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kubelet/2",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/node-exporter/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/prometheus-adapter/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/prometheus-k8s/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/prometheus-operator/0",name="scrape"} 44
# HELP prometheus_sd_dns_lookup_failures_total The number of DNS-SD lookup failures.
# TYPE prometheus_sd_dns_lookup_failures_total counter
prometheus_sd_dns_lookup_failures_total 0
# HELP prometheus_sd_dns_lookups_total The number of DNS-SD lookups.
# TYPE prometheus_sd_dns_lookups_total counter
prometheus_sd_dns_lookups_total 0
# HELP prometheus_sd_failed_configs Current number of service discovery configurations that failed to load.
# TYPE prometheus_sd_failed_configs gauge
prometheus_sd_failed_configs{name="notify"} 0
prometheus_sd_failed_configs{name="scrape"} 0
# HELP prometheus_sd_file_read_errors_total The number of File-SD read errors.
# TYPE prometheus_sd_file_read_errors_total counter
prometheus_sd_file_read_errors_total 0
# HELP prometheus_sd_file_scan_duration_seconds The duration of the File-SD scan in seconds.
# TYPE prometheus_sd_file_scan_duration_seconds summary
prometheus_sd_file_scan_duration_seconds{quantile="0.5"} NaN
prometheus_sd_file_scan_duration_seconds{quantile="0.9"} NaN
prometheus_sd_file_scan_duration_seconds{quantile="0.99"} NaN
prometheus_sd_file_scan_duration_seconds_sum 0
prometheus_sd_file_scan_duration_seconds_count 0
# HELP prometheus_sd_kubernetes_events_total The number of Kubernetes events handled.
# TYPE prometheus_sd_kubernetes_events_total counter
prometheus_sd_kubernetes_events_total{event="add",role="endpoints"} 48
prometheus_sd_kubernetes_events_total{event="add",role="endpointslice"} 0
prometheus_sd_kubernetes_events_total{event="add",role="ingress"} 0
prometheus_sd_kubernetes_events_total{event="add",role="node"} 0
prometheus_sd_kubernetes_events_total{event="add",role="pod"} 0
prometheus_sd_kubernetes_events_total{event="add",role="service"} 48
prometheus_sd_kubernetes_events_total{event="delete",role="endpoints"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="endpointslice"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="ingress"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="node"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="pod"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="service"} 0
prometheus_sd_kubernetes_events_total{event="update",role="endpoints"} 1003
prometheus_sd_kubernetes_events_total{event="update",role="endpointslice"} 0
prometheus_sd_kubernetes_events_total{event="update",role="ingress"} 0
prometheus_sd_kubernetes_events_total{event="update",role="node"} 0
prometheus_sd_kubernetes_events_total{event="update",role="pod"} 0
prometheus_sd_kubernetes_events_total{event="update",role="service"} 851
# HELP prometheus_sd_kubernetes_http_request_duration_seconds Summary of latencies for HTTP requests to the Kubernetes API by endpoint.
# TYPE prometheus_sd_kubernetes_http_request_duration_seconds summary
prometheus_sd_kubernetes_http_request_duration_seconds_sum{endpoint="/api/v1/namespaces/%7Bnamespace%7D/endpoints"} 121.74581206600004
prometheus_sd_kubernetes_http_request_duration_seconds_count{endpoint="/api/v1/namespaces/%7Bnamespace%7D/endpoints"} 40
prometheus_sd_kubernetes_http_request_duration_seconds_sum{endpoint="/api/v1/namespaces/%7Bnamespace%7D/pods"} 41.886705843
prometheus_sd_kubernetes_http_request_duration_seconds_count{endpoint="/api/v1/namespaces/%7Bnamespace%7D/pods"} 44
prometheus_sd_kubernetes_http_request_duration_seconds_sum{endpoint="/api/v1/namespaces/%7Bnamespace%7D/services"} 0.23398366799999998
prometheus_sd_kubernetes_http_request_duration_seconds_count{endpoint="/api/v1/namespaces/%7Bnamespace%7D/services"} 23
# HELP prometheus_sd_kubernetes_http_request_total Total number of HTTP requests to the Kubernetes API by status code.
# TYPE prometheus_sd_kubernetes_http_request_total counter
prometheus_sd_kubernetes_http_request_total{status_code="200"} 820
# HELP prometheus_sd_kubernetes_workqueue_depth Current depth of the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_depth gauge
prometheus_sd_kubernetes_workqueue_depth{queue_name="endpoints"} 24
# HELP prometheus_sd_kubernetes_workqueue_items_total Total number of items added to the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_items_total counter
prometheus_sd_kubernetes_workqueue_items_total{queue_name="endpoints"} 1831
# HELP prometheus_sd_kubernetes_workqueue_latency_seconds How long an item stays in the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_latency_seconds summary
prometheus_sd_kubernetes_workqueue_latency_seconds_sum{queue_name="endpoints"} 3.3407492769999902
prometheus_sd_kubernetes_workqueue_latency_seconds_count{queue_name="endpoints"} 1807
# HELP prometheus_sd_kubernetes_workqueue_longest_running_processor_seconds Duration of the longest running processor in the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_longest_running_processor_seconds gauge
prometheus_sd_kubernetes_workqueue_longest_running_processor_seconds{queue_name="endpoints"} 0
# HELP prometheus_sd_kubernetes_workqueue_unfinished_work_seconds How long an item has remained unfinished in the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_unfinished_work_seconds gauge
prometheus_sd_kubernetes_workqueue_unfinished_work_seconds{queue_name="endpoints"} 0
# HELP prometheus_sd_kubernetes_workqueue_work_duration_seconds How long processing an item from the work queue takes.
# TYPE prometheus_sd_kubernetes_workqueue_work_duration_seconds summary
prometheus_sd_kubernetes_workqueue_work_duration_seconds_sum{queue_name="endpoints"} 0.28262065699999955
prometheus_sd_kubernetes_workqueue_work_duration_seconds_count{queue_name="endpoints"} 1807
# HELP prometheus_sd_received_updates_total Total number of update events received from the SD providers.
# TYPE prometheus_sd_received_updates_total counter
prometheus_sd_received_updates_total{name="notify"} 773
prometheus_sd_received_updates_total{name="scrape"} 1034
# HELP prometheus_sd_updates_delayed_total Total number of update events that couldn't be sent immediately.
# TYPE prometheus_sd_updates_delayed_total counter
prometheus_sd_updates_delayed_total{name="notify"} 11
# HELP prometheus_sd_updates_total Total number of update events sent to the SD consumers.
# TYPE prometheus_sd_updates_total counter
prometheus_sd_updates_total{name="notify"} 77
prometheus_sd_updates_total{name="scrape"} 118
# HELP prometheus_target_interval_length_seconds Actual intervals between scrapes.
# TYPE prometheus_target_interval_length_seconds summary
prometheus_target_interval_length_seconds{interval="15s",quantile="0.01"} 14.998068585
prometheus_target_interval_length_seconds{interval="15s",quantile="0.05"} 14.998389585
prometheus_target_interval_length_seconds{interval="15s",quantile="0.5"} 15.000114951
prometheus_target_interval_length_seconds{interval="15s",quantile="0.9"} 15.001035609
prometheus_target_interval_length_seconds{interval="15s",quantile="0.99"} 15.002019479
prometheus_target_interval_length_seconds_sum{interval="15s"} 84120.07245244607
prometheus_target_interval_length_seconds_count{interval="15s"} 5606
prometheus_target_interval_length_seconds{interval="30s",quantile="0.01"} 29.998144878
prometheus_target_interval_length_seconds{interval="30s",quantile="0.05"} 29.998508468
prometheus_target_interval_length_seconds{interval="30s",quantile="0.5"} 30.000040285
prometheus_target_interval_length_seconds{interval="30s",quantile="0.9"} 30.001209116
prometheus_target_interval_length_seconds{interval="30s",quantile="0.99"} 30.00179937
prometheus_target_interval_length_seconds_sum{interval="30s"} 483375.48618420045
prometheus_target_interval_length_seconds_count{interval="30s"} 16112
# HELP prometheus_target_metadata_cache_bytes The number of bytes that are currently used for storing metric metadata in the cache
# TYPE prometheus_target_metadata_cache_bytes gauge
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 15804
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 2108
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/coredns/0"} 0
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/grafana/0"} 3879
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 25597
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 0
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 0
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 12146
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 1964
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kubelet/0"} 20265
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kubelet/1"} 8835
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kubelet/2"} 504
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 45530
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 7366
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 20502
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 3207
# HELP prometheus_target_metadata_cache_entries Total number of metric metadata entries in the cache
# TYPE prometheus_target_metadata_cache_entries gauge
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 285
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 40
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/coredns/0"} 0
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/grafana/0"} 84
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 328
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 0
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 0
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 217
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 38
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kubelet/0"} 287
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kubelet/1"} 174
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kubelet/2"} 6
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 935
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 122
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 352
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 56
# HELP prometheus_target_scrape_pool_exceeded_target_limit_total Total number of times scrape pools hit the target limit, during sync or config reload.
# TYPE prometheus_target_scrape_pool_exceeded_target_limit_total counter
prometheus_target_scrape_pool_exceeded_target_limit_total 0
# HELP prometheus_target_scrape_pool_reloads_failed_total Total number of failed scrape pool reloads.
# TYPE prometheus_target_scrape_pool_reloads_failed_total counter
prometheus_target_scrape_pool_reloads_failed_total 0
# HELP prometheus_target_scrape_pool_reloads_total Total number of scrape pool reloads.
# TYPE prometheus_target_scrape_pool_reloads_total counter
prometheus_target_scrape_pool_reloads_total 0
# HELP prometheus_target_scrape_pool_sync_total Total number of syncs that were executed on a scrape pool.
# TYPE prometheus_target_scrape_pool_sync_total counter
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/coredns/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/grafana/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kubelet/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kubelet/1"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kubelet/2"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 118
# HELP prometheus_target_scrape_pool_targets Current number of targets in this scrape pool.
# TYPE prometheus_target_scrape_pool_targets gauge
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 1
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/coredns/0"} 0
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/grafana/0"} 1
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 0
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 0
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 1
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 1
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kubelet/0"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kubelet/1"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kubelet/2"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 2
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 2
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 1
# HELP prometheus_target_scrape_pools_failed_total Total number of scrape pool creations that failed.
# TYPE prometheus_target_scrape_pools_failed_total counter
prometheus_target_scrape_pools_failed_total 0
# HELP prometheus_target_scrape_pools_total Total number of scrape pool creation attempts.
# TYPE prometheus_target_scrape_pools_total counter
prometheus_target_scrape_pools_total 16
# HELP prometheus_target_scrapes_cache_flush_forced_total How many times a scrape cache was flushed due to getting big while scrapes are failing.
# TYPE prometheus_target_scrapes_cache_flush_forced_total counter
prometheus_target_scrapes_cache_flush_forced_total 0
# HELP prometheus_target_scrapes_exceeded_sample_limit_total Total number of scrapes that hit the sample limit and were rejected.
# TYPE prometheus_target_scrapes_exceeded_sample_limit_total counter
prometheus_target_scrapes_exceeded_sample_limit_total 0
# HELP prometheus_target_scrapes_exemplar_out_of_order_total Total number of exemplar rejected due to not being out of the expected order.
# TYPE prometheus_target_scrapes_exemplar_out_of_order_total counter
prometheus_target_scrapes_exemplar_out_of_order_total 0
# HELP prometheus_target_scrapes_sample_duplicate_timestamp_total Total number of samples rejected due to duplicate timestamps but different values.
# TYPE prometheus_target_scrapes_sample_duplicate_timestamp_total counter
prometheus_target_scrapes_sample_duplicate_timestamp_total 0
# HELP prometheus_target_scrapes_sample_out_of_bounds_total Total number of samples rejected due to timestamp falling outside of the time bounds.
# TYPE prometheus_target_scrapes_sample_out_of_bounds_total counter
prometheus_target_scrapes_sample_out_of_bounds_total 0
# HELP prometheus_target_scrapes_sample_out_of_order_total Total number of samples rejected due to not being out of the expected order.
# TYPE prometheus_target_scrapes_sample_out_of_order_total counter
prometheus_target_scrapes_sample_out_of_order_total 0
# HELP prometheus_target_sync_length_seconds Actual interval to sync the scrape pool.
# TYPE prometheus_target_sync_length_seconds summary
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.01"} 0.001180311
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.05"} 0.001180311
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.5"} 0.001323402
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.9"} 0.002732891
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.99"} 0.002732891
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 0.31129993499999997
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.01"} 0.001118503
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.05"} 0.001118503
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.5"} 0.001288263
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.9"} 0.003551095
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.99"} 0.003551095
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 0.35547638200000015
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.01"} 0.00013841
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.05"} 0.00013841
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.5"} 0.00017262
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.9"} 0.000504102
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.99"} 0.000504102
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/coredns/0"} 0.03321076300000001
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/coredns/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.01"} 0.001157473
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.05"} 0.001157473
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.5"} 0.001196569
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.9"} 0.009884538
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.99"} 0.009884538
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/grafana/0"} 0.35717074400000015
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/grafana/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.01"} 0.000124401
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.05"} 0.000124401
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.5"} 0.00019638
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.9"} 0.000301557
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.99"} 0.000301557
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 0.024725554999999993
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.01"} 0.000182843
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.05"} 0.000182843
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.5"} 0.000222847
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.9"} 0.000363633
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.99"} 0.000363633
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 0.10367830199999997
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.01"} 0.000162849
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.05"} 0.000162849
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.5"} 0.000171137
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.9"} 0.001299137
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.99"} 0.001299137
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 0.027109514999999994
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.01"} 0.001387123
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.05"} 0.001387123
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.5"} 0.00143535
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.9"} 0.003176553
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.99"} 0.003176553
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 0.24503347199999986
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.01"} 0.00112645
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.05"} 0.00112645
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.5"} 0.001250423
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.9"} 0.004949593
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.99"} 0.004949593
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 0.3167213809999998
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.01"} 0.000263929
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.05"} 0.000263929
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.5"} 0.000290283
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.9"} 0.000508372
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.99"} 0.000508372
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kubelet/0"} 0.1427956660000001
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kubelet/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.01"} 0.000234882
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.05"} 0.000234882
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.5"} 0.000332774
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.9"} 0.001499824
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.99"} 0.001499824
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kubelet/1"} 0.13505725500000001
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kubelet/1"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.01"} 0.000241385
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.05"} 0.000241385
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.5"} 0.000358913
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.9"} 0.000496108
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.99"} 0.000496108
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kubelet/2"} 0.1453281079999999
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kubelet/2"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.01"} 0.001439707
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.05"} 0.001439707
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.5"} 0.001480579
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.9"} 0.003830324
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.99"} 0.003830324
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 0.3816530239999999
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.01"} 0.001664772
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.05"} 0.001664772
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.5"} 0.001780822
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.9"} 0.007717476
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.99"} 0.007717476
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 0.33285981600000003
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.01"} 0.001094397
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.05"} 0.001094397
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.5"} 0.001297703
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.9"} 0.002738727
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.99"} 0.002738727
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 0.34355014900000014
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.01"} 0.00107138
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.05"} 0.00107138
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.5"} 0.001113514
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.9"} 0.002571519
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.99"} 0.002571519
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 0.233227486
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 118
# HELP prometheus_template_text_expansion_failures_total The total number of template text expansion failures.
# TYPE prometheus_template_text_expansion_failures_total counter
prometheus_template_text_expansion_failures_total 0
# HELP prometheus_template_text_expansions_total The total number of template text expansions.
# TYPE prometheus_template_text_expansions_total counter
prometheus_template_text_expansions_total 49562
# HELP prometheus_treecache_watcher_goroutines The current number of watcher goroutines.
# TYPE prometheus_treecache_watcher_goroutines gauge
prometheus_treecache_watcher_goroutines 0
# HELP prometheus_treecache_zookeeper_failures_total The total number of ZooKeeper failures.
# TYPE prometheus_treecache_zookeeper_failures_total counter
prometheus_treecache_zookeeper_failures_total 0
# HELP prometheus_tsdb_blocks_loaded Number of currently loaded data blocks
# TYPE prometheus_tsdb_blocks_loaded gauge
prometheus_tsdb_blocks_loaded 6
# HELP prometheus_tsdb_checkpoint_creations_failed_total Total number of checkpoint creations that failed.
# TYPE prometheus_tsdb_checkpoint_creations_failed_total counter
prometheus_tsdb_checkpoint_creations_failed_total 0
# HELP prometheus_tsdb_checkpoint_creations_total Total number of checkpoint creations attempted.
# TYPE prometheus_tsdb_checkpoint_creations_total counter
prometheus_tsdb_checkpoint_creations_total 2
# HELP prometheus_tsdb_checkpoint_deletions_failed_total Total number of checkpoint deletions that failed.
# TYPE prometheus_tsdb_checkpoint_deletions_failed_total counter
prometheus_tsdb_checkpoint_deletions_failed_total 0
# HELP prometheus_tsdb_checkpoint_deletions_total Total number of checkpoint deletions attempted.
# TYPE prometheus_tsdb_checkpoint_deletions_total counter
prometheus_tsdb_checkpoint_deletions_total 2
# HELP prometheus_tsdb_compaction_chunk_range_seconds Final time range of chunks on their first compaction
# TYPE prometheus_tsdb_compaction_chunk_range_seconds histogram
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="100"} 63
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="400"} 63
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="1600"} 63
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="6400"} 63
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="25600"} 77
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="102400"} 659
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="409600"} 1342
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="1.6384e+06"} 63035
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="6.5536e+06"} 445464
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="2.62144e+07"} 445559
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="+Inf"} 445559
prometheus_tsdb_compaction_chunk_range_seconds_sum 1.241090818295e+12
prometheus_tsdb_compaction_chunk_range_seconds_count 445559
# HELP prometheus_tsdb_compaction_chunk_samples Final number of samples on their first compaction
# TYPE prometheus_tsdb_compaction_chunk_samples histogram
prometheus_tsdb_compaction_chunk_samples_bucket{le="4"} 680
prometheus_tsdb_compaction_chunk_samples_bucket{le="6"} 812
prometheus_tsdb_compaction_chunk_samples_bucket{le="9"} 872
prometheus_tsdb_compaction_chunk_samples_bucket{le="13.5"} 1420
prometheus_tsdb_compaction_chunk_samples_bucket{le="20.25"} 54207
prometheus_tsdb_compaction_chunk_samples_bucket{le="30.375"} 56268
prometheus_tsdb_compaction_chunk_samples_bucket{le="45.5625"} 57798
prometheus_tsdb_compaction_chunk_samples_bucket{le="68.34375"} 60661
prometheus_tsdb_compaction_chunk_samples_bucket{le="102.515625"} 118251
prometheus_tsdb_compaction_chunk_samples_bucket{le="153.7734375"} 442730
prometheus_tsdb_compaction_chunk_samples_bucket{le="230.66015625"} 445559
prometheus_tsdb_compaction_chunk_samples_bucket{le="345.990234375"} 445559
prometheus_tsdb_compaction_chunk_samples_bucket{le="+Inf"} 445559
prometheus_tsdb_compaction_chunk_samples_sum 4.4569207e+07
prometheus_tsdb_compaction_chunk_samples_count 445559
# HELP prometheus_tsdb_compaction_chunk_size_bytes Final size of chunks on their first compaction
# TYPE prometheus_tsdb_compaction_chunk_size_bytes histogram
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="32"} 37379
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="48"} 53055
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="72"} 162893
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="108"} 278966
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="162"} 358962
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="243"} 388819
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="364.5"} 407643
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="546.75"} 426213
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="820.125"} 433195
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="1230.1875"} 445144
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="1845.28125"} 445559
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="2767.921875"} 445559
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="+Inf"} 445559
prometheus_tsdb_compaction_chunk_size_bytes_sum 6.5486443e+07
prometheus_tsdb_compaction_chunk_size_bytes_count 445559
# HELP prometheus_tsdb_compaction_duration_seconds Duration of compaction runs
# TYPE prometheus_tsdb_compaction_duration_seconds histogram
prometheus_tsdb_compaction_duration_seconds_bucket{le="1"} 1
prometheus_tsdb_compaction_duration_seconds_bucket{le="2"} 3
prometheus_tsdb_compaction_duration_seconds_bucket{le="4"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="8"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="16"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="32"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="64"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="128"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="256"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="512"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="1024"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="2048"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="4096"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="8192"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="+Inf"} 4
prometheus_tsdb_compaction_duration_seconds_sum 5.190013199000001
prometheus_tsdb_compaction_duration_seconds_count 4
# HELP prometheus_tsdb_compaction_populating_block Set to 1 when a block is currently being written to the disk.
# TYPE prometheus_tsdb_compaction_populating_block gauge
prometheus_tsdb_compaction_populating_block 0
# HELP prometheus_tsdb_compactions_failed_total Total number of compactions that failed for the partition.
# TYPE prometheus_tsdb_compactions_failed_total counter
prometheus_tsdb_compactions_failed_total 0
# HELP prometheus_tsdb_compactions_skipped_total Total number of skipped compactions due to disabled auto compaction.
# TYPE prometheus_tsdb_compactions_skipped_total counter
prometheus_tsdb_compactions_skipped_total 0
# HELP prometheus_tsdb_compactions_total Total number of compactions that were executed for the partition.
# TYPE prometheus_tsdb_compactions_total counter
prometheus_tsdb_compactions_total 4
# HELP prometheus_tsdb_compactions_triggered_total Total number of triggered compactions for the partition.
# TYPE prometheus_tsdb_compactions_triggered_total counter
prometheus_tsdb_compactions_triggered_total 357
# HELP prometheus_tsdb_data_replay_duration_seconds Time taken to replay the data on disk.
# TYPE prometheus_tsdb_data_replay_duration_seconds gauge
prometheus_tsdb_data_replay_duration_seconds 12.107785783
# HELP prometheus_tsdb_head_active_appenders Number of currently active appender transactions
# TYPE prometheus_tsdb_head_active_appenders gauge
prometheus_tsdb_head_active_appenders 0
# HELP prometheus_tsdb_head_chunks Total number of chunks in the head block.
# TYPE prometheus_tsdb_head_chunks gauge
prometheus_tsdb_head_chunks 153944
# HELP prometheus_tsdb_head_chunks_created_total Total number of chunks created in the head
# TYPE prometheus_tsdb_head_chunks_created_total counter
prometheus_tsdb_head_chunks_created_total 599503
# HELP prometheus_tsdb_head_chunks_removed_total Total number of chunks removed in the head
# TYPE prometheus_tsdb_head_chunks_removed_total counter
prometheus_tsdb_head_chunks_removed_total 445559
# HELP prometheus_tsdb_head_gc_duration_seconds Runtime of garbage collection in the head block.
# TYPE prometheus_tsdb_head_gc_duration_seconds summary
prometheus_tsdb_head_gc_duration_seconds_sum 0.294262921
prometheus_tsdb_head_gc_duration_seconds_count 4
# HELP prometheus_tsdb_head_max_time Maximum timestamp of the head block. The unit is decided by the library consumer.
# TYPE prometheus_tsdb_head_max_time gauge
prometheus_tsdb_head_max_time 1.626250946508e+12
# HELP prometheus_tsdb_head_max_time_seconds Maximum timestamp of the head block.
# TYPE prometheus_tsdb_head_max_time_seconds gauge
prometheus_tsdb_head_max_time_seconds 1.626250946e+09
# HELP prometheus_tsdb_head_min_time Minimum time bound of the head block. The unit is decided by the library consumer.
# TYPE prometheus_tsdb_head_min_time gauge
prometheus_tsdb_head_min_time 1.626242402664e+12
# HELP prometheus_tsdb_head_min_time_seconds Minimum time bound of the head block.
# TYPE prometheus_tsdb_head_min_time_seconds gauge
prometheus_tsdb_head_min_time_seconds 1.626242402e+09
# HELP prometheus_tsdb_head_samples_appended_total Total number of appended samples.
# TYPE prometheus_tsdb_head_samples_appended_total counter
prometheus_tsdb_head_samples_appended_total 4.855752e+07
# HELP prometheus_tsdb_head_series Total number of series in the head block.
# TYPE prometheus_tsdb_head_series gauge
prometheus_tsdb_head_series 73035
# HELP prometheus_tsdb_head_series_created_total Total number of series created in the head
# TYPE prometheus_tsdb_head_series_created_total counter
prometheus_tsdb_head_series_created_total 226914
# HELP prometheus_tsdb_head_series_not_found_total Total number of requests for series that were not found.
# TYPE prometheus_tsdb_head_series_not_found_total counter
prometheus_tsdb_head_series_not_found_total 0
# HELP prometheus_tsdb_head_series_removed_total Total number of series removed in the head
# TYPE prometheus_tsdb_head_series_removed_total counter
prometheus_tsdb_head_series_removed_total 153879
# HELP prometheus_tsdb_head_truncations_failed_total Total number of head truncations that failed.
# TYPE prometheus_tsdb_head_truncations_failed_total counter
prometheus_tsdb_head_truncations_failed_total 0
# HELP prometheus_tsdb_head_truncations_total Total number of head truncations attempted.
# TYPE prometheus_tsdb_head_truncations_total counter
prometheus_tsdb_head_truncations_total 4
# HELP prometheus_tsdb_isolation_high_watermark The highest TSDB append ID that has been given out.
# TYPE prometheus_tsdb_isolation_high_watermark gauge
prometheus_tsdb_isolation_high_watermark 144550
# HELP prometheus_tsdb_isolation_low_watermark The lowest TSDB append ID that is still referenced.
# TYPE prometheus_tsdb_isolation_low_watermark gauge
prometheus_tsdb_isolation_low_watermark 144550
# HELP prometheus_tsdb_lowest_timestamp Lowest timestamp value stored in the database. The unit is decided by the library consumer.
# TYPE prometheus_tsdb_lowest_timestamp gauge
prometheus_tsdb_lowest_timestamp 1.6261488e+12
# HELP prometheus_tsdb_lowest_timestamp_seconds Lowest timestamp value stored in the database.
# TYPE prometheus_tsdb_lowest_timestamp_seconds gauge
prometheus_tsdb_lowest_timestamp_seconds 1.6261488e+09
# HELP prometheus_tsdb_mmap_chunk_corruptions_total Total number of memory-mapped chunk corruptions.
# TYPE prometheus_tsdb_mmap_chunk_corruptions_total counter
prometheus_tsdb_mmap_chunk_corruptions_total 0
# HELP prometheus_tsdb_out_of_bound_samples_total Total number of out of bound samples ingestion failed attempts.
# TYPE prometheus_tsdb_out_of_bound_samples_total counter
prometheus_tsdb_out_of_bound_samples_total 0
# HELP prometheus_tsdb_out_of_order_exemplars_total Total number of out of order exemplars ingestion failed attempts.
# TYPE prometheus_tsdb_out_of_order_exemplars_total counter
prometheus_tsdb_out_of_order_exemplars_total 0
# HELP prometheus_tsdb_out_of_order_samples_total Total number of out of order samples ingestion failed attempts.
# TYPE prometheus_tsdb_out_of_order_samples_total counter
prometheus_tsdb_out_of_order_samples_total 14390
# HELP prometheus_tsdb_reloads_failures_total Number of times the database failed to reloadBlocks block data from disk.
# TYPE prometheus_tsdb_reloads_failures_total counter
prometheus_tsdb_reloads_failures_total 0
# HELP prometheus_tsdb_reloads_total Number of times the database reloaded block data from disk.
# TYPE prometheus_tsdb_reloads_total counter
prometheus_tsdb_reloads_total 354
# HELP prometheus_tsdb_retention_limit_bytes Max number of bytes to be retained in the tsdb blocks, configured 0 means disabled
# TYPE prometheus_tsdb_retention_limit_bytes gauge
prometheus_tsdb_retention_limit_bytes 0
# HELP prometheus_tsdb_size_retentions_total The number of times that blocks were deleted because the maximum number of bytes was exceeded.
# TYPE prometheus_tsdb_size_retentions_total counter
prometheus_tsdb_size_retentions_total 0
# HELP prometheus_tsdb_storage_blocks_bytes The number of bytes that are currently used for local storage by all blocks.
# TYPE prometheus_tsdb_storage_blocks_bytes gauge
prometheus_tsdb_storage_blocks_bytes 1.73152149e+08
# HELP prometheus_tsdb_symbol_table_size_bytes Size of symbol table in memory for loaded blocks
# TYPE prometheus_tsdb_symbol_table_size_bytes gauge
prometheus_tsdb_symbol_table_size_bytes 4864
# HELP prometheus_tsdb_time_retentions_total The number of times that blocks were deleted because the maximum time limit was exceeded.
# TYPE prometheus_tsdb_time_retentions_total counter
prometheus_tsdb_time_retentions_total 3
# HELP prometheus_tsdb_tombstone_cleanup_seconds The time taken to recompact blocks to remove tombstones.
# TYPE prometheus_tsdb_tombstone_cleanup_seconds histogram
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.005"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.01"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.025"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.05"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.1"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.25"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.5"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="1"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="2.5"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="5"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="10"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="+Inf"} 0
prometheus_tsdb_tombstone_cleanup_seconds_sum 0
prometheus_tsdb_tombstone_cleanup_seconds_count 0
# HELP prometheus_tsdb_vertical_compactions_total Total number of compactions done on overlapping blocks.
# TYPE prometheus_tsdb_vertical_compactions_total counter
prometheus_tsdb_vertical_compactions_total 0
# HELP prometheus_tsdb_wal_completed_pages_total Total number of completed pages.
# TYPE prometheus_tsdb_wal_completed_pages_total counter
prometheus_tsdb_wal_completed_pages_total 7863
# HELP prometheus_tsdb_wal_corruptions_total Total number of WAL corruptions.
# TYPE prometheus_tsdb_wal_corruptions_total counter
prometheus_tsdb_wal_corruptions_total 0
# HELP prometheus_tsdb_wal_fsync_duration_seconds Duration of WAL fsync.
# TYPE prometheus_tsdb_wal_fsync_duration_seconds summary
prometheus_tsdb_wal_fsync_duration_seconds{quantile="0.5"} NaN
prometheus_tsdb_wal_fsync_duration_seconds{quantile="0.9"} NaN
prometheus_tsdb_wal_fsync_duration_seconds{quantile="0.99"} NaN
prometheus_tsdb_wal_fsync_duration_seconds_sum 0.002690881
prometheus_tsdb_wal_fsync_duration_seconds_count 4
# HELP prometheus_tsdb_wal_page_flushes_total Total number of page flushes.
# TYPE prometheus_tsdb_wal_page_flushes_total counter
prometheus_tsdb_wal_page_flushes_total 75330
# HELP prometheus_tsdb_wal_segment_current WAL segment index that TSDB is currently writing to.
# TYPE prometheus_tsdb_wal_segment_current gauge
prometheus_tsdb_wal_segment_current 28
# HELP prometheus_tsdb_wal_truncate_duration_seconds Duration of WAL truncation.
# TYPE prometheus_tsdb_wal_truncate_duration_seconds summary
prometheus_tsdb_wal_truncate_duration_seconds_sum 1.919097646
prometheus_tsdb_wal_truncate_duration_seconds_count 2
# HELP prometheus_tsdb_wal_truncations_failed_total Total number of WAL truncations that failed.
# TYPE prometheus_tsdb_wal_truncations_failed_total counter
prometheus_tsdb_wal_truncations_failed_total 0
# HELP prometheus_tsdb_wal_truncations_total Total number of WAL truncations attempted.
# TYPE prometheus_tsdb_wal_truncations_total counter
prometheus_tsdb_wal_truncations_total 2
# HELP prometheus_tsdb_wal_writes_failed_total Total number of WAL writes that failed.
# TYPE prometheus_tsdb_wal_writes_failed_total counter
prometheus_tsdb_wal_writes_failed_total 0
# HELP prometheus_web_federation_errors_total Total number of errors that occurred while sending federation responses.
# TYPE prometheus_web_federation_errors_total counter
prometheus_web_federation_errors_total 0
# HELP prometheus_web_federation_warnings_total Total number of warnings that occurred while sending federation responses.
# TYPE prometheus_web_federation_warnings_total counter
prometheus_web_federation_warnings_total 0
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 1398
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
Some of the built-in Histogram information exposed by http://192.168.153.40:30207/metrics:

  As the output above shows, a Histogram-type metric exposes three kinds of series; assume the metric name is <basename>.

# HELP prometheus_http_request_duration_seconds Histogram of latencies for HTTP requests.
# TYPE prometheus_http_request_duration_seconds histogram
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.2"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.4"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="3"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="8"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="20"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="60"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="120"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="+Inf"} 1
prometheus_http_request_duration_seconds_sum{handler="/"} 2.3757e-05
prometheus_http_request_duration_seconds_count{handler="/"} 1

  The number of samples whose values fall into each bucket, named <basename>_bucket{le="<upper bound>"}. This value counts all samples less than or equal to the given upper bound. In the example above, prometheus_http_request_duration_seconds_bucket{handler="/",le="0.1"} 1 means that, out of 1 request in total, 1 HTTP request had a response time of <= 0.1s.

  The sum of all sample values, named <basename>_sum. In the example above, prometheus_http_request_duration_seconds_sum{handler="/"} 2.3757e-05 means the single HTTP request that occurred had a total response time of 2.3757e-05s.

  The total number of samples, named <basename>_count; its value is the same as <basename>_bucket{le="+Inf"}.
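
  These _bucket series are what PromQL's built-in histogram_quantile() function operates on. A minimal sketch against the histogram above (the 0.9 quantile and the 5m window are arbitrary choices):

# Estimate the 90th-percentile HTTP request latency per handler
# from the cumulative buckets over the last 5 minutes
histogram_quantile(0.9, rate(prometheus_http_request_duration_seconds_bucket[5m]))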

  Dividing the _sum series by the _count series yields averages. For example, Prometheus's average compaction duration over the past day, aggregated across instances by dropping the instance label, can be obtained as follows:

sum without(instance) (rate(prometheus_tsdb_compaction_duration_seconds_sum[1d])) / sum without(instance) (rate(prometheus_tsdb_compaction_duration_seconds_count[1d]))

  Besides the built-in compaction duration, prometheus_local_storage_series_chunks_persisted, which represents the number of chunks each time series in Prometheus needs to store, can also be used to compute quantiles for the data waiting to be persisted.
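
  Applying the same _sum/_count pattern to the HTTP request histogram shown earlier gives, for example, the average request latency per handler (the 5m window is an arbitrary choice):

# Average HTTP request duration per handler over the last 5 minutes
rate(prometheus_http_request_duration_seconds_sum[5m]) / rate(prometheus_http_request_duration_seconds_count[5m])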

3.4 Summary

  Similar to the Histogram type, a Summary is used to represent the result of sampling data over a period of time (typically request durations or response sizes), but it stores the quantiles directly (computed on the client side and then exposed) rather than deriving them from buckets. For quantile calculations, a Summary therefore performs better when queried with PromQL, while a Histogram consumes more server-side resources; conversely, on the client side a Histogram consumes fewer resources. Users should choose between the two based on their actual scenario.
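
  For example, the Summary-type metric prometheus_tsdb_wal_fsync_duration_seconds in the scrape output above already carries client-computed quantiles, so a plain selector returns them without any server-side estimation:

# The 0.99 quantile of WAL fsync duration, precomputed by the client
prometheus_tsdb_wal_fsync_duration_seconds{quantile="0.99"}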


Extended knowledge: similarities and differences between Summary and Histogram

1) Both of them expose the <basename>_sum and <basename>_count series.

2) A Histogram needs <basename>_bucket to compute quantiles, whereas a Summary stores the quantile values directly.

3) If you need to aggregate, or want to understand the range and distribution of the observed values, use a Histogram; if you do not care about the range and distribution and only need precise quantile values, use a Summary.
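
  Because both types expose <basename>_sum and <basename>_count (point 1 above), the same averaging pattern shown earlier works unchanged for a Summary; a small sketch against the WAL fsync Summary from the output above:

# Average WAL fsync duration over the last 5 minutes, computed from a Summary's _sum and _count
rate(prometheus_tsdb_wal_fsync_duration_seconds_sum[5m]) / rate(prometheus_tsdb_wal_fsync_duration_seconds_count[5m])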


四、The 13 Aggregation Operators

  In a real production environment there are often hundreds or thousands of instances, and users cannot realistically inspect the metrics of every instance one by one. Aggregation operators allow users to aggregate metrics within a single application or across applications: they take the sample data returned by an instant-vector expression and combine it into a new time series with fewer samples. Aggregation operators act only on instant vectors, and their output is also an instant vector. The operators are listed below, followed by a few additional usage examples.

# Total number of HTTP requests across the whole system
sum(http_requests_total)

# Average host CPU time, grouped by mode
avg(node_cpu) by (mode)

# CPU usage of each host
sum(irate(node_cpu{mode!='idle'}[5m])) by (instance) / sum(irate(node_cpu[5m])) by (instance)
  • sum (summation)
  • min (minimum)
  • max (maximum)
  • avg (average)
  • stddev (standard deviation)
  • stdvar (standard variance)
  • count (number of elements)
  • count_values (number of elements with the same value)
  • bottomk (the k elements with the smallest sample values)
  • topk (the k elements with the largest sample values)
  • quantile (φ-quantile of the distribution)
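
  A few of the less common operators from the list above, sketched against the http_requests_total metric used earlier (the specific numbers and label names are illustrative):

# The 5 series with the largest request counts
topk(5, http_requests_total)

# The 3 series with the smallest request counts
bottomk(3, http_requests_total)

# Count how many series currently report each distinct sample value,
# storing that value in a new label named "value"
count_values("value", http_requests_total)

# The 95th percentile of request counts across series, grouped by job
quantile by (job) (0.95, http_requests_total)

# Standard deviation of request counts, grouped by job
stddev by (job) (http_requests_total)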

 

