Module 7: Prometheus Architecture and Practice for Microservice Monitoring and Alerting


119. Monitoring Mode Classification ~1.mp4

 

Logging: log monitoring. Logging describes discrete (non-continuous) events. For example: an application writes Debug or Error messages to a rolling file, and a log collection system stores them in Elasticsearch; approval detail records are sent through Kafka and stored in a database (BigTable); or the metadata of a particular request is stripped out of the service request and sent to an error-collection service such as New Relic.

 

Tracing: distributed call-chain tracing. Tools such as SkyWalking, CAT, and Zipkin specialize in distributed tracing.

 

Metrics: key numeric indicators, e.g. the current number of HTTP requests. The data is measurable, aggregatable, and cumulative: you inspect important indicators by aggregating data points. Metrics include counters, gauges, histograms, and so on, and can also carry labels (tags).
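As a minimal sketch of what a labelled counter is, assuming nothing beyond the JDK (the class name `TinyCounter` is made up for illustration; real clients such as the Prometheus Java simpleclient used later in this module behave similarly, with thread safety and an exposition format on top):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A tiny labelled counter: one monotonically increasing value per label value.
class TinyCounter {
    private final Map<String, Double> series = new ConcurrentHashMap<>();

    // inc() aggregates by label value, e.g. status="200" vs status="500"
    void inc(String labelValue) {
        series.merge(labelValue, 1.0, Double::sum);
    }

    double get(String labelValue) {
        return series.getOrDefault(labelValue, 0.0);
    }

    public static void main(String[] args) {
        TinyCounter httpRequests = new TinyCounter();
        httpRequests.inc("200");
        httpRequests.inc("200");
        httpRequests.inc("500");
        System.out.println(httpRequests.get("200")); // 2.0
        System.out.println(httpRequests.get("500")); // 1.0
    }
}
```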

 

 


Prometheus focuses primarily on metrics-style monitoring.

A comparison of the deployment cost of these tool categories:

CapEx (development cost): metrics tooling requires the most development effort from engineers, ELK the least; mastering SkyWalking requires some prior grounding.

OpEx (operations cost): ELK needs continual capacity expansion in operation, so its operating cost is the highest.

Reaction (which tools can alert first when something goes wrong): metrics tools are the strongest here; SkyWalking's alerting capability is only average.

For analyzing the root cause once a problem has occurred, inspecting the call chain (tracing) is the most effective.

 

Logging, Metrics and Tracing

Logging, Metrics and Tracing each have their own focus.

  • Logging - records discrete events, e.g. an application's debug or error messages. It is the basis for diagnosing problems.
  • Metrics - records aggregatable data, e.g. the current depth of a queue can be defined as a gauge, updated as elements are enqueued or dequeued; the number of HTTP requests can be defined as a counter, incremented as new requests arrive.
  • Tracing - records request-scoped information, e.g. the execution path and latency of a remote method call. It is the sharpest tool for investigating system performance problems.

The three also overlap, as shown in the figure below.

With this framing we can classify existing systems. Zipkin, for example, focuses on the tracing domain; Prometheus started out focused on metrics and may integrate more tracing features over time, but is unlikely to go deep into logging; systems like ELK and Alibaba Cloud Log Service started in the logging domain but keep absorbing features from the other domains, moving toward the center of the diagram above.

A blog post on the relationship between the three: http://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html

For more detail on how the three relate, see "Metrics, tracing, and logging". Below we focus on tracing.

The Birth of Tracing

Tracing technology dates back to the 1990s, but what really popularized the field was Google's paper "Dapper, a Large-Scale Distributed Systems Tracing Infrastructure"; another paper, "Uncertainty in Aggregate Estimates from Sampled Distributed Traces", contains a more detailed analysis of sampling. After these papers were published, a batch of excellent tracing systems emerged. Popular ones include:

  • Dapper (Google): the basis for all tracers
  • StackDriver Trace (Google)
  • Zipkin (Twitter)
  • Appdash (golang)
  • EagleEye (Taobao)
  • Diting (Pangu, the trace system used by Alibaba Cloud products)
  • Yuntu (Ant Financial's trace system)
  • sTrace (Shenma)
  • X-Ray (AWS)

Distributed tracing systems have evolved quickly and come in many varieties, but they generally share three core steps: code instrumentation, data storage, and query/visualization.

The figure below shows an example of a distributed call: the client issues a request, which first reaches the load balancer, then passes through an authentication service and a billing service, then accesses the resource, and finally returns the result.

opentracing1.png

After the data has been collected and stored, a distributed tracing system typically renders the trace as a timeline-based sequence diagram.

opentracing2.png

But because data collection requires instrumenting user code, and the APIs of different systems are incompatible, switching tracing systems usually means substantial code changes.

OpenTracing

The OpenTracing specification was created to solve the API incompatibility between distributed tracing systems.
OpenTracing is a lightweight standardization layer that sits between application/library code and the tracing or log-analysis backends.

+-------------+  +---------+  +----------+  +------------+
| Application |  | Library |  |   OSS    |  |  RPC/IPC   |
|    Code     |  |  Code   |  | Services |  | Frameworks |
+-------------+  +---------+  +----------+  +------------+
       |              |             |              |
       |              |             |              |
       v              v             v              v
  +------------------------------------------------------+
  |                     OpenTracing                      |
  +------------------------------------------------------+
     |               |                |               |
     |               |                |               |
     v               v                v               v
+-----------+  +-------------+  +-------------+  +-----------+
|  Tracing  |  |   Logging   |  |   Metrics   |  |  Tracing  |
| System A  |  | Framework B |  | Framework C |  | System D  |
+-----------+  +-------------+  +-------------+  +-----------+

Advantages of OpenTracing

  • OpenTracing has joined the CNCF and provides unified concepts and data standards for distributed tracing worldwide.
  • OpenTracing offers platform-neutral, vendor-neutral APIs, so developers can easily add (or swap out) tracing implementations.

The OpenTracing Data Model

A Trace (call chain) in OpenTracing is implicitly defined by the Spans that belong to it.
In particular, a Trace can be thought of as a directed acyclic graph (DAG) of Spans, where the relationships between Spans are called References.

For example, the following sample Trace consists of 8 Spans:

Causal relationships between Spans in a single Trace


        [Span A]  ←←←(the root span)
            |
     +------+------+
     |             |
 [Span B]      [Span C] ←←←(Span C is a child of Span A, ChildOf)
     |             |
 [Span D]      +---+-------+
               |           |
           [Span E]    [Span F] >>> [Span G] >>> [Span H]
                                       ↑
                                       ↑
                                       ↑
                         (Span G is invoked after Span F, FollowsFrom)

Sometimes a timeline-based diagram like the one below presents the Trace (call chain) better:

Temporal relationships between Spans in a single Trace


––|–––––––|–––––––|–––––––|–––––––|–––––––|–––––––|–––––––|–> time

 [Span A···················································]
   [Span B··············································]
      [Span D··········································]
    [Span C········································]
         [Span E·······]        [Span F··] [Span G··] [Span H··]

Each Span carries the following state (translator's note: the English terms are kept, since they appear in the OpenTracing API):

  • An operation name
  • A start timestamp
  • A finish timestamp
  • Span Tags: a set of key/value Span tags. Keys must be strings; values may be strings, booleans, or numeric types.
  • Span Logs: a set of Span log entries.
    Each log operation consists of a key/value pair and a timestamp.

Keys must be strings; values may be of any type.
Note, however, that not every Tracer that supports OpenTracing is required to support every value type.

  • SpanContext: the Span context object (detailed below)
  • References: zero or more related Spans (the relationship between Spans is established through their SpanContexts)

Each SpanContext carries the following state:

  • Any OpenTracing implementation must propagate the current call chain's state (e.g. the trace and span IDs) across process boundaries, tied to a distinct Span
  • Baggage Items: key/value data that travels with the trace and must also cross process boundaries
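As an illustration only (the real io.opentracing interfaces differ; all names here are made up), the span state and references listed above can be modelled as a small data structure:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the OpenTracing data model: a Span carries an operation name,
// timestamps, tags, and references to other spans; a Trace is the DAG they form.
class Span {
    enum RefType { CHILD_OF, FOLLOWS_FROM }

    final String operationName;
    long startMicros, finishMicros;
    final Map<String, Object> tags = new HashMap<>();
    final List<Span> childOf = new ArrayList<>();
    final List<Span> followsFrom = new ArrayList<>();

    Span(String operationName, long startMicros) {
        this.operationName = operationName;
        this.startMicros = startMicros;
    }

    Span addReference(RefType type, Span other) {
        (type == RefType.CHILD_OF ? childOf : followsFrom).add(other);
        return this;
    }

    public static void main(String[] args) {
        // Span C is a child of Span A; Span G follows from Span F,
        // mirroring the example trace shown earlier.
        Span a = new Span("A", 0);
        Span c = new Span("C", 10).addReference(RefType.CHILD_OF, a);
        Span f = new Span("F", 20);
        Span g = new Span("G", 40).addReference(RefType.FOLLOWS_FROM, f);
        System.out.println(c.childOf.get(0).operationName);      // A
        System.out.println(g.followsFrom.get(0).operationName);  // F
    }
}
```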

For more on the OpenTracing data model, see the OpenTracing semantic specification.

OpenTracing Implementations

This document lists all OpenTracing implementations. Among them, the most popular are Jaeger and Zipkin.

 

 

 

 

Metrics are mainly used for monitoring and alerting; once a problem has surfaced, you then use tracing or ELK to locate and resolve it.

Metrics can monitor the system layer, the application layer, and the business layer.

121. Prometheus Introduction ~1.mp4

Time series databases

A value v0 produced at time t0, v1 at t1, v2 at t2: string these data points together and you get a time series.

InfluxDB and Prometheus are both time series databases.
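A counter scraped repeatedly yields exactly such a series of (timestamp, value) points. As a minimal sketch (plain Java, hypothetical names, nothing Prometheus-specific), this is how a per-second rate can be derived from two samples of such a series, which is the idea behind PromQL's rate():

```java
// A counter sampled at t0, t1, t2 ... forms one time series. A query engine
// derives a per-second rate from the cumulative counts of two samples.
class RateSketch {
    // timestamps in seconds, values are cumulative counts
    static double perSecondRate(long t0, double v0, long t1, double v1) {
        return (v1 - v0) / (t1 - t0);
    }

    public static void main(String[] args) {
        // counter went from 100 to 400 over 60 seconds -> 5 requests/second
        System.out.println(perSecondRate(0, 100, 60, 400)); // 5.0
    }
}
```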

122. Prometheus Architecture Design ~1.mp4

123. Prometheus Basic Concepts ~1.mp4

Counter: e.g. counts of HTTP requests, or of orders placed

Gauge: e.g. the number of users currently online, or disk utilization

Histogram: how response times are distributed across intervals (buckets)

Summary: e.g. the 90th-percentile response time
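A minimal sketch of what the "90th-percentile" summary above means (plain Java, hypothetical names; real clients use streaming approximations rather than sorting all samples): sort the observed latencies and take the value below which 90% of samples fall.

```java
import java.util.Arrays;

// Compute an empirical quantile from a set of observed latencies.
class QuantileSketch {
    static double quantile(double[] samples, double q) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        // index of the smallest value covering fraction q of the samples
        int idx = (int) Math.ceil(q * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        double[] latenciesMs = {10, 12, 15, 20, 25, 40, 60, 90, 150, 700};
        // p90: 90% of requests completed within this time
        System.out.println(quantile(latenciesMs, 0.9)); // 150.0
    }
}
```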

Target: an operating system, machine, application, or service that needs to expose a metrics endpoint; Prometheus scrapes /metrics every 15 seconds (by default).

Direct instrumentation: embed collection points in the application code itself, so Prometheus scrapes the application directly.

The second approach is indirect collection through an exporter; typical exporters cover Redis, Apache, the operating system, and so on.

 

 

The web UI and Grafana can query the corresponding data with PromQL.
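For example, these are the kind of PromQL queries one would type into the Prometheus web UI or a Grafana panel (the metric names below match the http-simulator used later in this module; the 5m window is an arbitrary choice):

# Per-second request rate over the last 5 minutes, broken down by endpoint
sum(rate(http_requests_total[5m])) by (endpoint)

# Approximate p90 latency derived from the histogram buckets
histogram_quantile(0.9, sum(rate(http_request_duration_milliseconds_bucket[5m])) by (le))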

 

 

 

124. [Lab] Prometheus Getting-Started Query Experiments (Part 1) ~1.mp4

Step 1

Import http-simulator into Eclipse.

Integrating Prometheus with Spring Boot

1. Add the dependency to the Maven pom.xml

<dependency>
<groupId>io.prometheus</groupId>
<artifactId>simpleclient_spring_boot</artifactId>
</dependency>

2. Add the annotations to the application startup class

import io.prometheus.client.spring.boot.EnablePrometheusEndpoint;
import io.prometheus.client.spring.boot.EnableSpringBootMetricsCollector;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@EnablePrometheusEndpoint
@EnableSpringBootMetricsCollector
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
3. Write the metric you want to monitor in a Controller class, e.g. a Counter

import io.prometheus.client.Counter;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Random;

@RestController
public class SampleController {

    private static Random random = new Random();

    private static final Counter requestTotal = Counter.build()
        .name("my_sample_counter")
        .labelNames("status")
        .help("A simple Counter to illustrate custom Counters in Spring Boot and Prometheus").register();

    @RequestMapping("/endpoint")
    public void endpoint() {
        if (random.nextInt(2) > 0) {
            requestTotal.labels("success").inc();
        } else {
            requestTotal.labels("error").inc();
        }
    }
}

4. Set the Spring Boot application's name and port in application.properties

spring.application.name=mydemo
server.port=8888

5. Configure prometheus.yml

global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'mydemo'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    metrics_path: '/prometheus'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['10.94.20.52:8888']

The key configuration is targets: ['10.94.20.52:8888'], i.e. the Spring Boot application's IP and port.

Note: set spring.metrics.servo.enabled=false in application.properties to remove duplicate metrics; otherwise the Targets page of the Prometheus console will keep showing this endpoint as DOWN.

 

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>io.spring2go.promdemo</groupId>
    <artifactId>http-simulator</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>http-simulator</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.17.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        
        <!-- The prometheus client -->
        <dependency>
            <groupId>io.prometheus</groupId>
            <artifactId>simpleclient_spring_boot</artifactId>
            <version>0.5.0</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>


</project>

The application startup class:

package io.spring2go.promdemo.httpsimulator;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.event.ContextClosedEvent;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.core.task.TaskExecutor;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

import io.prometheus.client.spring.boot.EnablePrometheusEndpoint;

@Controller
@SpringBootApplication
@EnablePrometheusEndpoint
public class HttpSimulatorApplication implements ApplicationListener<ContextClosedEvent> {

    @Autowired
    private SimulatorOpts opts;

    private ActivitySimulator simulator;

    public static void main(String[] args) {

        SpringApplication.run(HttpSimulatorApplication.class, args);
    }

    @RequestMapping(value = "/opts")
    public @ResponseBody String getOps() {
        return opts.toString();
    }

    @RequestMapping(value = "/spike/{mode}", method = RequestMethod.POST)
    public @ResponseBody String setSpikeMode(@PathVariable("mode") String mode) {
        boolean result = simulator.setSpikeMode(mode);
        if (result) {
            return "ok";
        } else {
            return "wrong spike mode " + mode;
        }
    }

    @RequestMapping(value = "error_rate/{error_rate}", method = RequestMethod.POST)
    public @ResponseBody String setErrorRate(@PathVariable("error_rate") int errorRate) {
        simulator.setErrorRate(errorRate);
        return "ok";
    }

    @Bean
    public TaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor();
    }

    @Bean
    public CommandLineRunner schedulingRunner(TaskExecutor executor) {
        return new CommandLineRunner() {
            public void run(String... args) throws Exception {
                simulator = new ActivitySimulator(opts);
                executor.execute(simulator);
                System.out.println("Simulator thread started...");
            }
        };
    }

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        simulator.shutdown();
        System.out.println("Simulator shutdown...");
    }

}

 

 application.properties

management.security.enabled=false

opts.endpoints=/login, /login, /login, /login, /login, /login, /login, /users, /users, /users, /users/{id}, /register, /register, /logout, /logout, /logout, /logout
opts.request_rate=1000
opts.request_rate_uncertainty=70
opts.latency_min=10
opts.latency_p50=25
opts.latency_p90=150
opts.latency_p99=750
opts.latency_max=10000
opts.latency_uncertainty=70

opts.error_rate=1
opts.spike_start_chance=5
opts.spike_end_chance=30

 

The most important core class:

ActivitySimulator

package io.spring2go.promdemo.httpsimulator;

import java.util.Random;

import io.prometheus.client.Counter;
import io.prometheus.client.Histogram;

public class ActivitySimulator implements Runnable {

    private SimulatorOpts opts;

    private Random rand = new Random();

    private boolean spikeMode = false;
    
    private volatile boolean shutdown = false;
    
    private final Counter httpRequestsTotal = Counter.build()
            .name("http_requests_total")
            .help("Total number of http requests by response status code")
            .labelNames("endpoint", "status")
            .register();

    private final Histogram httpRequestDurationMs = Histogram.build()
            .name("http_request_duration_milliseconds")
            .help("Http request latency histogram")
            .exponentialBuckets(25, 2, 7)
            .labelNames("endpoint", "status")
            .register();

    public ActivitySimulator(SimulatorOpts opts) {
        this.opts = opts;
        System.out.println(opts);
    }
    
    public void shutdown() {
        this.shutdown = true;
    }

    public void updateOpts(SimulatorOpts opts) {
        this.opts = opts;
    }
    
    public boolean setSpikeMode(String mode) {
        boolean result = true;
        switch (mode) {
        case "on":
            opts.setSpikeMode(SpikeMode.ON);
            System.out.println("Spike mode is set to " + mode);
            break;
        case "off":
            opts.setSpikeMode(SpikeMode.OFF);
            System.out.println("Spike mode is set to " + mode);
            break;
        case "random":
            opts.setSpikeMode(SpikeMode.RANDOM);
            System.out.println("Spike mode is set to " + mode);
            break;
        default:
            result = false;
            System.out.println("Can't recognize spike mode " + mode);
        }
        return result;
    }

    public void setErrorRate(int rate) {
        if (rate > 100) {
            rate = 100;
        }
        if (rate < 0) {
            rate = 0;
        }
        opts.setErrorRate(rate);
        System.out.println("Error rate is set to " + rate);
    }

    public SimulatorOpts getOpts() {
        return this.opts;
    }
    
    public void simulateActivity() {
        int requestRate = this.opts.getRequestRate();
        if (this.giveSpikeMode()) {
            requestRate *= (5 + this.rand.nextInt(10));
        }
        
        int nbRequests = this.giveWithUncertainty(requestRate, this.opts.getRequestRateUncertainty());
        for (int i = 0; i < nbRequests; i++) {
            String statusCode = this.giveStatusCode();
            String endpoint = this.giveEndpoint();
            this.httpRequestsTotal.labels(endpoint, statusCode).inc();
            int latency = this.giveLatency(statusCode);
            if (this.spikeMode) {
                latency *= 2;
            }
            this.httpRequestDurationMs.labels(endpoint, statusCode).observe(latency);
        }        
    }

    public boolean giveSpikeMode() {
        switch (this.opts.getSpikeMode()) {
        case ON:
            this.spikeMode = true;
            break;
        case OFF:
            this.spikeMode = false;
            break;
        case RANDOM:
            int n = rand.nextInt(100);
            if (!this.spikeMode && n < this.opts.getSpikeStartChance()) {
                this.spikeMode = true;
            } else if (this.spikeMode && n < this.opts.getSpikeEndChance()) {
                this.spikeMode = false;
            }
            break;
        }

        return this.spikeMode;
    }
    
    public int giveWithUncertainty(int n, int u) {
        int delta = this.rand.nextInt(n * u / 50) - (n * u / 100);
        return n + delta;
    }
    
    public String giveStatusCode() {
        if (this.rand.nextInt(100) < this.opts.getErrorRate()) {
            return "500";
        } else {
            return "200";
        }
    }
    
    public String giveEndpoint() {
        int n = this.rand.nextInt(this.opts.getEndopints().length);
        return this.opts.getEndopints()[n];
    }
    
    public int giveLatency(String statusCode) {
        if (!"200".equals(statusCode)) {
            return 5 + this.rand.nextInt(50);
        }
        
        int p = this.rand.nextInt(100);
        
        if (p < 50) {
            return this.giveWithUncertainty(this.opts.getLatencyMin() + this.rand.nextInt(this.opts.getLatencyP50() - this.opts.getLatencyMin()), this.opts.getLatencyUncertainty());
        }
        if (p < 90) {
            return this.giveWithUncertainty(this.opts.getLatencyP50() + this.rand.nextInt(this.opts.getLatencyP90() - this.opts.getLatencyP50()), this.opts.getLatencyUncertainty());
        }
        if (p < 99) {
            return this.giveWithUncertainty(this.opts.getLatencyP90() + this.rand.nextInt(this.opts.getLatencyP99() - this.opts.getLatencyP90()), this.opts.getLatencyUncertainty());        
        }
        
        return this.giveWithUncertainty(this.opts.getLatencyP99() + this.rand.nextInt(this.opts.getLatencyMax() - this.opts.getLatencyP99()), this.opts.getLatencyUncertainty());
    }

    @Override
    public void run() {
        while(!shutdown) {
            System.out.println("Simulator is running...");
            this.simulateActivity();
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}

 SimulatorOpts

 

package io.spring2go.promdemo.httpsimulator;

import java.util.Arrays;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

import com.fasterxml.jackson.annotation.JsonAutoDetect;

@Configuration
@JsonAutoDetect(fieldVisibility = JsonAutoDetect.Visibility.ANY)
public class SimulatorOpts {

    // Endpoints, Weighted map of endpoints to simulate
    @Value("${opts.endpoints}")
    private String[] endopints;
    
    // RequestRate, requests per second
    @Value("${opts.request_rate}")
    private int requestRate;
    
    // RequestRateUncertainty, Percentage of uncertainty when generating requests (+/-)
    @Value("${opts.request_rate_uncertainty}")
    private int requestRateUncertainty;
    
    // LatencyMin in milliseconds
    @Value("${opts.latency_min}")
    private int latencyMin;

    // LatencyP50 in milliseconds
    @Value("${opts.latency_p50}")
    private int latencyP50;
    
    // LatencyP90 in milliseconds
    @Value("${opts.latency_p90}")
    private int latencyP90;
    
    // LatencyP99 in milliseconds
    @Value("${opts.latency_p99}")
    private int latencyP99;
    
    // LatencyMax in milliseconds
    @Value("${opts.latency_max}")
    private int latencyMax;
    
    // LatencyUncertainty, Percentage of uncertainty when generating latency (+/-)
    @Value("${opts.latency_uncertainty}")
    private int latencyUncertainty;
    
    // ErrorRate, Percentage of chance of requests causing 500
    @Value("${opts.error_rate}")
    private int errorRate;
    
    // SpikeStartChance, Percentage of chance of entering spike mode
    @Value("${opts.spike_start_chance}")
    private int spikeStartChance;
    
    // SpikeStartChance, Percentage of chance of exiting spike mode
    @Value("${opts.spike_end_chance}")
    private int spikeEndChance;
    
    // SpikeModeStatus ON/OFF/RANDOM
    private SpikeMode spikeMode = SpikeMode.OFF;

    public String[] getEndopints() {
        return endopints;
    }

    public void setEndopints(String[] endopints) {
        this.endopints = endopints;
    }

    public int getRequestRate() {
        return requestRate;
    }

    public void setRequestRate(int requestRate) {
        this.requestRate = requestRate;
    }

    public int getRequestRateUncertainty() {
        return requestRateUncertainty;
    }

    public void setRequestRateUncertainty(int requestRateUncertainty) {
        this.requestRateUncertainty = requestRateUncertainty;
    }

    public int getLatencyMin() {
        return latencyMin;
    }

    public void setLatencyMin(int latencyMin) {
        this.latencyMin = latencyMin;
    }

    public int getLatencyP50() {
        return latencyP50;
    }

    public void setLatencyP50(int latencyP50) {
        this.latencyP50 = latencyP50;
    }

    public int getLatencyP90() {
        return latencyP90;
    }

    public void setLatencyP90(int latencyP90) {
        this.latencyP90 = latencyP90;
    }

    public int getLatencyP99() {
        return latencyP99;
    }

    public void setLatencyP99(int latencyP99) {
        this.latencyP99 = latencyP99;
    }

    public int getLatencyMax() {
        return latencyMax;
    }

    public void setLatencyMax(int latencyMax) {
        this.latencyMax = latencyMax;
    }

    public int getLatencyUncertainty() {
        return latencyUncertainty;
    }

    public void setLatencyUncertainty(int latencyUncertainty) {
        this.latencyUncertainty = latencyUncertainty;
    }

    public int getErrorRate() {
        return errorRate;
    }

    public void setErrorRate(int errorRate) {
        this.errorRate = errorRate;
    }

    public int getSpikeStartChance() {
        return spikeStartChance;
    }

    public void setSpikeStartChance(int spikeStartChance) {
        this.spikeStartChance = spikeStartChance;
    }

    public int getSpikeEndChance() {
        return spikeEndChance;
    }

    public void setSpikeEndChance(int spikeEndChance) {
        this.spikeEndChance = spikeEndChance;
    }

    public SpikeMode getSpikeMode() {
        return spikeMode;
    }

    public void setSpikeMode(SpikeMode spikeMode) {
        this.spikeMode = spikeMode;
    }

    @Override
    public String toString() {
        return "SimulatorOpts [endopints=" + Arrays.toString(endopints) + ", requestRate=" + requestRate
                + ", requestRateUncertainty=" + requestRateUncertainty + ", latencyMin=" + latencyMin + ", latencyP50="
                + latencyP50 + ", latencyP90=" + latencyP90 + ", latencyP99=" + latencyP99 + ", latencyMax="
                + latencyMax + ", latencyUncertainty=" + latencyUncertainty + ", errorRate=" + errorRate
                + ", spikeStartChance=" + spikeStartChance + ", spikeEndChance=" + spikeEndChance + ", spikeMode="
                + spikeMode + "]";
    }
    
}

 

SpikeMode

package io.spring2go.promdemo.httpsimulator;

public enum SpikeMode {
    
    OFF, ON, RANDOM
    
}

 

Now we run the application.

It simulates a simple HTTP microservice that generates Prometheus metrics, and can be run as a Spring Boot application.

Metrics

Endpoint to access at runtime:

http://SERVICE_URL:8080/prometheus

It exposes:

  • http_requests_total: request counter, with endpoint and status as labels
  • http_request_duration_milliseconds: request latency distribution (histogram)

Runtime options

Spike Mode

In spike mode, the request count is multiplied by a factor (5~15) and latency is doubled.

Spike mode can be on, off, or random; change it with:

# ON
curl -X POST http://SERVICE_URL:8080/spike/on

# OFF
curl -X POST http://SERVICE_URL:8080/spike/off

# RANDOM
curl -X POST http://SERVICE_URL:8080/spike/random

Error rate

The default error rate is 1%; it can be adjusted (0~100) like so:

# Setting error to 50%
curl -X POST http://SERVICE_URL:8080/error_rate/50

Other parameters

Configured in application.properties

opts.endpoints=/login, /login, /login, /login, /login, /login, /login, /users, /users, /users, /users/{id}, /register, /register, /logout, /logout, /logout, /logout
opts.request_rate=1000
opts.request_rate_uncertainty=70
opts.latency_min=10
opts.latency_p50=25
opts.latency_p90=150
opts.latency_p99=750
opts.latency_max=10000
opts.latency_uncertainty=70

opts.error_rate=1
opts.spike_start_chance=5
opts.spike_end_chance=30

Endpoint to verify the settings at runtime:

http://SERVICE_URL:8080/opts

Reference

https://github.com/PierreVincent/prom-http-simulator

 

Open http://localhost:8080/prometheus in a browser to see the collected metrics:

# HELP http_requests_total Total number of http requests by response status code
# TYPE http_requests_total counter
http_requests_total{endpoint="/login",status="500",} 188.0
http_requests_total{endpoint="/register",status="500",} 55.0
http_requests_total{endpoint="/login",status="200",} 18863.0
http_requests_total{endpoint="/register",status="200",} 5425.0
http_requests_total{endpoint="/users/{id}",status="500",} 26.0
http_requests_total{endpoint="/users/{id}",status="200",} 2663.0
http_requests_total{endpoint="/logout",status="200",} 10722.0
http_requests_total{endpoint="/users",status="200",} 8034.0
http_requests_total{endpoint="/users",status="500",} 94.0
http_requests_total{endpoint="/logout",status="500",} 93.0
# HELP http_request_duration_milliseconds Http request latency histogram
# TYPE http_request_duration_milliseconds histogram
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="25.0",} 85.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="50.0",} 174.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="100.0",} 188.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="200.0",} 188.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="400.0",} 188.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="800.0",} 188.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="1600.0",} 188.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="+Inf",} 188.0
http_request_duration_milliseconds_count{endpoint="/login",status="500",} 188.0
http_request_duration_milliseconds_sum{endpoint="/login",status="500",} 5499.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="500",le="25.0",} 27.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="500",le="50.0",} 50.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="500",le="100.0",} 55.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="500",le="200.0",} 55.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="500",le="400.0",} 55.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="500",le="800.0",} 55.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="500",le="1600.0",} 55.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="500",le="+Inf",} 55.0
http_request_duration_milliseconds_count{endpoint="/register",status="500",} 55.0
http_request_duration_milliseconds_sum{endpoint="/register",status="500",} 1542.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="200",le="25.0",} 8479.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="200",le="50.0",} 11739.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="200",le="100.0",} 14454.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="200",le="200.0",} 17046.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="200",le="400.0",} 17882.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="200",le="800.0",} 18482.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="200",le="1600.0",} 18705.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="200",le="+Inf",} 18863.0
http_request_duration_milliseconds_count{endpoint="/login",status="200",} 18863.0
http_request_duration_milliseconds_sum{endpoint="/login",status="200",} 2552014.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="200",le="25.0",} 2388.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="200",le="50.0",} 3367.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="200",le="100.0",} 4117.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="200",le="200.0",} 4889.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="200",le="400.0",} 5136.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="200",le="800.0",} 5310.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="200",le="1600.0",} 5379.0
http_request_duration_milliseconds_bucket{endpoint="/register",status="200",le="+Inf",} 5425.0
http_request_duration_milliseconds_count{endpoint="/register",status="200",} 5425.0
http_request_duration_milliseconds_sum{endpoint="/register",status="200",} 739394.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="500",le="25.0",} 14.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="500",le="50.0",} 25.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="500",le="100.0",} 26.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="500",le="200.0",} 26.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="500",le="400.0",} 26.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="500",le="800.0",} 26.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="500",le="1600.0",} 26.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="500",le="+Inf",} 26.0
http_request_duration_milliseconds_count{endpoint="/users/{id}",status="500",} 26.0
http_request_duration_milliseconds_sum{endpoint="/users/{id}",status="500",} 752.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="200",le="25.0",} 1220.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="200",le="50.0",} 1657.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="200",le="100.0",} 2030.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="200",le="200.0",} 2383.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="200",le="400.0",} 2508.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="200",le="800.0",} 2608.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="200",le="1600.0",} 2637.0
http_request_duration_milliseconds_bucket{endpoint="/users/{id}",status="200",le="+Inf",} 2663.0
http_request_duration_milliseconds_count{endpoint="/users/{id}",status="200",} 2663.0
http_request_duration_milliseconds_sum{endpoint="/users/{id}",status="200",} 402375.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="200",le="25.0",} 4790.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="200",le="50.0",} 6634.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="200",le="100.0",} 8155.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="200",le="200.0",} 9609.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="200",le="400.0",} 10113.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="200",le="800.0",} 10493.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="200",le="1600.0",} 10622.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="200",le="+Inf",} 10722.0
http_request_duration_milliseconds_count{endpoint="/logout",status="200",} 10722.0
http_request_duration_milliseconds_sum{endpoint="/logout",status="200",} 1502959.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="200",le="25.0",} 3622.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="200",le="50.0",} 4967.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="200",le="100.0",} 6117.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="200",le="200.0",} 7254.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="200",le="400.0",} 7624.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="200",le="800.0",} 7866.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="200",le="1600.0",} 7966.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="200",le="+Inf",} 8034.0
http_request_duration_milliseconds_count{endpoint="/users",status="200",} 8034.0
http_request_duration_milliseconds_sum{endpoint="/users",status="200",} 1100809.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="500",le="25.0",} 41.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="500",le="50.0",} 88.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="500",le="100.0",} 94.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="500",le="200.0",} 94.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="500",le="400.0",} 94.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="500",le="800.0",} 94.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="500",le="1600.0",} 94.0
http_request_duration_milliseconds_bucket{endpoint="/users",status="500",le="+Inf",} 94.0
http_request_duration_milliseconds_count{endpoint="/users",status="500",} 94.0
http_request_duration_milliseconds_sum{endpoint="/users",status="500",} 2685.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="500",le="25.0",} 41.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="500",le="50.0",} 85.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="500",le="100.0",} 93.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="500",le="200.0",} 93.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="500",le="400.0",} 93.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="500",le="800.0",} 93.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="500",le="1600.0",} 93.0
http_request_duration_milliseconds_bucket{endpoint="/logout",status="500",le="+Inf",} 93.0
http_request_duration_milliseconds_count{endpoint="/logout",status="500",} 93.0
http_request_duration_milliseconds_sum{endpoint="/logout",status="500",} 2683.0

Let's look at the concrete code.

Metrics come in four types: Counter, Gauge, Histogram, and Summary.

this.httpRequestsTotal.labels(endpoint, statusCode).inc(); — this increments a counter that tracks HTTP request counts.

For each HTTP endpoint, every response status under that endpoint is recorded separately:

http_requests_total{endpoint="/login",status="500",} 188.0
http_requests_total{endpoint="/register",status="500",} 55.0
http_requests_total{endpoint="/login",status="200",} 18863.0

private static final Histogram httpRequestDuration = Histogram.build() // field name illustrative; elided in the original
        .name("http_request_duration_milliseconds")
        .help("Http request latency histogram")
        .exponentialBuckets(25, 2, 7)
        .labelNames("endpoint", "status")
        .register();

This creates a histogram used to record the latency distribution.

http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="25.0",} 85.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="50.0",} 174.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="100.0",} 188.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="200.0",} 188.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="400.0",} 188.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="800.0",} 188.0
http_request_duration_milliseconds_bucket{endpoint="/login",status="500",le="1600.0",} 188.0
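For reference, exponentialBuckets(25, 2, 7) is what produced the le= bounds in the samples above: 7 upper bounds starting at 25ms and doubling each time, plus an implicit +Inf bucket. A minimal sketch of that computation, independent of the Prometheus client library (class and method names here are mine, not from the course code):

```java
import java.util.ArrayList;
import java.util.List;

public class ExponentialBuckets {
    // Mirrors Histogram.exponentialBuckets(start, factor, count):
    // bucket upper bounds are start * factor^i for i = 0 .. count-1
    static List<Double> bounds(double start, double factor, int count) {
        List<Double> result = new ArrayList<>();
        double bound = start;
        for (int i = 0; i < count; i++) {
            result.add(bound);
            bound *= factor;
        }
        return result;
    }

    public static void main(String[] args) {
        // Matches the le="..." values above: 25, 50, 100, 200, 400, 800, 1600
        System.out.println(bounds(25, 2, 7));
    }
}
```

Each `_bucket` sample is cumulative: the count for le="50.0" includes everything already counted under le="25.0".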

125. [Lab] Prometheus getting-started queries (part 2)~1.mp4

First, install Prometheus.

Next, have Prometheus scrape the Spring Boot http-simulator application above at http://localhost:8080/; this requires editing prometheus.yml:

# my global config
global:
  scrape_interval:     5s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 5s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'http-simulator'
    metrics_path: /prometheus
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:8080']

This adds a new job named 'http-simulator'; the application's metrics path is /prometheus and its target is localhost:8080.

Now let's start Prometheus.

Use a Git Bash window to run Prometheus; do not use the Windows cmd window.

Prometheus supports hot-reloading its configuration at startup.

 

To set the data retention period and enable config reloading, start Prometheus with:

./prometheus --storage.tsdb.retention.time=180d --web.enable-admin-api --web.enable-lifecycle --config.file=prometheus.yml

3) Hot reload

curl -XPOST http://localhost:9090/-/reload

 

 

 

Once startup succeeds, open http://localhost:9090/graph in a browser.

Click Status -> Targets to see which targets are being monitored.

This lists the instances currently under monitoring.

To see the request count for http-simulator, query: http_requests_total{job="http-simulator"}

For statistics over time, click Graph at the top to view the data plotted as a chart; the time range can be adjusted manually.

Verify http-simulator is up (value 1)

up{job="http-simulator"}
Query the HTTP request count

http_requests_total{job="http-simulator"}
Query successful login requests

http_requests_total{job="http-simulator", status="200", endpoint="/login"}
Query successful requests, broken down by endpoint

http_requests_total{job="http-simulator", status="200"}
Query the total number of successful requests

sum(http_requests_total{job="http-simulator", status="200"})
Query the rate of successful requests, broken down by endpoint

rate(http_requests_total{job="http-simulator", status="200"}[5m])
Query the total rate of successful requests

sum(rate(http_requests_total{job="http-simulator", status="200"}[5m]))
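Conceptually, rate(metric[5m]) takes the counter samples inside the 5-minute window and returns the per-second increase, i.e. roughly (last − first) / windowSeconds, corrected for counter resets. A simplified sketch (the reset handling is illustrative; Prometheus additionally extrapolates to the window edges):

```java
public class SimpleRate {
    // Per-second rate over a window of counter samples.
    // A counter only goes up; a drop means the process restarted (counter reset),
    // in which case the post-reset value is the increase since the reset.
    static double rate(double[] samples, double windowSeconds) {
        double increase = 0;
        for (int i = 1; i < samples.length; i++) {
            double delta = samples[i] - samples[i - 1];
            increase += (delta >= 0) ? delta : samples[i]; // reset: counter started over
        }
        return increase / windowSeconds;
    }

    public static void main(String[] args) {
        // Abridged samples of http_requests_total over a 300s window
        double[] samples = {100, 150, 210, 400};
        System.out.println(rate(samples, 300)); // (400 - 100) / 300 = 1.0 req/s
    }
}
```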

 

 


126. [Lab] Prometheus getting-started queries (part 3)~1.mp4
4. Latency distribution queries
Query the http-simulator latency distribution

http_request_duration_milliseconds_bucket{job="http-simulator"}
Query the latency distribution of successful login requests

http_request_duration_milliseconds_bucket{job="http-simulator", status="200", endpoint="/login"}
Fraction of successful login requests completing within 200ms

sum(http_request_duration_milliseconds_bucket{job="http-simulator", status="200", endpoint="/login", le="200.0"}) / sum(http_request_duration_milliseconds_count{job="http-simulator", status="200", endpoint="/login"})
99th percentile latency of successful login requests

histogram_quantile(0.99, rate(http_request_duration_milliseconds_bucket{job="http-simulator", status="200", endpoint="/login"}[5m]))
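histogram_quantile estimates the quantile by linear interpolation inside the bucket where the target rank falls. A sketch of that interpolation using the /users status=500 cumulative buckets listed earlier (25→41, 50→88, 100→94); this simplification works on raw cumulative counts instead of rate()-smoothed ones, and the class name is mine:

```java
public class HistogramQuantile {
    // Cumulative histogram: bounds[i] is the le upper bound, counts[i] the
    // cumulative count of observations <= bounds[i]. Linearly interpolate
    // inside the bucket containing the target rank, as Prometheus does.
    static double quantile(double q, double[] bounds, double[] counts) {
        double total = counts[counts.length - 1];
        double rank = q * total;
        for (int i = 0; i < bounds.length; i++) {
            if (counts[i] >= rank) {
                double lower = (i == 0) ? 0 : bounds[i - 1];
                double countBefore = (i == 0) ? 0 : counts[i - 1];
                double bucketCount = counts[i] - countBefore;
                if (bucketCount == 0) return lower;
                return lower + (bounds[i] - lower) * (rank - countBefore) / bucketCount;
            }
        }
        return bounds[bounds.length - 1]; // rank beyond all finite buckets
    }

    public static void main(String[] args) {
        double[] bounds = {25, 50, 100, 200, 400, 800, 1600};
        double[] counts = {41, 88, 94, 94, 94, 94, 94};
        // 0.99 * 94 = 93.06 falls in the (50, 100] bucket:
        // 50 + 50 * (93.06 - 88) / 6 ≈ 92.17ms
        System.out.println(quantile(0.99, bounds, counts));
    }
}
```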

 

127. [Lab] Prometheus + Grafana dashboards (part 1)~1.mp4

Install Grafana: download and unpack it, and it is ready to use.

Before running Grafana, make sure Prometheus and the application being monitored are already up.

Run it from a Git Bash window.

The default credentials are admin/admin.

After logging in, configure Prometheus as a Grafana data source.

Click Add data source.

Leave the other settings at their defaults.

Add the Prometheus data source:

Name -> prom-datasource
Type -> Prometheus
HTTP URL -> http://localhost:9090
Leave the rest at defaults

http://localhost:9090 is the port the Prometheus server is running on.

Click Save & Test to confirm the connection works.

3. Create a Dashboard
Click the + icon to create a Dashboard, click the save icon to save it; use the default folder and name the Dashboard prom-demo.

4. Display the request rate

Click the Add panel icon, then the Graph icon to add a Graph.

Click Panel Title -> Edit on the Graph to edit it.

Set the title: General -> Title = Request Rate

Set the metrics query

sum(rate(http_requests_total{job="http-simulator"}[5m]))

Adjust the Legend

  • Show as a table (As Table)
  • Show Min/Max/Avg/Current/Total
  • Adjust the Axes as needed

Remember to save the Dashboard.

After setting the metrics query, click the small triangle on the right to apply it.

128. [Lab] Prometheus + Grafana dashboards (part 2)~1.mp4

5. Display the live error rate

Click the Add panel icon, then the Singlestat icon to add a Singlestat.

Click Panel Title -> Edit to edit it.

Set the title: General -> Title = Live Error Rate

Set the metrics query

sum(rate(http_requests_total{job="http-simulator", status="500"}[5m])) / sum(rate(http_requests_total{job="http-simulator"}[5m]))

Set the display unit: Options -> Unit, set to None -> percent (0.0-1.0)

Change the displayed value (average by default) to the current value (now): Options -> Value -> Stat, set to Current

Add thresholds and colors: Options -> Coloring, check Value, set Thresholds to 0.01,0.05, meaning

  • green: 0-1%
  • orange: 1-5%
  • red: >5%

Add a gauge effect: Options -> Gauge, check Show, and set Max to 1

Add an error-rate trend line: check Spark lines -> Show

Remember to save the Dashboard.


6. Display top requested endpoints
Click the Add panel icon, then the Table icon to add a Table.

Set the metrics query

sum(rate(http_requests_total{job="http-simulator"}[5m])) by (endpoint)
To reduce the rows in the table, check Instant under Metrics to show only current values

Hide the Time column: under Column Styles, set Apply to columns named to Time and Type -> Type to Hidden

Rename the Value column: add a Column Style, set Apply to columns named to Value and Column Header to Requests/s

Click the Requests/s header in the table to sort the data by endpoint activity.

Adjust the widget positions and remember to save the Dashboard.

Checking Instant shows only the request series that currently exist.

Clicking the Value column header sorts by value, so the busiest endpoint comes first.

This gives a real-time view of request statistics, such as the five most frequently accessed endpoints.

129. [Lab] Prometheus + Alertmanager alerting (part 1)~1.mp4

Note: start Prometheus with --web.enable-lifecycle so its configuration can be reloaded dynamically via a web endpoint.

Next we set up the following: when the http-simulator application we ran earlier goes down, fire an alert.

2. HttpSimulatorDown alert
In the Prometheus directory:

add an alert rule file simulator_alert_rules.yml:

groups:
- name: simulator-alert-rule
  rules:
  - alert: HttpSimulatorDown
    expr: sum(up{job="http-simulator"}) == 0
    for: 1m
    labels:
      severity: critical

 

If sum(up{job="http-simulator"}) evaluates to 0 continuously for one minute, no instance has been up during that minute, and the alert fires.
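The for: 1m clause means the expression must stay true continuously for one minute before the alert fires; while it is true but the minute has not yet elapsed, the alert sits in the pending state. A toy sketch of that state machine (class name and simplified API are mine):

```java
public class AlertState {
    enum State { INACTIVE, PENDING, FIRING }

    private final long forSeconds;   // the "for:" duration
    private long trueSince = -1;     // when the expression first became true

    AlertState(long forSeconds) { this.forSeconds = forSeconds; }

    // Called on every evaluation tick (every evaluation_interval)
    State evaluate(boolean exprTrue, long nowSeconds) {
        if (!exprTrue) {
            trueSince = -1;          // expression recovered: back to inactive
            return State.INACTIVE;
        }
        if (trueSince < 0) trueSince = nowSeconds;
        return (nowSeconds - trueSince >= forSeconds) ? State.FIRING : State.PENDING;
    }

    public static void main(String[] args) {
        AlertState alert = new AlertState(60); // for: 1m
        System.out.println(alert.evaluate(true, 0));    // PENDING
        System.out.println(alert.evaluate(true, 30));   // PENDING
        System.out.println(alert.evaluate(true, 60));   // FIRING
        System.out.println(alert.evaluate(false, 65));  // INACTIVE
    }
}
```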
Edit prometheus.yml to reference the simulator_alert_rules.yml file:
# my global config
global:
  scrape_interval:     5s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 5s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "simulator_alert_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'http-simulator'
    metrics_path: /prometheus
    static_configs:
    - targets: ['localhost:8080']    

 

This confirms the configuration took effect. Stop the http-simulator application; after one minute the alert fires.

A state of firing means the alert has been triggered.

3. ErrorRateHigh alert

Assuming step 2 above has been done, restart the Prometheus HTTP Metrics Simulator.

Add another alert rule to simulator_alert_rules.yml:

  - alert: ErrorRateHigh
    expr: sum(rate(http_requests_total{job="http-simulator", status="500"}[5m])) / sum(rate(http_requests_total{job="http-simulator"}[5m])) > 0.02
    for: 1m
    labels:
      severity: major
    annotations:
      summary: "High Error Rate detected"
      description: "Error Rate is above 2% (current value is: {{ $value }})"

 

The full file now reads:
groups:
- name: simulator-alert-rule
  rules:
  - alert: HttpSimulatorDown
    expr: sum(up{job="http-simulator"}) == 0
    for: 1m
    labels:
      severity: critical
  - alert: ErrorRateHigh
    expr: sum(rate(http_requests_total{job="http-simulator", status="500"}[5m])) / sum(rate(http_requests_total{job="http-simulator"}[5m])) > 0.02
    for: 1m
    labels:
      severity: major
    annotations:
      summary: "High Error Rate detected"
      description: "Error Rate is above 2% (current value is: {{ $value }})"
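The ErrorRateHigh expression is just a ratio of two rates compared against a 0.02 threshold. Evaluated by hand with illustrative numbers (class name and values are mine):

```java
public class ErrorRateHigh {
    // expr: rate of 500 responses / rate of all requests > threshold
    static boolean check(double rate500, double rateTotal, double threshold) {
        // Guard against division by zero when there is no traffic
        return rateTotal > 0 && (rate500 / rateTotal) > threshold;
    }

    public static void main(String[] args) {
        // 0.3 errors/s out of 10 req/s = 3% error rate
        System.out.println(check(0.3, 10.0, 0.02)); // true: alert goes pending
        System.out.println(check(0.1, 10.0, 0.02)); // false: 1% is under threshold
    }
}
```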

130. [Lab] Prometheus + Alertmanager alerting (part 2)~1.mp4

We have configured the alerts above; next, send an email when an alert fires.

First download Alertmanager 0.15.2 for Windows and unpack it to a local directory.

After configuring the mailbox settings, start it.

Start Alertmanager

./alertmanager.exe

In the Prometheus directory, edit prometheus.yml to configure the Alertmanager address (port 9093 by default):

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - localhost:9093

Confirm via Prometheus -> Status (Configuration and Rules) that the config and alert rules are in effect.

Verify that the ErrorRateHigh alert fires via the Alertmanager UI and the configured mailbox.

Alertmanager UI address:

http://localhost:9093

 

 

 

131. [Lab] Java application instrumentation and monitoring~1.mp4

We can add jobs to a queue via an HTTP API; workers then process them.

Lab 4: Java application instrumentation and monitoring
Lab steps
1. Review and run the instrumentation sample code
Import instrumentation-example into the Eclipse IDE
Review the code to understand the simulated job system and how it is instrumented
Run the sample as a Spring Boot app
View the metrics at http://localhost:8080/prometheus

 

 

pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>io.spring2go.promdemo</groupId>
    <artifactId>instrumentation-example</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>instrumentation-example</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.17.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        
        <!-- The prometheus client -->
        <dependency>
            <groupId>io.prometheus</groupId>
            <artifactId>simpleclient_spring_boot</artifactId>
            <version>0.5.0</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>


</project>

 

InstrumentApplication
package io.spring2go.promdemo.instrument;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.core.task.TaskExecutor;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

import io.prometheus.client.spring.boot.EnablePrometheusEndpoint;

@Controller
@SpringBootApplication
@EnablePrometheusEndpoint
public class InstrumentApplication {
    
    private JobQueue queue = new JobQueue();
    
    private WorkerManager workerManager;

    public static void main(String[] args) {

        SpringApplication.run(InstrumentApplication.class, args);
    }

    @RequestMapping(value = "/hello-world")
    public @ResponseBody String sayHello() {
        return "hello, world";
    }
    
    @RequestMapping(value = "/jobs", method = RequestMethod.POST) 
    public @ResponseBody String jobs() {
        queue.push(new Job());
        return "ok";
    }
    
    @Bean
    public TaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor();
    }

    @Bean
    public CommandLineRunner schedulingRunner(TaskExecutor executor) {
        return new CommandLineRunner() {
            public void run(String... args) throws Exception {
                // 10 jobs per worker
                workerManager = new WorkerManager(queue, 1, 4, 10);
                executor.execute(workerManager);
                System.out.println("WorkerManager thread started...");
            }
        };
    }

}

Job

package io.spring2go.promdemo.instrument;

import java.util.Random;
import java.util.UUID;

public class Job {
    
    private String id;
    
    private Random rand = new Random();
    
    public Job() {
        this.id = UUID.randomUUID().toString();
    }
    
    public void run() {
        try {
            // Run the job (5 - 15 seconds)
            Thread.sleep((5 + rand.nextInt(10)) * 1000);
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }
    
}

 

JobQueue

package io.spring2go.promdemo.instrument;

import java.util.Queue;
import java.util.concurrent.LinkedBlockingQueue;

import io.prometheus.client.Gauge;

public class JobQueue {
    
    private final Gauge jobQueueSize = Gauge.build()
            .name("job_queue_size")
            .help("Current number of jobs waiting in queue")
            .register();

    private Queue<Job> queue = new LinkedBlockingQueue<Job>();
    
    public int size() {
        return queue.size();
    }
    
    public void push(Job job) {
        queue.offer(job);
        jobQueueSize.inc();
    }
    
    public Job pull() {
        Job job = queue.poll();
        if (job != null) {
           jobQueueSize.dec();
        }
        return job;
    }
    
}

 

Worker

package io.spring2go.promdemo.instrument;

import java.util.UUID;

import io.prometheus.client.Histogram;

public class Worker extends Thread {
    
    private static final Histogram jobsCompletionDurationSeconds = Histogram.build()
            .name("jobs_completion_duration_seconds")
            .help("Histogram of job completion time")
            .linearBuckets(4, 1, 16)
            .register();

    private String id;
    
    private JobQueue queue;
    
    private volatile boolean shutdown;
        
    public Worker(JobQueue queue) {
        this.queue = queue;
        this.id = UUID.randomUUID().toString();
    }
    
    @Override
    public void run() {
        System.out.println(String.format("[Worker %s] Starting", this.id));
        while(!shutdown) {
            this.pullJobAndRun();
        }
        System.out.println(String.format("[Worker %s] Stopped", this.id));
    }
    
    public void shutdown() {
        this.shutdown = true;
        System.out.println(String.format("[Worker %s] Shutting down", this.id));
    }
    
    public void pullJobAndRun() {
        Job job = this.queue.pull();
        if (job != null) {
            long jobStart = System.currentTimeMillis();
            System.out.println(String.format("[Worker %s] Starting job: %s", this.id, job.getId()));
            job.run();
            System.out.println(String.format("[Worker %s] Finished job: %s", this.id, job.getId()));
            int duration = (int)((System.currentTimeMillis() - jobStart) / 1000);
           jobsCompletionDurationSeconds.observe(duration);
        } else {
            System.out.println(String.format("[Worker %s] Queue is empty. Backing off 5 seconds", this.id));
            try {
                Thread.sleep(5 * 1000);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }

}

 

WorkerManager

package io.spring2go.promdemo.instrument;

import java.util.LinkedList;
import java.util.Queue;

public class WorkerManager extends Thread {
    
    private Queue<Worker> workers = new LinkedList<Worker>();
    
    private JobQueue queue;
    
    private int minWorkers;
    private int maxWorkers;
    
    private int jobsWorkerRatio;
    
    public WorkerManager(JobQueue queue, int minWorkers, int maxWorkers, int jobsWorkerRatio) {
        this.queue = queue;
        this.minWorkers = minWorkers;
        this.maxWorkers = maxWorkers;
        this.jobsWorkerRatio = jobsWorkerRatio;
        
        // Initialize workerpool
        for (int i = 0; i < minWorkers; i++) {
            this.addWorker();
        }
    }
    
    public void addWorker() {
        Worker worker = new Worker(queue);
        this.workers.offer(worker);
        worker.start();
    }
    
    public void shutdownWorker() {
        if (this.workers.size() > 0) {
            Worker worker = this.workers.poll();
            worker.shutdown();
        }
    }
    
    public void run() {
        this.scaleWorkers();
    }
    
    public void scaleWorkers() {
        while(true) {
            int queueSize = this.queue.size();
            int workerCount = this.workers.size();
            
            if ((workerCount + 1) * jobsWorkerRatio < queueSize && workerCount < this.maxWorkers) {
                System.out.println("[WorkerManager] Too much work, starting extra worker.");
                this.addWorker();
            }
            
            if ((workerCount - 1) * jobsWorkerRatio > queueSize && workerCount > this.minWorkers) {
                System.out.println("[WorkerManager] Too much workers, shutting down 1 worker");
                this.shutdownWorker();
            }
            
            try {
                Thread.sleep(10 * 1000);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }

}

application.properties

management.security.enabled=false

 

We monitor two metrics here: the size of the job queue, and how long each job in the queue takes to execute.
Queue size is instrumented in JobQueue: pushing a job onto the queue increments the gauge by 1, pulling one off decrements it by 1.
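The queue-size gauge is simply a counter that can go both up and down, updated alongside the queue operations. A minimal thread-safe sketch of the same idea without the Prometheus client (class name is mine):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

public class GaugedQueue<T> {
    private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
    private final AtomicLong gauge = new AtomicLong(); // stand-in for the job_queue_size Gauge

    public void push(T item) {
        queue.offer(item);
        gauge.incrementAndGet();       // inc() on push, as in JobQueue.push
    }

    public T pull() {
        T item = queue.poll();
        if (item != null) {
            gauge.decrementAndGet();   // dec() only when a job was actually taken
        }
        return item;
    }

    public long gaugeValue() { return gauge.get(); }

    public static void main(String[] args) {
        GaugedQueue<String> q = new GaugedQueue<>();
        q.push("job-1");
        q.push("job-2");
        q.pull();
        System.out.println(q.gaugeValue()); // 1
    }
}
```

Note that pulling from an empty queue leaves the gauge untouched, mirroring the null check in JobQueue.pull.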

2. Configure and run Prometheus
Add a scrape job for instrumentation-example:

- job_name: 'instrumentation-example'
  metrics_path: /prometheus
  static_configs:
  - targets: ['localhost:8080']
Run Prometheus

./prometheus.exe
Confirm via Prometheus -> Status (Configuration and Targets) that the config is correct

3. Generate test data and query metrics
Verify instrumentation-example is UP (value 1)

up{job="instrumentation-example"}
Run queueUpJobs.sh to generate 100 jobs

./queueUpJobs.sh
Query the job_queue_size curve (adjust the time range to 5m):

job_queue_size{job="instrumentation-example"}
Query the 90th percentile of job completion latency:

histogram_quantile(0.90, rate(jobs_completion_duration_seconds_bucket{job="instrumentation-example"}[5m]))
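The Worker class uses linearBuckets(4, 1, 16): 16 bucket upper bounds starting at 4 seconds in steps of 1 (4, 5, …, 19), which covers the simulated 5-15 second job durations so the 90th-percentile query above resolves within that range. The bound generation, sketched without the client library (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class LinearBuckets {
    // Mirrors Histogram.linearBuckets(start, width, count): bounds are start + width * i
    static List<Double> bounds(double start, double width, int count) {
        List<Double> result = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            result.add(start + width * i);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(bounds(4, 1, 16)); // [4.0, 5.0, ..., 19.0]
    }
}
```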

132. [Lab] NodeExporter system monitoring~1.mp4

1. Download and run wmi-exporter
Download wmi_exporter-amd64 and unpack it to a local directory

Verify the metrics endpoint

http://localhost:9182/metrics

2. Configure and run Prometheus
In the Prometheus install directory

add a scrape job for wmi-exporter to prometheus.yml

# my global config
global:
  scrape_interval:     5s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 5s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "simulator_alert_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'wmi-exporter'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:9182']

 

3. Grafana Dashboard for wmi-exporter
Start the Grafana server from its install directory

./bin/grafana-server.exe
Log in to the Grafana UI (admin/admin)

http://localhost:3000
Import the wmi-exporter dashboard via Grafana's + icon:

grafana id = 2129
Be sure to select the prometheus data source
View the Windows Node dashboard.

4. References

Grafana dashboard repository

https://grafana.com/dashboards

Entering 2129 automatically imports the dashboard.

The Prometheus data source must be selected here.

133. [Lab] Spring Boot Actuator monitoring~1.mp4

Lab steps
1. Run Spring Boot + Actuator
Import the actuatordemo application into the Eclipse IDE

Review the actuatordemo code

Run actuatordemo as a Spring Boot app

Verify the metrics endpoint

http://localhost:8080/prometheus
2. Configure and run Prometheus
In the Prometheus install directory

add a scrape job for actuator-demo to prometheus.yml

- job_name: 'actuator-demo'
  metrics_path: '/prometheus'
  static_configs:
  - targets: ['localhost:8080']
Run Prometheus

./prometheus.exe
Open the Prometheus UI

http://localhost:9090
Confirm via Prometheus -> Status (Configuration and Targets) that the config is correct

Via Prometheus -> Graph, verify actuator-demo is UP (value 1)

up{job="actuator-demo"}
3. Grafana Dashboard for JVM (Micrometer)
Start the Grafana server from its install directory

./bin/grafana-server.exe
Log in to the Grafana UI (admin/admin)

http://localhost:3000
Import the JVM (Micrometer) dashboard via Grafana's + icon:

grafana id = 4701
Be sure to select the prometheus data source
View the JVM (Micrometer) dashboard.

4. References
Grafana dashboard repository

https://grafana.com/dashboards
Micrometer Prometheus support

https://micrometer.io/docs/registry/prometheus
Micrometer Spring Boot 1.5 support

https://micrometer.io/docs/ref/spring/1.5

The full code of the program follows.

First, let's look at the pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>io.spring2go.promdemo</groupId>
    <artifactId>actuatordemo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>actuatordemo</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.17.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
          <groupId>io.micrometer</groupId>
          <artifactId>micrometer-spring-legacy</artifactId>
          <version>1.0.6</version>
        </dependency>
        
        <dependency>
          <groupId>io.micrometer</groupId>
          <artifactId>micrometer-registry-prometheus</artifactId>
          <version>1.0.6</version>
        </dependency>        
        
        <dependency>
            <groupId>io.github.mweirauch</groupId>
            <artifactId>micrometer-jvm-extras</artifactId>
            <version>0.1.2</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>


</project>

 

Set up Prometheus and a Grafana dashboard on your own machine to visualize all the metrics produced by the Spring Boot application.

Spring Boot uses Micrometer, an application metrics facade, to integrate actuator metrics with external monitoring systems.
It supports many monitoring systems, e.g. Netflix Atlas, AWS CloudWatch, Datadog, InfluxData, SignalFx, Graphite, Wavefront, and Prometheus.
To integrate with Prometheus, you add the micrometer-registry-prometheus dependency.

In Spring Boot 2.x, spring-boot-starter-actuator builds on io.micrometer, which reworks the 1.x metrics. Its key feature is tag/label support; paired with a tag-aware monitoring system, this makes multi-dimensional querying, aggregation, and alerting on metrics much more convenient.
spring-boot-starter-actuator provides the Prometheus endpoint out of the box, so there is no need to reinvent the wheel.

ActuatordemoApplication

package io.spring2go.promdemo.actuatordemo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.spring.autoconfigure.MeterRegistryCustomizer;

@SpringBootApplication
@Controller
public class ActuatordemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(ActuatordemoApplication.class, args);
    }
    
    @RequestMapping(value = "/hello-world")
    public @ResponseBody String sayHello() {
        return "hello, world";
    }
    
    @Bean
    MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return registry -> registry.config().commonTags("application", "actuator-demo");
    }
}

 

The MeterRegistryCustomizer above must be registered as a bean to customize the registry.

The common tag key must be named application.

application.properties

endpoints.sensitive=false

This allows accessing the actuator endpoints without a username and password.

Next, start the application.

To add your own instrumentation, call the Micrometer API from inside the Spring Boot application.

 

 
         

# my global config
global:
  scrape_interval:     5s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 5s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  #- "simulator_alert_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'actuator-demo'
    metrics_path: /prometheus
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:8080']

 
        

 

The job_name 'actuator-demo' here should correspond to the value set in the code by registry.config().commonTags("application", "actuator-demo");
With that, the whole monitoring pipeline is up and running.

The JVM dashboard has Grafana id 4701.

Import 4701, and be sure to select the Prometheus data source.

134. Prometheus monitoring best practices~1.mp4

Examples: the number of HTTP requests, average HTTP request latency, the number of HTTP 500 errors; CPU, memory, and disk usage, and so on.

Online serving systems (request/response): front-end web systems; track requests, errors, DB and cache calls, request latency, etc.

Offline serving systems: task-queue systems; track worker-thread utilization and task execution.

Batch jobs: batch-processing systems; Prometheus pulls their data from a push gateway.

Prometheus high availability

By default Prometheus retains only 15 days of data; anything beyond that window needs to be handled separately.

 

135. Comparison of mainstream open-source time-series databases~1.mp4

137. Microservice monitoring summary~1.mp4

System layer: monitoring of the underlying hardware and operating system; Prometheus covers OS-level monitoring.

Application layer: message queues, Redis, MySQL, Spring Boot; instrument with Prometheus; CAT and SkyWalking handle application-layer monitoring; ELK can aggregate system logs for log-based monitoring.

Business layer: business-level statistics, e.g. transfers and similar business metrics; Prometheus covers these as well.

End-user experience: how long customers take to load pages; tools such as TingYun.

The monitoring architecture for microservices

The left-hand side shows monitoring applied to a microservice architecture.

There are three main monitoring categories: logs, traces, and metrics.

Logs: after the microservices emit logs, Logstash collects them, a Kafka queue buffers them so no messages are lost, and they are stored in Elasticsearch.

Traces: for following calls through a distributed call chain.

Metrics: instrument the microservices, scrape the metrics data, and then alert on it.

Prometheus can also monitor middleware such as Kafka and CAT, and raise alerts on it.

 

 

 

 

 


 

