A good memory is no match for a worn pen: writing it down helps others and my future self.
1. Set up the Prometheus container:
docker run -d --name prometheus -p 9090:9090 --network grafana -v $PWD/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
Notes:
① The default prometheus.yml can be copied out of the container and mounted back in, or edited directly. The service I want to scrape runs on the host machine. There is a pitfall here: to reach the host from inside a container, you cannot use localhost. Either run the container with the host's network namespace (not recommended), or simply put the host's internal IP address in the config.
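For example, a quick way to look up that internal IP on the host itself (a sketch; `hostname -I` is available on most Linux distributions but not on macOS):

```shell
# Print the host's internal IPv4 addresses; the first one is
# usually the address to put into the prometheus.yml target.
hostname -I | awk '{print $1}'
```

Alternatively, on Docker 20.10+ you can start the container with `--add-host=host.docker.internal:host-gateway` and write `host.docker.internal` in the config instead of a hard-coded IP.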
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'python-modules'
    static_configs:
    - targets: ['ip:80']
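Before (re)starting Prometheus it is worth validating the edited file. One way, sketched here, is to borrow the promtool binary that ships inside the prom/prometheus image, so nothing extra needs installing on the host:

```shell
# Validate prometheus.yml with promtool from the official image.
# A non-zero exit code means the config has a syntax or semantic error.
docker run --rm \
  -v $PWD/prometheus.yml:/etc/prometheus/prometheus.yml \
  --entrypoint promtool \
  prom/prometheus check config /etc/prometheus/prometheus.yml
```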
② Create a docker network named grafana, so that Prometheus and the grafana container live in the same network space.
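The command for this is below; run it before the docker run in step 1, since `--network grafana` requires the network to already exist:

```shell
# Create a user-defined bridge network. Containers attached to it can
# reach each other by container name (e.g. http://prometheus:9090).
docker network create grafana
```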
2. Start the grafana container:
docker run -d --name grafana --network grafana -p 3000:3000 grafana/grafana:6.6.2
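To confirm Grafana actually came up, you can hit its health endpoint (assuming port 3000 is mapped as above):

```shell
# Grafana's health endpoint; "database": "ok" in the JSON reply
# means the instance is up and ready.
curl -s http://localhost:3000/api/health
```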
3. In Grafana, configure Prometheus as a data source, then build a dashboard on top of it. A finished example looks like this:
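Instead of clicking through the UI, the data source can also be created via Grafana's HTTP API. A sketch, assuming the default admin:admin credentials and the shared network from note ② (which lets Grafana reach Prometheus by container name):

```shell
# Register Prometheus as a data source over Grafana's HTTP API.
# "proxy" access means Grafana's backend queries Prometheus itself,
# so the container name "prometheus" resolves on the shared network.
curl -s -X POST http://admin:admin@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "Prometheus",
        "type": "prometheus",
        "url": "http://prometheus:9090",
        "access": "proxy",
        "isDefault": true
      }'
```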