Monitoring server performance with Prometheus + Grafana + node_exporter


Storage: Prometheus, a time-series database, scrapes and stores the metrics.

Data source: node_exporter exposes host metrics for Prometheus to scrape.

Visualization: Grafana displays the collected data.

Alerting: Alertmanager (not set up here).

 

1. wget node_exporter, extract it, and start it

wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz --no-check-certificate
tar -xf node_exporter-0.18.1.linux-amd64.tar.gz
cd node_exporter-0.18.1.linux-amd64
./node_exporter

 

If the service runs inside Docker, copy the files into the container and start it there:

docker cp ../node_exporterxxxxxx <dockername>:/    
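Running ./node_exporter in the foreground ties it to the shell session. On a non-Docker host, a minimal systemd unit keeps it running across logouts; the binary path and user below are assumptions, adjust them to your install:

```ini
# /etc/systemd/system/node_exporter.service -- minimal sketch; adjust path and user
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
User=nobody
ExecStart=/usr/local/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now node_exporter`.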

  

啟動后訪問 ip:9100/metrtics

node_cpu*: CPU usage

node_disk*: disk I/O

node_filesystem*: filesystem usage

node_load1: 1-minute load average

node_memory*: memory usage

node_network*: network traffic

node_time*: current system time

go_*: Go runtime metrics of node_exporter itself

process_*: process-level metrics of the node_exporter process
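Each line on /metrics follows the Prometheus text exposition format: a metric name with optional {labels}, then a value. A small sketch of pulling a single value out with standard shell tools (the sample lines below are made up; in practice you would pipe `curl -s http://ip:9100/metrics` into the same awk):

```shell
# A few sample lines in the format node_exporter serves (values are made up)
cat > /tmp/metrics.txt <<'EOF'
node_load1 0.52
node_memory_MemFree_bytes 1.0737418e+09
node_cpu_seconds_total{cpu="0",mode="idle"} 36218.3
EOF

# Extract the 1-minute load average: match the bare metric name, print the value
awk '$1 == "node_load1" {print $2}' /tmp/metrics.txt
# → 0.52
```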


2. Write the prometheus.yml file and mount it into the container when starting Prometheus.

vi prometheus.yml

node_exporter must be installed on the monitored server itself. In the config below, 9100 is node_exporter, and 8081 is the port exposed by my locally started mall-portal service.

global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
        labels:
          instance: prometheus

  - job_name: linux
    static_configs:
      - targets: ['47.112.188.174:9100']
        labels:
          instance: node

  - job_name: 'spring'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['47.112.188.174:8081']

  - job_name: consul
    consul_sd_configs:
      - server: '47.112.188.174:8500'
        services: []
    relabel_configs:
      - source_labels: [__meta_consul_tags]
        regex: .*mall.*
        action: keep
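For the 'spring' job to have anything to scrape, the Spring Boot app (mall-portal here) must expose /actuator/prometheus through Micrometer. Assuming Spring Boot 2.x with the micrometer-registry-prometheus dependency already on the classpath, exposing the endpoint is a one-line configuration change:

```properties
# application.properties: expose the prometheus actuator endpoint
management.endpoints.web.exposure.include=health,info,prometheus
```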

  

Start Prometheus:

docker run --name prometheus -d -p 9090:9090 --privileged=true -v /usr/local/dockerdata/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus --config.file=/etc/prometheus/prometheus.yml

 

Prometheus can then be checked on port 9090; the scrape targets and their status are listed at:

http://47.112.188.174:9090/targets
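The same target health shown on the /targets page is available as JSON from the HTTP API (`curl -s http://47.112.188.174:9090/api/v1/targets`). A sketch of picking out per-target health, run against a trimmed, made-up sample of that response:

```shell
# Trimmed, hypothetical /api/v1/targets response saved locally
cat > /tmp/targets.json <<'EOF'
{"status":"success","data":{"activeTargets":[
{"scrapePool":"linux","health":"up"},
{"scrapePool":"spring","health":"down"}
]}}
EOF

# Crude per-target health extraction without jq (illustration only)
grep -o '"scrapePool":"[^"]*","health":"[^"]*"' /tmp/targets.json
```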


Ad-hoc queries can be run at http://47.112.188.174:9090/graph
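In the graph page, the node_exporter counters are usually wrapped in rate() and aggregated rather than plotted raw. A few example queries (metric names match node_exporter v0.18.x):

```promql
# CPU usage % per instance
100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100

# Memory usage %
(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100

# Network receive throughput, bytes/s
rate(node_network_receive_bytes_total[5m])
```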


Add Prometheus as a data source in Grafana, then build dashboards on top of it.
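Instead of clicking through the Grafana UI, the data source can also be provisioned from a file. A minimal sketch, using Grafana's default provisioning directory:

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://47.112.188.174:9090
    isDefault: true
```

After the data source is in place, a community node_exporter dashboard can be imported to get host graphs without building panels by hand.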


