We use Prometheus + Redis Exporter to monitor Redis. Since the scrape itself consumes very few resources, running a dedicated exporter for every Redis server would be a serious waste of resources.
We had the following requirements:
1. The Redis instances are password-protected; to avoid transmitting passwords in plain text, the password information is managed in Apollo.
2. The set of Redis instances to monitor must be configurable dynamically.
3. All instances share a single Redis Exporter.
To meet them, we made the following simple adjustments to Redis Exporter.
Monitoring adjustments:
Deployment and configuration steps:
Step 1: Configure Apollo
Store the passwords of the Redis instances to be monitored in Apollo (instances without a password can be omitted):
{ "redis://a.abc.com:6385": "password", "redis://a.abc.com:6378": "", "redis://a.abc.com:6379": "" }
Step 2: Deploy Redis Exporter (running in Kubernetes)
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: monitoring-redis
  name: "redis-one-multi"
  labels:
    name: redis-one-multi
spec:
  replicas: 3
  selector:
    matchLabels:
      name: redis-one-multi
  template:
    metadata:
      labels:
        name: redis-one-multi
    spec:
      containers:
      - name: redis-one-multi
        image: "#harbor/image/redis_exporter:v1.5.9"
        command:
        - /redis_exporter
        args:
        - --redis.addr=""
        - --redis-only-metrics
        env:
        - name: APOLLO_CONFIG_SERVER
          value: "http://apollo-config.system-service.domain.com"
        - name: APP_ID
          value: "redis-exporter-go"
        - name: APOLLO_CLUSTER
          value: "default"
        - name: APOLLO_NS_CONFIG
          value: "redis-pwd.json"
        resources:
          requests:
            cpu: 1000m
            memory: 1024Mi
          limits:
            cpu: 1000m
            memory: 1024Mi
        ports:
        - name: http
          containerPort: 9121
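The APOLLO_* environment variables above tell the exporter where to load the password namespace from. The sketch below shows one hedged way to do that via Apollo's cached HTTP configuration interface ({server}/configfiles/json/{appId}/{cluster}/{namespace}); it is an assumption about the integration, not code from the fork, so check the endpoint against your Apollo deployment:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchApolloNamespace retrieves the raw content of an Apollo namespace using the
// connection details injected by the Deployment above. Sketch only: caching and
// retries are omitted, and the endpoint path is an assumption to verify.
func fetchApolloNamespace() ([]byte, error) {
	server := os.Getenv("APOLLO_CONFIG_SERVER") // e.g. http://apollo-config.system-service.domain.com
	appID := os.Getenv("APP_ID")                // e.g. redis-exporter-go
	cluster := os.Getenv("APOLLO_CLUSTER")      // e.g. default
	namespace := os.Getenv("APOLLO_NS_CONFIG")  // e.g. redis-pwd.json

	url := fmt.Sprintf("%s/configfiles/json/%s/%s/%s", server, appID, cluster, namespace)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("apollo returned %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	body, err := fetchApolloNamespace()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}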
Step 3: Specify the Redis servers to monitor in the Prometheus scrape configuration. The relabel rules pass each original target address to the exporter as the target query parameter, keep it in the instance label, and point the actual scrape at the shared exporter on 127.0.0.1:9121.
## config for the multiple Redis targets that the exporter will scrape
- job_name: 'redis_exporter_targets'
  metrics_path: /scrape
  scrape_interval: '15s'
  scrape_timeout: '15s'
  scheme: 'http'
  static_configs:
  - targets:
    - redis://a.abc.com:6385
    - redis://a.abc.com:6378
    - redis://a.abc.com:6379
  relabel_configs:
  - source_labels: [__address__]
    target_label: __param_target
  - source_labels: [__param_target]
    target_label: instance
  - target_label: __address__
    replacement: 127.0.0.1:9121
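To make the relabeling concrete: after the rules above, Prometheus no longer contacts the redis:// targets directly; each one becomes a request to the shared exporter with the original address carried in the target query parameter. A small illustration of the resulting scrape URLs (assuming Prometheus reaches the exporter at 127.0.0.1:9121):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// What Prometheus effectively requests after relabeling, for each static target.
	targets := []string{
		"redis://a.abc.com:6385",
		"redis://a.abc.com:6378",
		"redis://a.abc.com:6379",
	}
	for _, t := range targets {
		u := url.URL{
			Scheme:   "http",
			Host:     "127.0.0.1:9121", // __address__ after relabeling
			Path:     "/scrape",        // metrics_path
			RawQuery: url.Values{"target": {t}}.Encode(),
		}
		fmt.Println(u.String())
	}
}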
[Self-check]
When the exporter receives an HTTP scrape request, it looks up the password for the given target in the Apollo-managed configuration and then queries that Redis instance, e.g.: http://localhost:9121/scrape?target=redis://a.abc.com:6385
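The same check can be scripted; a minimal sketch in Go, assuming the exporter is reachable on localhost:9121 and the target above is configured:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask the shared exporter to scrape one of the configured Redis targets.
	resp, err := http.Get("http://localhost:9121/scrape?target=redis://a.abc.com:6385")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// A healthy exporter answers 200 and returns Prometheus-format metrics,
	// typically including a redis_up metric for the requested target.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}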
Project repository:
https://github.com/schangech/redis_exporter