Install the monitoring plugin
```shell
wget https://github.com/justwatchcom/elasticsearch_exporter/releases/download/v1.0.4rc1/elasticsearch_exporter-1.0.4rc1.linux-amd64.tar.gz
tar -zxvf elasticsearch_exporter-1.0.4rc1.linux-amd64.tar.gz
cd elasticsearch_exporter-1.0.4rc1.linux-amd64/
nohup ./elasticsearch_exporter --web.listen-address ":9109" --es.uri http://192.168.50.153:9200 &
```
Once the exporter is up, visit http://192.168.50.153:9109/metrics to see the scraped metrics.
Monitoring metrics
## Search and indexing performance

| Metric | Description |
|---|---|
| elasticsearch_indices_search_query_total | Total number of queries (throughput) |
| elasticsearch_indices_search_query_time_seconds | Total time spent on queries (performance) |
| elasticsearch_indices_search_fetch_total | Total number of fetches |
| elasticsearch_indices_search_fetch_time_seconds | Total time spent on fetches |

## Indexing requests

| Metric | Description |
|---|---|
| elasticsearch_indices_indexing_index_total | Total number of documents indexed |
| elasticsearch_indices_indexing_index_time_seconds_total | Total time spent indexing documents |
| elasticsearch_indices_indexing_delete_total | Total number of documents deleted from indices |
| elasticsearch_indices_indexing_delete_time_seconds_total | Total time spent deleting documents from indices |
| elasticsearch_indices_refresh_total | Total number of index refreshes |
| elasticsearch_indices_refresh_time_seconds_total | Total time spent refreshing indices |
| elasticsearch_indices_flush_total | Total number of index flushes to disk |
| elasticsearch_indices_flush_time_seconds | Total (cumulative) time spent flushing indices to disk |

## JVM memory and garbage collection

| Metric | Description |
|---|---|
| elasticsearch_jvm_gc_collection_seconds_sum | GC run time in seconds |
| elasticsearch_jvm_gc_collection_seconds_count | Count of JVM GC runs |
| elasticsearch_jvm_memory_committed_bytes | JVM memory currently committed, by area |
| elasticsearch_jvm_memory_max_bytes | Configured maximum JVM memory |
| elasticsearch_jvm_memory_pool_max_bytes | Maximum size of the JVM memory pool |
| elasticsearch_jvm_memory_pool_peak_max_bytes | Peak maximum size of the JVM memory pool |
| elasticsearch_jvm_memory_pool_peak_used_bytes | Peak JVM memory used, by pool |
| elasticsearch_jvm_memory_pool_used_bytes | JVM memory currently used, by pool |
| elasticsearch_jvm_memory_used_bytes | JVM memory currently used, by area |

## Cluster health and node availability

| Metric | Description |
|---|---|
| elasticsearch_cluster_health_status | Cluster status: green (all primary and replica shards are allocated), yellow (all primaries allocated, but not all replicas), red (some primary shards are not allocated); the series with value 1 marks the current status |
| elasticsearch_cluster_health_number_of_data_nodes | Number of data nodes |
| elasticsearch_cluster_health_number_of_in_flight_fetch | Number of ongoing shard info requests |
| elasticsearch_cluster_health_number_of_nodes | Total number of nodes in the cluster |
| elasticsearch_cluster_health_number_of_pending_tasks | Cluster-level changes that have not yet been executed |
| elasticsearch_cluster_health_initializing_shards | Number of shards being initialized |
| elasticsearch_cluster_health_unassigned_shards | Number of unassigned shards |
| elasticsearch_cluster_health_active_primary_shards | Number of active primary shards |
| elasticsearch_cluster_health_active_shards | Total number of active shards (including replica shards) |
| elasticsearch_cluster_health_relocating_shards | Number of shards relocating to other nodes; normally 0, it rises when nodes join or leave the cluster |

## Resource saturation

| Metric | Description |
|---|---|
| elasticsearch_thread_pool_completed_count | Thread pool operations completed (bulk, index, search, force_merge) |
| elasticsearch_thread_pool_active_count | Active threads in the pool (bulk, index, search, force_merge) |
| elasticsearch_thread_pool_largest_count | Largest number of threads in the pool (bulk, index, search, force_merge) |
| elasticsearch_thread_pool_queue_count | Queued tasks in the pool (bulk, index, search, force_merge) |
| elasticsearch_thread_pool_rejected_count | Rejected tasks in the pool (bulk, index, search, force_merge) |
| elasticsearch_indices_fielddata_memory_size_bytes | Size of the fielddata cache, in bytes |
| elasticsearch_indices_fielddata_evictions | Number of evictions from the fielddata cache |
| elasticsearch_indices_filter_cache_evictions | Number of evictions from the filter cache (2.x only) |
| elasticsearch_indices_filter_cache_memory_size_bytes | Size of the filter cache, in bytes (2.x only) |
| elasticsearch_cluster_health_number_of_pending_tasks | Number of pending tasks |
| elasticsearch_indices_get_time_seconds | Total time spent on GET requests |
| elasticsearch_indices_get_missing_total | Total number of GET requests for missing documents |
| elasticsearch_indices_get_missing_time_seconds | Total time spent on GET requests for missing documents |
| elasticsearch_indices_get_exists_time_seconds | Total time spent on GET requests for existing documents |
| elasticsearch_indices_get_exists_total | Total number of GET requests for existing documents |
| elasticsearch_indices_get_total | Total number of GET requests |

## Host-level system and network metrics

| Metric | Description |
|---|---|
| elasticsearch_process_cpu_percent | Percent CPU used by the Elasticsearch process |
| elasticsearch_filesystem_data_free_bytes | Free space on the block device, in bytes |
| elasticsearch_process_open_files_count | Open file descriptors of the Elasticsearch process |
| elasticsearch_transport_rx_packets_total | Count of packets received (inbound inter-node traffic) |
| elasticsearch_transport_tx_packets_total | Count of packets sent (outbound inter-node traffic) |
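To make the metric names above concrete, here is a hedged sketch of how a sample line from the exporter's `/metrics` endpoint maps to metric name, labels, and value. The sample values below are invented for illustration, and the parser is deliberately simplified (it does not handle label values containing spaces or escaped quotes):

```python
def parse_metrics(text):
    """Return {(metric_name, labels_str): value} for each sample line."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        name_part, value = line.rsplit(" ", 1)
        if "{" in name_part:
            name, labels = name_part.split("{", 1)
            labels = labels.rstrip("}")
        else:
            name, labels = name_part, ""
        samples[(name, labels)] = float(value)
    return samples

# Invented sample in Prometheus exposition format:
sample = """\
# HELP elasticsearch_cluster_health_status Cluster health status
# TYPE elasticsearch_cluster_health_status gauge
elasticsearch_cluster_health_status{cluster="es",color="green"} 1
elasticsearch_cluster_health_status{cluster="es",color="red"} 0
elasticsearch_indices_search_query_total{cluster="es"} 12345
"""

metrics = parse_metrics(sample)
# The color series whose gauge equals 1 is the current cluster status.
```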
Prometheus configuration
```yaml
- job_name: 'elasticsearch'
  scrape_interval: 60s
  scrape_timeout: 30s
  metrics_path: "/metrics"
  static_configs:
    - targets:
        - '192.168.50.153:9109'
      labels:
        service: elasticsearch
  relabel_configs:
    - source_labels: [__address__]
      regex: '(.*)\:9109'
      target_label: 'instance'
      replacement: '$1'
    - source_labels: [__address__]
      regex: '.*\.(.*)\.lan.*'
      target_label: 'environment'
      replacement: '$1'
```
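The first `relabel_configs` rule copies the scrape address into the `instance` label minus the `:9109` port; the second only fires for `*.lan` hostnames, so it leaves IP-only targets untouched. Prometheus fully anchors relabeling regexes, which `re.fullmatch` mimics in this quick illustration:

```python
import re

# Prometheus anchors relabeling regexes at both ends; fullmatch does the same.
addr = "192.168.50.153:9109"
m = re.fullmatch(r"(.*)\:9109", addr)
instance = m.group(1)  # replacement '$1' becomes the bare IP
print(instance)  # 192.168.50.153

# The environment rule does not match an IP-only address, so no label is set:
env = re.fullmatch(r".*\.(.*)\.lan.*", addr)
print(env)  # None
```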
Then run the script that reloads the Prometheus configuration:
```shell
./reload-prometheus.sh
```
Grafana dashboard template
https://grafana.com/dashboards/2322
Alerting rules
```yaml
groups:
- name: elasticsearchStatsAlert
  rules:
  - alert: Elastic_Cluster_Health_RED
    expr: elasticsearch_cluster_health_status{color="red"} == 1
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "Instance {{ $labels.instance }}: not all primary and replica shards are allocated in elasticsearch cluster {{ $labels.cluster }}"
      description: "Instance {{ $labels.instance }}: not all primary and replica shards are allocated in elasticsearch cluster {{ $labels.cluster }}."
  - alert: Elastic_Cluster_Health_Yellow
    expr: elasticsearch_cluster_health_status{color="yellow"} == 1
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "Instance {{ $labels.instance }}: not all replica shards are allocated in elasticsearch cluster {{ $labels.cluster }}"
      description: "Instance {{ $labels.instance }}: not all replica shards are allocated in elasticsearch cluster {{ $labels.cluster }}."
  - alert: Elasticsearch_JVM_Heap_Too_High
    expr: elasticsearch_jvm_memory_used_bytes{area="heap"} / elasticsearch_jvm_memory_max_bytes{area="heap"} > 0.8
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "ElasticSearch node {{ $labels.instance }} heap usage is high"
      description: "The heap in {{ $labels.instance }} is over 80% for 1m."
  - alert: Elasticsearch_health_up
    expr: elasticsearch_cluster_health_up != 1
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "ElasticSearch node {{ $labels.instance }}: last scrape of the ElasticSearch cluster health failed"
      description: "ElasticSearch node {{ $labels.instance }}: last scrape of the ElasticSearch cluster health failed"
  - alert: Elasticsearch_Too_Few_Nodes_Running
    expr: elasticsearch_cluster_health_number_of_nodes < 12
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "There are only {{ $value }} < 12 ElasticSearch nodes running"
      description: "ElasticSearch running on less than 12 nodes (total 14)"
  - alert: Elasticsearch_Count_of_JVM_GC_Runs
    expr: rate(elasticsearch_jvm_gc_collection_seconds_count{}[5m]) > 5
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "ElasticSearch node {{ $labels.instance }}: Count of JVM GC runs > 5 per sec and has a value of {{ $value }}"
      description: "ElasticSearch node {{ $labels.instance }}: Count of JVM GC runs > 5 per sec and has a value of {{ $value }}"
  - alert: Elasticsearch_GC_Run_Time
    expr: rate(elasticsearch_jvm_gc_collection_seconds_sum[5m]) > 0.3
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "ElasticSearch node {{ $labels.instance }}: GC run time in seconds > 0.3 sec and has a value of {{ $value }}"
      description: "ElasticSearch node {{ $labels.instance }}: GC run time in seconds > 0.3 sec and has a value of {{ $value }}"
  - alert: Elasticsearch_json_parse_failures
    expr: elasticsearch_cluster_health_json_parse_failures > 0
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "ElasticSearch node {{ $labels.instance }}: json parse failures > 0 and has a value of {{ $value }}"
      description: "ElasticSearch node {{ $labels.instance }}: json parse failures > 0 and has a value of {{ $value }}"
  - alert: Elasticsearch_breakers_tripped
    expr: rate(elasticsearch_breakers_tripped{}[5m]) > 0
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "ElasticSearch node {{ $labels.instance }}: breakers tripped > 0 and has a value of {{ $value }}"
      description: "ElasticSearch node {{ $labels.instance }}: breakers tripped > 0 and has a value of {{ $value }}"
  - alert: Elasticsearch_health_timed_out
    expr: elasticsearch_cluster_health_timed_out > 0
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "ElasticSearch node {{ $labels.instance }}: Number of cluster health checks timed out > 0 and has a value of {{ $value }}"
      description: "ElasticSearch node {{ $labels.instance }}: Number of cluster health checks timed out > 0 and has a value of {{ $value }}"
```
Monitoring Elasticsearch 7.x
Since Elasticsearch 7.0.0:

```shell
./bin/elasticsearch-plugin install -b https://github.com/vvanholl/elasticsearch-prometheus-exporter/releases/download/7.2.1.0/prometheus-exporter-7.2.1.0.zip
```

Since Elasticsearch 6.0.0:

```shell
./bin/elasticsearch-plugin install -b https://github.com/vvanholl/elasticsearch-prometheus-exporter/releases/download/6.8.0.0/prometheus-exporter-6.8.0.0.zip
```

On Elasticsearch 5.x.x:

```shell
./bin/elasticsearch-plugin install -b https://github.com/vvanholl/elasticsearch-prometheus-exporter/releases/download/5.6.16.0/elasticsearch-prometheus-exporter-5.6.16.0.zip
```

On old 2.x.x versions:

```shell
./bin/plugin install https://github.com/vvanholl/elasticsearch-prometheus-exporter/releases/download/2.4.1.0/elasticsearch-prometheus-exporter-2.4.1.0.zip
```

Do not forget to restart the node after the installation!

Note that the plugin needs the following special permissions:

- java.lang.RuntimePermission accessClassInPackage.sun.misc
- java.lang.RuntimePermission accessDeclaredMembers
- java.lang.reflect.ReflectPermission suppressAccessChecks

If you have a lot of indices and think this data is irrelevant, you can disable per-index metrics in the main configuration file with `prometheus.indices: false`. To disable exporting cluster settings use `prometheus.cluster.settings: false`.
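The two plugin settings mentioned above go in the node's main configuration file (`elasticsearch.yml`, per the plugin README); a minimal fragment, assuming both features should be turned off:

```yaml
# elasticsearch.yml: optional settings for the prometheus-exporter plugin
prometheus.indices: false           # skip per-index metrics
prometheus.cluster.settings: false  # skip exporting cluster settings
```

Restart the node afterwards for the change to take effect.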
Uninstall

Since Elasticsearch 6.0.0:

```shell
./bin/elasticsearch-plugin remove prometheus-exporter
```

On Elasticsearch 5.x.x:

```shell
./bin/elasticsearch-plugin remove prometheus-exporter
```

On old 2.x.x versions:

```shell
./bin/plugin remove prometheus-exporter
```
Prometheus configuration
```yaml
- job_name: elasticsearch
  scrape_interval: 10s
  metrics_path: "/_prometheus/metrics"
  static_configs:
    - targets:
        - node1:9200
        - node2:9200
        - node3:9200
```
With basic-auth credentials:
```yaml
- job_name: 'elastic-cluster'
  scrape_interval: 10s
  metrics_path: '/_prometheus/metrics'
  static_configs:
    - targets:
        - 'node1:9200'
        - 'node2:9200'
        - 'node3:9200'
  basic_auth:
    username: 'elastic'
    password: 'elastic'
```
Grafana dashboard
https://grafana.com/grafana/dashboards/266