Logstash writes logs but Elasticsearch does not respond


  When parsing a large volume of logs and writing them to Elasticsearch, the cluster can stop responding, limited by factors such as the number of back-end data nodes and disk performance.

Problem description:

[2018-04-12T17:02:16,861][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://x.x.x.x:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://x.x.x.x:9200/, :error_message=>"Elasticsearch Unreachable: [http://x.x.x.x:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
Attempted to send a bulk request  to elasticsearch, but no there are no living connections in the connection pool

Solutions:

  1) You should run Logstash separately from the ES cluster, as both can use a lot of CPU resources. // Keep Logstash and ES off the same machine; Logstash parsing consumes a lot of CPU
  2) You should also have more than one node in the ES cluster, so Logstash can use the other ES nodes when one node is not accessible. // Add more ES data nodes

  3) Buffer the log queue in Kafka and throttle the consumers, so that writes to Elasticsearch stay steady.
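As a sketch of points 2) and 3), a Logstash pipeline could read from Kafka and fan writes out across several ES data nodes. The broker addresses, topic, host names, and index pattern below are placeholders, not values from the original setup:

```conf
# Hypothetical pipeline sketch, assuming a Kafka buffer in front of Logstash
# and a three-data-node ES cluster.
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"  # assumed broker list
    topics            => ["app-logs"]               # assumed topic name
    group_id          => "logstash-es"
    consumer_threads  => 2    # keep consumption modest so ES is not flooded
  }
}

output {
  elasticsearch {
    # Multiple hosts let the output fail over when one node is marked dead
    hosts   => ["http://es-data1:9200", "http://es-data2:9200", "http://es-data3:9200"]
    index   => "app-logs-%{+YYYY.MM.dd}"
    timeout => 120            # raise the read timeout behind Manticore::SocketTimeout
  }
}
```

Listing several `hosts` lets the elasticsearch output route bulk requests to the remaining nodes when one URL is marked dead, and the Kafka buffer absorbs bursts instead of back-pressuring the parsers.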

 

