Details:
1 _riverStatus Import_fail
Problem: one index's data synchronization was incomplete; at http://192.168.1.17:9200/_plugin/head/, under Browser - river, the status showed _riverStatus Import_fail.
The elasticsearch log revealed that a few records had failed to sync due to exceptions. After fixing the data and rebuilding the index, synchronization returned to normal.
2 es_rejected_execution_exception <429>
This exception is mainly caused by too many requests: the ES thread pool runs out of capacity.
By default the bulk thread pool queue capacity is 50; this can be set higher.
Open elasticsearch.yml and append at the end:
threadpool:
  bulk:
    type: fixed
    size: 60
    queue_size: 1000
Then restart the service.
Also:
-- check the current thread pool settings --
curl -XGET "http://localhost:9200/_nodes/thread_pool/"
The non-bulk (index) thread pool can be changed the same way:
threadpool:
  index:
    type: fixed
    size: 30
    queue_size: 1000
One user gave a good explanation of this exception:
Elasticsearch has a thread pool and a queue for search per node. A thread pool has N workers ready to handle requests. When a request comes in and a worker is free, that worker handles it. By default the number of workers equals the number of CPU cores. When all workers are busy and more search requests arrive, the requests go to the queue. The queue size is also limited; its default is, say, 100, and if more parallel requests arrive than that, they are rejected, as you can see in the error log. The solutions are:
1. Increase the size of the queue or the thread pool. The immediate fix is to increase the size of the search queue. You can also increase the size of the thread pool, but that might badly affect the performance of individual queries, so growing the queue is usually the better idea. Remember, though, that the queue is memory-resident, and making it too large can cause out-of-memory issues.
2. Increase the number of nodes and replicas. Each node has its own search thread pool and queue, and a search can run on either a primary shard or a replica.
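The rejection mechanics described above can be modeled in a few lines (a simplified simulation, not real ES code; the worker count and queue size are illustrative):

```python
import queue

# Minimal model of the behavior described above: N workers plus a bounded
# queue; when both are full, further requests are rejected (ES answers 429).
workers = 2          # "size" in the thread pool settings
queue_size = 3       # "queue_size" in the thread pool settings
busy = 0
backlog = queue.Queue(maxsize=queue_size)

rejected = 0
for req in range(10):              # 10 requests arrive at once
    if busy < workers:
        busy += 1                  # a free worker picks it up
    else:
        try:
            backlog.put_nowait(req)  # otherwise it waits in the queue
        except queue.Full:
            rejected += 1            # queue full too -> rejected

print(busy, backlog.qsize(), rejected)  # 2 3 5
```

With 2 workers and a queue of 3, only 5 of the 10 simultaneous requests are accepted; the rest are rejected, which is exactly the es_rejected_execution_exception scenario.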
For more on thread pools, see: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html
3 create_failed_engine_exception <500>
A related shard is corrupted.
Delete the shard and rebuild it.
4 mapper_parsing_exception <400>
A field's format is incorrect and does not match the mapping.
Check the document's field formats. An incorrect format comes in two varieties: either the format does not match the mapping, or, for string fields, the value may contain illegal characters.
5 index_not_found_exception <404>
The index does not exist.
Create the index.
6 Result window is too large, from + size must be less than or equal to: [10000] but was [10000000].
The result window defaults to 10000, smaller than what the request needs, hence the error.
Two solutions. First, set index.max_result_window in elasticsearch.yml, or modify the settings of a specific index directly:
curl -XPUT http://127.0.0.1:9200/indexname/_settings -d '{ "index" : { "max_result_window" : 100000000}}'
Second, use the scroll API.
POST /twitter/tweet/_search?scroll=1m
{
  "size": 100,
  "query": {
    "match" : { "title" : "elasticsearch" }
  }
}
Take the scroll_id from the server's response; subsequent batches of results can then be fetched by issuing the following in a loop:
POST /_search/scroll
{
  "scroll" : "1m",
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
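The scroll loop amounts to: fetch a page, carry the scroll_id forward, and repeat until an empty page comes back. A minimal local simulation of that loop shape (no cluster involved; the cursor stands in for the server-side scroll_id state):

```python
# Simulated scroll pagination: a real cluster returns a scroll_id with each
# page, and the client resubmits it until a page comes back with no hits.
def scroll_pages(docs, size):
    """Yield successive pages of `docs`, `size` at a time, mimicking the
    scroll loop: keep asking until a page comes back empty."""
    cursor = 0  # stands in for the server-side scroll_id state
    while True:
        page = docs[cursor:cursor + size]
        if not page:
            break
        yield page
        cursor += size

pages = list(scroll_pages(list(range(250)), 100))
print([len(p) for p in pages])  # [100, 100, 50]
```

Unlike from + size, this walks the whole result set without ever holding more than one page's worth of results per request, which is why it sidesteps the result-window limit.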
For more on the scroll API, see: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html
7 illegal_argument_exception: number of documents in the index cannot exceed 2147483519 <400>
The number of documents in a shard has hit the ~2 billion cap, so no new documents can be inserted.
Rebuild the index with more shards, or add nodes.
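For reference, the odd-looking number in the error is Lucene's per-shard document cap: doc IDs are 32-bit signed integers, and Lucene reserves 128 values below Integer.MAX_VALUE:

```python
# Lucene caps documents per shard at Integer.MAX_VALUE - 128.
INT_MAX = 2**31 - 1              # 2147483647, Java's Integer.MAX_VALUE
max_docs_per_shard = INT_MAX - 128
print(max_docs_per_shard)        # 2147483519
```

The limit is per shard, not per index, which is why adding shards (or nodes carrying more shards) is the remedy.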
8 action_request_validation_exception: Validation Failed:1:no requests added <400>
This error usually appears during bulk indexing and means the request body is malformed: every data line must be terminated by a newline, and the last line must be followed by a newline as well.
Fix the format and re-run the bulk request.
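As a sketch of the required format, a bulk body can be assembled like this (the index and type names are illustrative):

```python
import json

def bulk_body(index, doc_type, docs):
    """Build a bulk request body: one action line plus one source line per
    document, every line terminated by \n -- including the final line."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"   # the trailing newline is mandatory

body = bulk_body("twitter", "tweet", [{"title": "a"}, {"title": "b"}])
print(body.endswith("\n"), body.count("\n"))  # True 4
```

Omitting the final newline is exactly what produces the "no requests added" validation failure.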
9 No Marvel Data Found (marvel error)
Usually someone has manually deleted Marvel data (for example by running a delete command in the Sense plugin), which breaks Marvel's collection: with half the day's data deleted, the other half of the day cannot be collected normally, so no statistics are produced. In this case, Marvel will work again the next day.
It may also be that port 9300 is occupied (Marvel uses port 9300 by default); in that case, find the process occupying port 9300, kill it, and restart Kibana.
10 Bad Request, you must reconsidered your request. <400>
Usually the data format is wrong.
11 Invalid numeric value: Leading zeroes not allowed\n <400>
This happens when an integer field is formatted incorrectly, for example an integer written as 0000. Check how each integer field's data is generated.
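The failure is easy to reproduce locally: JSON forbids leading zeroes in number literals, so a value like 0000 is rejected by any compliant parser (using Python's json module here as a stand-in for the server-side parser):

```python
import json

# A plain zero is valid JSON; leading zeroes are not.
assert json.loads('{"n": 0}') == {"n": 0}
try:
    json.loads('{"n": 0000}')  # leading zeroes -> parse error
    ok = False
except json.JSONDecodeError:
    ok = True
print(ok)  # True
```

If the documents are generated by formatting code (e.g. zero-padded counters), the fix is to emit the number unpadded, or quote it as a string if the mapping allows.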
12 #Deprecation: query malformed, empty clause found at [9:9]
The query is malformed: it contains an empty pair of braces.
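A rough pre-flight check is to walk the query body and report any empty {} objects before sending it (a sketch; note that some clauses, such as match_all, legitimately take an empty object, so treat the output as hints rather than errors):

```python
# Walk a query body and report the paths of any empty {} clauses,
# which newer ES versions flag as malformed.
def find_empty_clauses(node, path="$"):
    hits = []
    if isinstance(node, dict):
        if not node:
            hits.append(path)
        for key, value in node.items():
            hits.extend(find_empty_clauses(value, f"{path}.{key}"))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            hits.extend(find_empty_clauses(value, f"{path}[{i}]"))
    return hits

query = {"query": {"bool": {"must": [{"match": {"title": "es"}}, {}]}}}
print(find_empty_clauses(query))  # ['$.query.bool.must[1]']
```

Empty clauses typically sneak in when query bodies are built programmatically and a branch contributes nothing.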
13 query:string_index_out_of_bounds_exception
Encountered this once while querying. It turned out the HTTP request was malformed: the URL contained an extra slash, yet this was the error reported. Noting it here for the record.
14 failed to obtain node locks
This error appears when starting multiple elasticsearch instances on the same node (Linux system): startup fails with "failed to obtain node locks".
Reposted from: https://www.cnblogs.com/jiu0821/p/6075833.html#_label0_7