Common Elasticsearch Errors


1. "read_only_allow_delete": "true"

When adding a document to an index, you may occasionally (in rare cases) run into the following error:

{
  "error": {
    "root_cause": [
      {
        "type": "cluster_block_exception",
        "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
      }
    ],
    "type": "cluster_block_exception",
    "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
  },
  "status": 403
}

This error says the index is currently in read-only mode. Checking the index settings confirms it:

GET z1/_settings
# The result looks like this
{
  "z1" : {
    "settings" : {
      "index" : {
        "number_of_shards" : "5",
        "blocks" : {
          "read_only_allow_delete" : "true"
        },
        "provided_name" : "z1",
        "creation_date" : "1556204559161",
        "number_of_replicas" : "1",
        "uuid" : "3PEevS9xSm-r3tw54p0o9w",
        "version" : {
          "created" : "6050499"
        }
      }
    }
  }
}

The settings show "read_only_allow_delete" : "true", which means no data can be inserted right now. We can also reproduce this error ourselves:

PUT z1
{
  "mappings": {
    "doc": {
      "properties": {
        "title": {
          "type":"text"
        }
      }
    }
  },
  "settings": {
    "index.blocks.read_only_allow_delete": true
  }
}

PUT z1/doc/1
{
  "title": "es is hard to learn"
}

If we now try to insert data, we get exactly the error shown at the beginning. So how do we fix it?

  • Free up disk space so usage drops below 85%. (Elasticsearch applies this block automatically when disk usage crosses the flood-stage watermark, 95% by default; 85% is the low watermark, at which new shard allocation stops.)
  • Reset the setting manually; see the official docs for details.

Here is one approach: reset the setting back to its default:

PUT z1/_settings
{
  "index.blocks.read_only_allow_delete": null
}

The index is now back to normal, and both indexing and querying work again.
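The disk-usage trigger behind this block can be sketched in Python. This is only a minimal illustration of the watermark logic, not Elasticsearch's actual implementation; the 85%/95% thresholds mirror the default low and flood-stage watermarks in 6.x.

```python
# Default disk watermarks in Elasticsearch 6.x, as fractions of total disk.
LOW_WATERMARK = 0.85          # new shard allocation to the node stops
FLOOD_STAGE_WATERMARK = 0.95  # indices get the read_only_allow_delete block

def disk_state(used_bytes, total_bytes):
    """Classify disk usage roughly the way the ES disk allocator does."""
    usage = used_bytes / total_bytes
    if usage >= FLOOD_STAGE_WATERMARK:
        return "read_only_allow_delete"  # writes blocked, as in the 403 error
    if usage >= LOW_WATERMARK:
        return "no_new_shards"
    return "ok"

# e.g. 96 GB used of 100 GB -> the index gets the read-only block
print(disk_state(96, 100))  # read_only_allow_delete
```

Once usage falls back below the watermark (or the setting is reset to null as above), writes succeed again.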

2. illegal_argument_exception

Sometimes, when running an aggregation, we see the following error:

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 0,
        "index": "z2",
        "node": "NRwiP9PLRFCTJA7w3H9eqA",
        "reason": {
          "type": "illegal_argument_exception",
          "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
        }
      }
    ],
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.",
      "caused_by": {
        "type": "illegal_argument_exception",
        "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [age] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
      }
    }
  },
  "status": 400
}

What is going on here? Aggregations cannot target a text field (fielddata is disabled on text fields by default). Take this example:

PUT z2/doc/1
{
  "age":"18"
}
PUT z2/doc/2
{
  "age":20
}

GET z2/doc/_search
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "my_sum": {
      "sum": {
        "field": "age"
      }
    }
  }
}

When we add a document to Elasticsearch (if the index already exists, the document is created or updated; otherwise the index is created first), the mapping type of the age field is decided at that moment. In the example above, when we add the first document (the z2 index does not exist yet), Elasticsearch automatically creates the index and infers a mapping for the age field: because the value "18" is a quoted JSON string, the field is mapped as text. From then on, age stays text, so the second document's value 20 is also indexed against a text mapping rather than the long type it appears to be. We can confirm this by inspecting the index's mappings:
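The dynamic type detection described above can be mimicked with a small sketch. This is a deliberate simplification of Elasticsearch's real detection rules (which also probe dates and numeric strings), used only to show why "18" and 20 are treated differently:

```python
def infer_es_type(value):
    """Simplified sketch of ES dynamic field-type detection for JSON values."""
    if isinstance(value, bool):   # bool must be checked before int in Python
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "float"
    if isinstance(value, str):
        return "text"  # quoted JSON values become text (plus a .keyword sub-field)
    raise TypeError("unsupported value")

# The first document fixes the mapping for the whole index:
print(infer_es_type("18"))  # text -> 'age' is mapped as text
print(infer_es_type(20))    # long -> but the mapping is already text,
                            #         so this value is indexed as text anyway
```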

GET z2/_mapping
# The mapping looks like this
{
  "z2" : {
    "mappings" : {
      "doc" : {
        "properties" : {
          "age" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          }
        }
      }
    }
  }
}

The result shows that age is mapped as text, and that type does not support aggregations by default, hence the error. The solutions are:

  • With dynamic mapping, the field types for the whole index are fixed by the first document you index. It is not the case that the first value is text and the second, unquoted value then becomes long. So make sure the first document uses the JSON types you actually want.
  • If that seems too fragile, create the mapping manually before indexing, declaring the type of each field up front, so later documents cannot go wrong.
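The second fix, declaring the mapping up front, might look like this. The request body below matches the 6.x mapping format used elsewhere in this article; the index name z3 and the client call in the comment are hypothetical, shown only as a sketch:

```python
# Explicit mapping body: declare 'age' as long before any document arrives,
# so dynamic detection never gets a chance to choose text.
body = {
    "mappings": {
        "doc": {                      # 6.x still uses a mapping type
            "properties": {
                "age": {"type": "long"}
            }
        }
    }
}

# With the official elasticsearch-py client this would be applied as:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch()
#   es.indices.create(index="z3", body=body)  # 'z3' is a hypothetical index

print(body["mappings"]["doc"]["properties"]["age"]["type"])  # long
```

With age mapped as long, the sum aggregation from the earlier example runs without error.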

3. Result window is too large

Quite often a query matches a large number of documents. How many of them Elasticsearch returns in a single request is controlled by the size parameter:

GET e2/doc/_search
{
  "size": 100000,
  "query": {
    "match_all": {}
  }
}

By default, at most 10,000 results are returned per request, so when we ask for more than that (say 100,000), we get:

Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.

This means the result window of a single request is too large. You can either switch to the scroll API, or raise the limit by adjusting the index.max_result_window setting:

# Set it in Kibana
PUT e2/_settings
{
  "index": {
    "max_result_window": "100000"
  }
}
# Set it from Python
from elasticsearch import Elasticsearch
es = Elasticsearch()
es.indices.put_settings(index='e2', body={"index": {"max_result_window": 100000}})

In this example we raise the maximum result window of index e2 to 100,000, so any single query requesting up to 100,000 results will be returned in one response.
Note that this setting is persistent: it is stored in the settings of index e2.
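The check being tripped is from + size against index.max_result_window. A sketch of that validation, mirroring the wording of the error message rather than Elasticsearch's actual code:

```python
def check_result_window(frm, size, max_result_window=10000):
    """Reject deep pagination the way Elasticsearch's query phase does."""
    if frm + size > max_result_window:
        raise ValueError(
            "Result window is too large, from + size must be less than or "
            "equal to: [%d] but was [%d]. See the scroll api for a more "
            "efficient way to request large data sets."
            % (max_result_window, frm + size)
        )

check_result_window(0, 100)         # fine: within the default window
try:
    check_result_window(0, 100000)  # reproduces the error from this section
except ValueError as e:
    print(e)
```

Raising max_result_window trades memory for convenience; for genuinely large exports, the scroll API remains the safer choice.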

