Elasticsearch Analyzers


analyzer  

Analyzers are used in two situations:
1. Index time analysis: when a document is created or updated, its text fields are analyzed.
2. Search time analysis: at query time, the query string is analyzed.

    The analyzer used at search time can be specified in either of the following ways:

  - Specify the analyzer directly in the query via the analyzer parameter:

GET test_index/_search
{
  "query": {
    "match": {
      "name": {
        "query": "lin",
        "analyzer": "standard"
      }
    }
  }
}

- Specify search_analyzer in the index mapping when the index is created:

PUT test_index
{
  "mappings": {
    "doc": {
      "properties": {
        "title":{
          "type": "text",
          "analyzer": "whitespace",
          "search_analyzer": "standard"
        }
      }
    }
  }
}

The index-time analyzer is specified per field through the analyzer parameter in the index mapping:

# If no analyzer is specified, the default standard analyzer is used
PUT test_index
{
  "mappings": {
    "doc": {
      "properties": {
        "title":{
          "type": "text",
          "analyzer": "whitespace"     #指定分詞器,es內置有多種analyzer
        }
      }
    }}}

Note:

  •  Decide explicitly whether each field needs to be analyzed. For fields that do not, set type to keyword: this saves space and improves write performance. A minimal sketch follows below.
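
A minimal sketch of this advice (the index name kw_test_index and its fields are hypothetical): status holds exact values and is stored as keyword, so it is never analyzed, while title remains an analyzed text field.

PUT kw_test_index
{
  "mappings": {
    "doc": {
      "properties": {
        "status": { "type": "keyword" },   # exact values, not analyzed
        "title":  { "type": "text" }       # full text, analyzed
      }
    }
  }
}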

_analyze API

GET _analyze
{
  "analyzer": "standard",
  "text": "this is a test"
}
# Shows how the text is tokenized by the standard analyzer:
{
  "tokens": [
    {
      "token": "this",
      "start_offset": 0,
      "end_offset": 4,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "is",
      "start_offset": 5,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "a",
      "start_offset": 8,
      "end_offset": 9,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "test",
      "start_offset": 10,
      "end_offset": 14,
      "type": "<ALPHANUM>",
      "position": 3
    }
  ]
}

Configuring an analyzer

PUT test
{
  "settings": {
    "analysis": {    #自定義分詞器
      "analyzer": {      # 關鍵字
        "my_analyzer":{   # 自定義的分詞器
          "type":"standard",    #分詞器類型standard
          "stopwords":"_english_"   #standard分詞器的參數,默認的stopwords是\_none_
        }
      }
    }
  },
  "mappings": {
    "doc":{
      "properties": {
        "my_text":{
          "type": "text",
          "analyzer": "standard",  # my_text字段使用standard分詞器
          "fields": {
            "english":{            # my_text.english字段使用上面自定義得my_analyzer分詞器
              "type": "text", 
              "analyzer": "my_analyzer"
            }}}}}}}
POST test/_analyze
{
  "field": "my_text",    # my_text字段使用的是standard分詞器
  "text": ["The test message."]
}
--------------> [the, test, message]

POST test/_analyze
{
  "field": "my_text.english",     #my_text.english使用的是my_analyzer分詞器
  "text": ["The test message."]
}
------------> [test, message]

ES has many built-in analyzers, for example:

  • standard, composed of:
    • tokenizer: Standard Tokenizer
    • token filters: Standard Token Filter, Lower Case Token Filter, Stop Token Filter
    • test via the _analyze API:
      POST _analyze
      {
        "analyzer": "standard",
        "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
      }

      Result:

      {
        "tokens": [
          {
            "token": "the",
            "start_offset": 0,
            "end_offset": 3,
            "type": "<ALPHANUM>",
            "position": 0
          },
          {
            "token": "2",
            "start_offset": 4,
            "end_offset": 5,
            "type": "<NUM>",
            "position": 1
          },
          {
            "token": "quick",
            "start_offset": 6,
            "end_offset": 11,
            "type": "<ALPHANUM>",
            "position": 2
          },
          {
            "token": "brown",
            "start_offset": 12,
            "end_offset": 17,
            "type": "<ALPHANUM>",
            "position": 3
          },
          {
            "token": "foxes",
            "start_offset": 18,
            "end_offset": 23,
            "type": "<ALPHANUM>",
            "position": 4
          },
          {
            "token": "jumped",
            "start_offset": 24,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 5
          },
          {
            "token": "over",
            "start_offset": 31,
            "end_offset": 35,
            "type": "<ALPHANUM>",
            "position": 6
          },
          {
            "token": "the",
            "start_offset": 36,
            "end_offset": 39,
            "type": "<ALPHANUM>",
            "position": 7
          },
          {
            "token": "lazy",
            "start_offset": 40,
            "end_offset": 44,
            "type": "<ALPHANUM>",
            "position": 8
          },
          {
            "token": "dog's",
            "start_offset": 45,
            "end_offset": 50,
            "type": "<ALPHANUM>",
            "position": 9
          },
          {
            "token": "bone",
            "start_offset": 51,
            "end_offset": 55,
            "type": "<ALPHANUM>",
            "position": 10
          }
        ]
      }
  • whitespace: splits on whitespace
POST _analyze
{
  "analyzer": "whitespace",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
--> [ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ]

  • simple: splits on non-letter characters and lowercases the terms

POST _analyze
{
  "analyzer": "simple",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
---> [ the, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ]

  • stop: like simple, but also removes stop words; the default stopwords value is _english_

POST _analyze
{
  "analyzer": "stop",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
--> [ quick, brown, foxes, jumped, over, lazy, dog, s, bone ]
Optional parameters:
# stopwords
# stopwords_path
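
A minimal sketch of overriding these (the index name stop_test and the stopword list are made up for illustration):

PUT stop_test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_stop_analyzer":{
          "type":"stop",
          "stopwords":["over","the"]   # custom list instead of the default _english_
        }
      }
    }
  }
}
POST stop_test/_analyze
{
  "analyzer": "my_stop_analyzer",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."]
}
--> [ quick, brown, foxes, jumped, lazy, dog, s, bone ]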

  • keyword: does not tokenize; it outputs the entire input as a single token

POST _analyze
{
  "analyzer": "keyword",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."]
}
得到  "token": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 一條完整的語句

==================================================================================

Third-party analyzer plugins: Chinese analysis (the ik analyzer)

ES ships with many analyzers, but they handle Chinese poorly; the standard analyzer, for example, splits a Chinese sentence into individual characters. Third-party analyzer plugins such as ik and pinyin solve this. ik is used as the example here.

1. Install the plugin, then restart ES:

# bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.3.0/elasticsearch-analysis-ik-6.3.0.zip
# /etc/init.d/elasticsearch restart

2. Usage example:

GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "你好嗎?我有一句話要對你說呀。"
}
{
  "tokens": [
    {
      "token": "你好",
      "start_offset": 0,
      "end_offset": 2,
      "type": "CN_WORD",
      "position": 0
    },
    {
      "token": "好嗎",
      "start_offset": 1,
      "end_offset": 3,
      "type": "CN_WORD",
      "position": 1
    },
    {
      "token": "我",
      "start_offset": 4,
      "end_offset": 5,
      "type": "CN_CHAR",
      "position": 2
    },
    {
      "token": "有",
      "start_offset": 5,
      "end_offset": 6,
      "type": "CN_CHAR",
      "position": 3
    },
    {
      "token": "一句話",
      "start_offset": 6,
      "end_offset": 9,
      "type": "CN_WORD",
      "position": 4
    },
    {
      "token": "一句",
      "start_offset": 6,
      "end_offset": 8,
      "type": "CN_WORD",
      "position": 5
    },
    {
      "token": "一",
      "start_offset": 6,
      "end_offset": 7,
      "type": "TYPE_CNUM",
      "position": 6
    },
    {
      "token": "句話",
      "start_offset": 7,
      "end_offset": 9,
      "type": "CN_WORD",
      "position": 7
    },
    {
      "token": "句",
      "start_offset": 7,
      "end_offset": 8,
      "type": "COUNT",
      "position": 8
    },
    {
      "token": "話",
      "start_offset": 8,
      "end_offset": 9,
      "type": "CN_CHAR",
      "position": 9
    },
    {
      "token": "要對",
      "start_offset": 9,
      "end_offset": 11,
      "type": "CN_WORD",
      "position": 10
    },
    {
      "token": "你",
      "start_offset": 11,
      "end_offset": 12,
      "type": "CN_CHAR",
      "position": 11
    },
    {
      "token": "說呀",
      "start_offset": 12,
      "end_offset": 14,
      "type": "CN_WORD",
      "position": 12
    }
  ]
}


Reference: https://github.com/medcl/elasticsearch-analysis-ik
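
In practice ik is usually wired into a field mapping rather than called through _analyze. A minimal sketch in the spirit of the ik README (index and field names are hypothetical): ik_max_word produces the most exhaustive segmentation for indexing, while ik_smart produces a coarser one for search.

PUT ik_test
{
  "mappings": {
    "doc": {
      "properties": {
        "content":{
          "type": "text",
          "analyzer": "ik_max_word",       # exhaustive segmentation at index time
          "search_analyzer": "ik_smart"    # coarser segmentation at query time
        }
      }
    }
  }
}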

You can also assemble an analyzer (a custom analyzer) out of the built-in character filters, tokenizers, and token filters:

  • custom: a user-defined analyzer, composed of:
    • zero or more character filters
    • exactly one tokenizer
    • zero or more token filters

   

PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer":{
          "type":"custom",
          "tokenizer":"standard",
          "char_filter":["html_strip"],
          "filter":["lowercase"]
        }
      }
    }
  }
}
POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's <b> bone.</b>"]
}
Result: [the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone]

Custom analyzers

A custom analyzer is defined in the index settings, as shown below:

PUT test_index
{
  "settings": {
    "analysis": {    # 分詞設置,可以自定義
      "char_filter": {},   #char_filter  關鍵字
      "tokenizer": {},    #tokenizer 關鍵字
      "filter": {},     #filter  關鍵字
      "analyzer": {}    #analyzer 關鍵字
    }
  }
}

character filter: processes the raw text before the tokenizer, e.g. adding, removing, or replacing characters.

This affects the position and offset information that the subsequent tokenizer produces.

html_strip: removes HTML tags and decodes HTML entities

(1) Parameter: escaped_tags, tags that should not be stripped

POST _analyze
{
  "tokenizer": "keyword",
  "char_filter": ["html_strip"],
  "text": ["<p>I&apos;m so <b>happy</b>!</p>"]
}
Result:
      "token": """

I'm so happy!

"""
# configuration example
PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {  #關鍵字
        "my_analyzer":{   #自定義analyzer
          "tokenizer":"keyword",
          "char_filter":["my_char_filter"]
        }
      },
      "char_filter": {  #關鍵字
        "my_char_filter":{   #自定義char_filter
          "type":"html_strip",
          "escaped_tags":["b"]  #不從文本中刪除的HTML標記數組
        }
      }}}}
POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["<p>I&apos;m so <b>happy</b>!</p>"]
}
Result:
      "token": """

I'm so <b>happy</b>!

""",

mapping: maps characters or strings to replacements; exactly one of the following two parameters is required

(1) mappings: a list of mappings, each in the form key => value

(2) mappings_path: an absolute path, or a path relative to the config directory, to a file of key => value mappings

PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {     #關鍵字
        "my_analyzer":{   #自定義分詞器
          "tokenizer":"standard",
          "char_filter":"my_char_filter"  
        }
      },
      "char_filter": {    #關鍵字
        "my_char_filter":{  #自定義char_filter
          "type":"mapping", 
          "mappings":[       #指明映射關系
            ":)=>happy",
            ":(=>sad"
          ]
        }}}}}
POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["i am so :)"]
}
Result: [i, am, so, happy]

pattern_replace

(1) pattern: the regular expression to match

(2) replacement: the replacement string; may reference capture groups $1..$9

(3) flags: regex flags
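
No example accompanies these parameters above, so here is a minimal sketch (the index name pr_index and the digit-grouping pattern are made up for illustration). The filter turns hyphens between digits into underscores before tokenization, so the standard tokenizer keeps the number as one token:

PUT pr_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer":{
          "tokenizer":"standard",
          "char_filter":["my_char_filter"]
        }
      },
      "char_filter": {
        "my_char_filter":{
          "type":"pattern_replace",
          "pattern":"(\\d+)-(?=\\d)",   # a digit group followed by a hyphen and another digit
          "replacement":"$1_"           # keep the digits ($1), replace the hyphen with _
        }
      }
    }
  }
}
POST pr_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["My credit card is 123-456-789"]
}
Result: [My, credit, card, is, 123_456_789]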

tokenizer: splits the raw text into terms according to a set of rules

standard ------- parameter: max_token_length, the maximum token length, default 255

PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer":{
          "tokenizer":"my_tokenizer"
        }
      },
      "tokenizer": { 
        "my_tokenizer":{
          "type":"standard",
          "max_token_length":5      
        }}}}}
POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."]
}
Result: [ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]
# jumped has length 6, so it is split after 5 characters

letter: splits into terms on any non-letter character

POST _analyze
{
  "tokenizer": "letter",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."]
}
Result: [ The, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, s, bone ]

lowercase: like the letter tokenizer, but also lowercases each term

POST _analyze
{
  "tokenizer": "lowercase",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
Result: [ the, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ]

whitespace: splits into terms on whitespace characters ---- parameter: max_token_length

POST _analyze
{
  "tokenizer": "whitespace",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
Result: [ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ]

keyword: a no-op that outputs the input text unchanged as a single term ----- parameter: buffer_size, the number of characters read into the term buffer per pass, default 256

POST _analyze
{
  "tokenizer": "keyword",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."]
}
得到"token": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone." 一個完整的文本

token filter: adds, removes, or modifies the terms output by the tokenizer ---- lowercase: converts each term to lowercase

POST _analyze
{
  "filter": ["lowercase"],
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's  bone"]
}
--->
"token": "the 2 quick brown-foxes jumped over the lazy dog's  bone"

PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer":{
          "type":"custom", 
          "tokenizer":"standard", 
          "filter":"lowercase"
        }
      }
    }
  }
}
POST t_index/_analyze
{
  "analyzer": "my_analyzer",
    "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's  bone"]
}

stop: removes stop words from the token stream.

Parameters:
# stopwords: the stop words to use, default _english_
# stopwords_path
# ignore_case: if true, stop word matching is case-insensitive, default false
# remove_trailing
PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer":{
          "type":"custom",
          "tokenizer":"standard",
          "filter":"my_filter"
        }
      },
      "filter": {
        "my_filter":{
          "type":"stop",
          "stopwords":["and","or","not"]
        }
      }
    }
  }
}
POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["lucky and happy not sad"]
}
--------------> [lucky, happy, sad]
