1. Open Kibana and test the tokenizer through the _analyze API:
GET /scddb/_analyze
{
  "text": "藍瘦香菇",
  "analyzer": "ik_max_word"   // also try "ik_smart" for comparison
}
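If Kibana is not available, the same check can be run from the shell with curl (a minimal sketch, assuming Elasticsearch listens on the default localhost:9200; note the // comment above is Kibana console syntax and must be dropped in raw JSON):

curl -s -H 'Content-Type: application/json' \
  -X GET 'http://localhost:9200/scddb/_analyze?pretty' \
  -d '{"text": "藍瘦香菇", "analyzer": "ik_max_word"}'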
The tokenization result is shown below, and it is not ideal: 藍瘦香菇 is split into 藍, 瘦, and 香菇 instead of being kept as a single term:
{
  "tokens" : [
    {
      "token" : "藍",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "瘦",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "香菇",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 2
    }
  ]
}
2. Add a custom dictionary:
See this article for how to add a custom IK dictionary: https://blog.csdn.net/makang456/article/details/79211255
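In outline, the steps look like the following (a minimal sketch; the paths assume the standard elasticsearch-analysis-ik plugin layout under {ES_HOME}/plugins/ik/config, and my.dic is a hypothetical file name; on package installs the config directory may be /etc/elasticsearch/analysis-ik instead).

First, create a dictionary file with one word per line, saved as UTF-8:

# {ES_HOME}/plugins/ik/config/custom/my.dic  (hypothetical path)
藍瘦香菇

Then register it in IKAnalyzer.cfg.xml in the same config directory; the ext_dict path is relative to that file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- point ext_dict at the custom dictionary file -->
    <entry key="ext_dict">custom/my.dic</entry>
</properties>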
Then restart Elasticsearch: service elasticsearch restart
3. Test again with the same _analyze request:
{
  "tokens" : [
    {
      "token" : "藍瘦香菇",
      "start_offset" : 0,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 0
    }
  ]
}
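藍瘦香菇 is now kept as a single CN_WORD token. Note that the new dictionary only affects text analyzed after the restart; documents indexed before the change keep their old tokens, so reindex them if searches on the new term need to match existing data.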