Testing tokenization (word segmentation) in Elasticsearch


1. Open Kibana and run an _analyze request:

GET /scddb/_analyze
{
  "text": "蓝瘦香菇",
  "analyzer": "ik_max_word"   // or "ik_smart"
}
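If Kibana is not available, the same _analyze request can be sent with curl. This is a minimal sketch that assumes Elasticsearch is listening on localhost:9200:

curl -X GET "localhost:9200/scddb/_analyze?pretty" -H 'Content-Type: application/json' -d'
{
  "text": "蓝瘦香菇",
  "analyzer": "ik_max_word"
}'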

The segmentation result is shown below; it is not ideal, since 蓝瘦香菇 is split apart:

{
  "tokens" : [
    {
      "token" : "蓝",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "瘦",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "香菇",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 2
    }
  ]
}

Add a custom dictionary:

See this post for how to add a custom IK dictionary: https://blog.csdn.net/makang456/article/details/79211255
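The linked post covers the details. As a rough sketch, with the standard elasticsearch-analysis-ik plugin the custom dictionary is wired up in IKAnalyzer.cfg.xml (under the plugin's config directory; the exact path varies by version), which points at a .dic file containing one term per line. The path custom/mydict.dic below is just an example:

IKAnalyzer.cfg.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- custom extension dictionary, path relative to this config file (example) -->
    <entry key="ext_dict">custom/mydict.dic</entry>
    <!-- custom extension stopword dictionary (left empty here) -->
    <entry key="ext_stopwords"></entry>
</properties>

custom/mydict.dic (UTF-8, one term per line):

蓝瘦香菇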

Restart Elasticsearch: service elasticsearch restart

Run the same _analyze request again:

{
  "tokens" : [
    {
      "token" : "蓝瘦香菇",
      "start_offset" : 0,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 0
    }
  ]
}
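To confirm the new term also takes effect at search time, you can run a match query against a field that uses the IK analyzer. The field name content below is hypothetical, only for illustration:

GET /scddb/_search
{
  "query": {
    "match": {
      "content": "蓝瘦香菇"
    }
  }
}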
