Word Mover's Distance (WMD) is a measure of document similarity built on top of word embeddings.
For a detailed introduction to WMD, see
http://blog.csdn.net/qrlhl/article/details/78512598
or other material online.
The official gensim WMD example notebook is at https://github.com/RaRe-Technologies/gensim/blob/c971411c09773488dbdd899754537c0d1a9fce50/docs/notebooks/WMD_tutorial.ipynb
Here, WMD is used to measure the relatedness of verses from Tang poetry. Why Tang poetry? Because the Complete Tang Poems is easy to obtain as a plain-text file; a quick search turns up a download. Complete Tang Poems txt: https://files.cnblogs.com/files/combfish/%E5%85%A8%E5%94%90%E8%AF%97.zip.
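Before the full pipeline, the intuition behind WMD can be sketched with a toy example. The snippet below computes the *relaxed* WMD lower bound (each word simply travels to its nearest counterpart in the other document), not gensim's exact optimal-transport solver, and the 2-D "embeddings" are made-up values for illustration only.

```python
from math import dist

# Toy 2-D "embeddings" -- invented for illustration, not trained vectors.
vec = {
    'moon':   (0.9, 0.1),
    'light':  (0.8, 0.2),
    'bright': (0.7, 0.3),
    'frost':  (0.1, 0.9),
}

def relaxed_wmd(doc1, doc2):
    """Relaxed WMD lower bound: each word in doc1 sends all of its
    (uniform) weight to the nearest word in doc2."""
    return sum(min(dist(vec[w], vec[v]) for v in doc2) for w in doc1) / len(doc1)

d_close = relaxed_wmd(['moon', 'light'], ['bright', 'moon'])
d_far   = relaxed_wmd(['moon', 'light'], ['frost'])
print(d_close < d_far)  # semantically closer documents get a smaller distance
```

The true WMD solves a full transport problem over all word pairs; this relaxation is what makes WMD retrieval tractable to prune in practice.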
Steps:
1. Preprocess the corpus: split each poem into verses on punctuation, then segment each verse into words with jieba.
2. Train a word-vector model with gensim and build a WMD similarity model.
3. Query.
Code:
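The punctuation-based verse splitting in step 1 can be sketched on its own before bringing in jieba. The sample line below (from a well-known Tang poem) is illustrative; the full code instead filters punctuation tokens after segmentation.

```python
import re

# Split a poem line into verses on full-width Chinese punctuation,
# dropping any empty pieces left by trailing marks.
line = '床前明月光,疑是地上霜。舉頭望明月,低頭思故鄉。'
verses = [v for v in re.split('[,。!?]', line) if v]
print(verses)  # ['床前明月光', '疑是地上霜', '舉頭望明月', '低頭思故鄉']
```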
import jieba
from time import time
start_nb = time()

print(20*'*', 'loading data', 40*'*')
f = open('全唐詩.txt', encoding='utf-8')
lines = f.readlines()
f.close()

corpus = []     # tokenized verses, for training
documents = []  # original verses, for display
useless = [',', '.', '(', ')', '!', '?', '\'', '\"', ':', '<', '>',
           ',', '。', '(', ')', '!', '?', '’', '“', ':', '《', '》',
           '[', ']', '【', '】']

for each in lines:
    each = each.replace('\n', '')
    each = each.replace('-', '')  # fix: the original discarded this result
    each = each.strip()
    each = each.replace(' ', '')
    if len(each) > 3 and each[0] != '卷':  # skip volume headers ("卷…")
        documents.append(each)
        tokens = list(jieba.cut(each))
        text = [w for w in tokens if w not in useless]
        corpus.append(text)
print(len(corpus))

print(20*'*', 'training models', 40*'*')
from gensim.models import Word2Vec
model = Word2Vec(corpus, workers=3, size=100)  # size= is vector_size= in gensim >= 4

# Initialize WmdSimilarity (WMD computation requires the pyemd package).
from gensim.similarities import WmdSimilarity
num_best = 10
instance = WmdSimilarity(corpus, model, num_best=num_best)

print(20*'*', 'testing', 40*'*')
while True:
    sent = input('Enter a query sentence: ')
    sent_w = list(jieba.cut(sent))
    query = [w for w in sent_w if w not in useless]
    sims = instance[query]  # A query is simply a "look-up" in the similarity class.

    # Print the query and the retrieved documents, together with their similarities.
    print('Query:')
    print(sent)
    for i in range(num_best):
        print()
        print('sim = %.4f' % sims[i][1])
        print(documents[sims[i][0]])
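A note on reading the scores: in the gensim WMD tutorial linked above, WmdSimilarity reports a similarity of 1 / (1 + distance) rather than the raw WMD, so scores fall in (0, 1] and higher means more similar. The small helpers below show the conversion in both directions.

```python
# WmdSimilarity maps a raw WMD distance d to a similarity 1 / (1 + d):
# identical documents (d = 0) score 1.0, and scores shrink toward 0 as d grows.
def wmd_to_similarity(d):
    return 1.0 / (1.0 + d)

def similarity_to_wmd(s):
    return 1.0 / s - 1.0

print(wmd_to_similarity(0.0))  # identical documents -> 1.0
print(similarity_to_wmd(0.5))  # a score of 0.5 corresponds to a WMD of 1.0
```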
Results:
