Stemmers
In English, one word is often a variant of another, for example happy => happiness; here happy is the stem of happiness. In information retrieval systems, a common step in term normalization is stemming: stripping the inflectional endings off English words so that different forms of a word map to the same term.
This post introduces how to use the stemmers provided by NLTK.
Porter Stemmer
The most widely used stemming algorithm, of moderate complexity and based on suffix stripping, is the Porter stemming algorithm, also known as the Porter Stemmer.
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
plurals = ['caresses', 'flies', 'dies', 'mules', 'denied', 'died',
           'agreed', 'owned', 'humbled', 'sized', 'meeting', 'stating',
           'siezing', 'itemization', 'sensational', 'traditional',
           'reference', 'colonizer', 'plotted']
singles = [stemmer.stem(plural) for plural in plurals]
print(' '.join(singles))
'''
output: caress fli die mule deni die agre own humbl size meet
state siez item sensat tradit refer colon plot
'''
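To connect this back to term normalization: once all morphological variants collapse to the same stem, a query for any one of them can match documents containing the others. A minimal sketch using a word family often used to illustrate stemming (the script itself is just for illustration):

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

# Morphological variants that should all map to the same index term.
variants = ['connect', 'connected', 'connecting', 'connection', 'connections']
print([stemmer.stem(w) for w in variants])
# All five variants reduce to the stem 'connect', so indexing and querying
# by stems lets e.g. "connection" in a query match "connected" in a document.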
Snowball Stemmer
The Snowball stemmer supports multiple languages:
>>> from nltk.stem.snowball import SnowballStemmer
>>> print(" ".join(SnowballStemmer.languages))
danish dutch english finnish french german hungarian italian
norwegian porter portuguese romanian russian spanish swedish
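The non-English stemmers are used the same way; a quick sketch with German (the example word is chosen just for illustration):
>>> german_stemmer = SnowballStemmer("german")
>>> print(german_stemmer.stem("katzen"))   # "Katzen" = cats
katz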
Taking English as an example:
>>> stemmer = SnowballStemmer("english")
>>> print(stemmer.stem("running"))
run
It can also be configured to ignore stopwords, i.e. leave them unstemmed:
>>> stemmer2 = SnowballStemmer("english", ignore_stopwords=True)
>>> print(stemmer.stem("having"))
have
>>> print(stemmer2.stem("having"))
having
In general, SnowballStemmer("english") produces better results than PorterStemmer():
>>> print(SnowballStemmer("english").stem("generously"))
generous
>>> print(SnowballStemmer("porter").stem("generously"))
gener
Lancaster Stemmer
The Lancaster stemmer is yet another stemmer, and the most aggressive of the three. Straight to the code:
>>> from nltk.stem.lancaster import LancasterStemmer
>>> lancaster_stemmer = LancasterStemmer()
>>> lancaster_stemmer.stem('maximum')
'maxim'
>>> lancaster_stemmer.stem('presumably')
'presum'
>>> lancaster_stemmer.stem('multiply')
'multiply'
>>> lancaster_stemmer.stem('provision')
'provid'
>>> lancaster_stemmer.stem('owed')
'ow'
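Finally, a small side-by-side sketch (the word list is arbitrary) that runs the same words through all three stemmers; Lancaster typically produces the shortest, most aggressive stems:

from nltk.stem.porter import PorterStemmer
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.lancaster import LancasterStemmer

words = ['generously', 'presumably', 'maximum', 'connection', 'running']
stemmers = [('porter', PorterStemmer()),
            ('snowball', SnowballStemmer('english')),
            ('lancaster', LancasterStemmer())]

# Print each word followed by the stem produced by each algorithm.
for word in words:
    results = ['{}={}'.format(name, s.stem(word)) for name, s in stemmers]
    print(word, '->', ' '.join(results))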