LDA (Latent Dirichlet Allocation) is a widely used and versatile probabilistic topic model. It is usually implemented with either variational inference or Gibbs sampling.
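As a quick reminder of what the sampler is estimating, LDA's standard generative process (the α and η below are the same hyperparameters exposed as `alpha` and `eta` in the code that follows) is:

$$
\theta_d \sim \mathrm{Dirichlet}(\alpha),\qquad
\phi_k \sim \mathrm{Dirichlet}(\eta),\qquad
z_{d,n} \sim \mathrm{Multinomial}(\theta_d),\qquad
w_{d,n} \sim \mathrm{Multinomial}(\phi_{z_{d,n}})
$$

where $\theta_d$ is the topic distribution of document $d$, $\phi_k$ the word distribution of topic $k$, and $z_{d,n}$ the topic assignment of the $n$-th word in document $d$.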
Below is an LDA class adapted from a third-party Python module, together with its implementation.
```python
# coding:utf-8
import numpy as np
import lda
import jieba
import codecs


class LDA_v20161130():
    def __init__(self, topics=2):
        self.n_topic = topics
        self.corpus = None
        self.vocab = None
        self.ppCountMatrix = None
        self.stop_words = [u',', u'。', u'、', u'(', u')', u'·', u'!', u' ', u':', u'“', u'”', u'\n']
        self.model = None

    def loadCorpusFromFile(self, fn):
        # tokenize with jieba (Chinese word segmentation)
        f = codecs.open(fn, 'r', 'utf-8')
        text = f.readlines()
        text = ' '.join(text)

        seg_generator = jieba.cut(text)
        seg_list = [i for i in seg_generator if i not in self.stop_words]
        seg_list = ' '.join(seg_list)
        # split and collect every distinct token into the vocabulary
        seglist = seg_list.split(" ")
        self.vocab = []
        for word in seglist:
            if word != u' ' and word not in self.vocab:
                self.vocab.append(word)

        CountMatrix = []
        f.seek(0, 0)
        # count term frequencies for each document (one document per line)
        for line in f:
            # reset the count vector
            count = np.zeros(len(self.vocab), dtype=int)
            text = line.strip()
            # each line still has to be tokenized first
            seg_generator = jieba.cut(text)
            seg_list = [i for i in seg_generator if i not in self.stop_words]
            seg_list = ' '.join(seg_list)
            seglist = seg_list.split(" ")
            # accumulate counts for tokens found in the vocabulary
            for word in seglist:
                if word in self.vocab:
                    count[self.vocab.index(word)] += 1
            CountMatrix.append(count)
        f.close()
        self.ppCountMatrix = np.array(CountMatrix)

        print("load corpus from %s success!" % fn)

    def setStopWords(self, word_list):
        self.stop_words = word_list

    def fitModel(self, n_iter=1500, _alpha=0.1, _eta=0.01):
        self.model = lda.LDA(n_topics=self.n_topic, n_iter=n_iter, alpha=_alpha, eta=_eta, random_state=1)
        self.model.fit(self.ppCountMatrix)

    def printTopic_Word(self, n_top_word=8):
        for i, topic_dist in enumerate(self.model.topic_word_):
            topic_words = np.array(self.vocab)[np.argsort(topic_dist)][:-(n_top_word + 1):-1]
            print("Topic: %d\t%s" % (i, ' '.join(topic_words)))

    def printDoc_Topic(self):
        for i in range(len(self.ppCountMatrix)):
            print("Doc %d:((top topic:%s) topic distribution:%s)" % (i, self.model.doc_topic_[i].argmax(), self.model.doc_topic_[i]))

    def printVocabulary(self):
        print("vocabulary:")
        print(' '.join(self.vocab))

    def saveVocabulary(self, fn):
        f = codecs.open(fn, 'w', 'utf-8')
        for word in self.vocab:
            f.write("%s\n" % word)
        f.close()

    def saveTopic_Words(self, fn, n_top_word=-1):
        if n_top_word == -1:
            n_top_word = len(self.vocab)
        f = codecs.open(fn, 'w', 'utf-8')
        for i, topic_dist in enumerate(self.model.topic_word_):
            topic_words = np.array(self.vocab)[np.argsort(topic_dist)][:-(n_top_word + 1):-1]
            f.write("Topic:%d\t" % i)
            for word in topic_words:
                f.write("%s " % word)
            f.write("\n")
        f.close()

    def saveDoc_Topic(self, fn):
        f = codecs.open(fn, 'w', 'utf-8')
        for i in range(len(self.ppCountMatrix)):
            f.write("Doc %d:((top topic:%s) topic distribution:%s)\n" % (i, self.model.doc_topic_[i].argmax(), self.model.doc_topic_[i]))
        f.close()
```
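One caveat about `loadCorpusFromFile`: `self.vocab` is a list, so `self.vocab.index(word)` is a linear scan and building the count matrix costs roughly O(documents × tokens × |vocab|). A minimal sketch of the same counting step with a dict lookup instead (`build_count_matrix` is a hypothetical helper, not part of the original class):

```python
import numpy as np
import jieba

def build_count_matrix(lines, vocab, stop_words):
    # build the word -> column mapping once, instead of calling list.index per token
    word2id = {w: i for i, w in enumerate(vocab)}
    matrix = np.zeros((len(lines), len(vocab)), dtype=int)
    for row, line in enumerate(lines):
        for token in jieba.cut(line.strip()):
            if token in stop_words:
                continue
            col = word2id.get(token)
            if col is not None:
                matrix[row, col] += 1
    return matrix
```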
And the driver script:

```python
if __name__ == "__main__":
    _lda = LDA_v20161130(topics=20)
    stop = [u'!', u'@', u'#', u',', u'.', u'/', u';', u' ', u'[', u']', u'$', u'%', u'^', u'&', u'*', u'(', u')',
            u'"', u':', u'<', u'>', u'?', u'{', u'}', u'=', u'+', u'_', u'-', u"'"]
    _lda.setStopWords(stop)
    _lda.loadCorpusFromFile(u'C:\\Users\\Administrator\\Desktop\\BBC.txt')
    _lda.fitModel(n_iter=1500)
    _lda.printTopic_Word(n_top_word=10)
    _lda.printDoc_Topic()
    _lda.saveVocabulary(u'C:\\Users\\Administrator\\Desktop\\vocab.txt')
    _lda.saveTopic_Words(u'C:\\Users\\Administrator\\Desktop\\topic_word.txt')
    _lda.saveDoc_Topic(u'C:\\Users\\Administrator\\Desktop\\doc_topic.txt')
```
Because this corpus is entirely in English, the stop_words here are all English punctuation marks; the model is set to 20 topics with 1500 iterations. The run reports 148 documents, a vocabulary of 1347 words, and 4174 tokens in total.
The Topic_Word output (excerpt):
```
Topic: 0 to will and of he be trumps the what policy
Topic: 1 he would in said not no with mr this but
Topic: 2 for or can some whether have change health obamacare insurance
Topic: 3 the to that president as of us also first all
Topic: 4 trump to when with now were republican mr office presidential
Topic: 5 the his trump from uk who president to american house
Topic: 6 a to that was it by issue vote while marriage
Topic: 7 the to of an are they which by could from
Topic: 8 of the states one votes planned won two new clinton
Topic: 9 in us a use for obama law entry new interview
Topic: 10 and on immigration has that there website vetting action given
```
The Doc_Topic output (excerpt):
```
Doc 0:((top topic:4) topic distribution:[ 0.02972973 0.0027027 0.0027027 0.16486486 0.32702703 0.19189189
 0.0027027 0.0027027 0.02972973 0.0027027 0.02972973 0.0027027
 0.0027027 0.0027027 0.02972973 0.0027027 0.02972973 0.0027027
 0.13783784 0.0027027 ])
Doc 1:((top topic:18) topic distribution:[ 0.21 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.01 0.01 0.01
 0.01 0.01 0.01 0.01 0.01 0.01 0.31 0.21])
Doc 2:((top topic:18) topic distribution:[ 0.02075472 0.00188679 0.03962264 0.00188679 0.00188679 0.00188679
 0.00188679 0.15283019 0.00188679 0.02075472 0.00188679 0.24716981
 0.00188679 0.07735849 0.00188679 0.00188679 0.00188679 0.00188679
 0.41698113 0.00188679])
```
Of course, for an English corpus most function words and other common but uninformative words (it, this, there, that, ...) should also be excluded; in practice, the stop list and the model parameters have to be chosen sensibly. One option is sketched below.
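If scikit-learn is available (an addition of mine, not something the original post uses), its built-in English stop-word list can be passed straight to `setStopWords` together with the punctuation above:

```python
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

# merge sklearn's ~300 built-in English stop words with the punctuation list
stop = list(ENGLISH_STOP_WORDS) + [u'!', u'@', u'#', u',', u'.', u'/', u';', u' ']
_lda.setStopWords(stop)
```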
Switching to a Chinese corpus: Xi Jinping's message of condolence on Fidel Castro's death and a news story on Park Geun-hye's resignation.
```
Topic: 0 的 同志 和 人民 卡斯特罗 菲德尔 古巴 他 了 我
Topic: 1 在 朴槿惠 向 表示 总统 对 将 的 月 国民
Doc 0:((top topic:0) topic distribution:[ 0.91714123 0.08285877])
Doc 1:((top topic:1) topic distribution:[ 0.09200666 0.90799334])
```
Some function words still interfere, such as 的, 和, 了, and 对, but on the whole the topic distributions of the two news items are clearly separated, so the result is not bad at all. A possible fix is sketched after this paragraph.
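One way to suppress those function words, sketched under the assumption that jieba's part-of-speech tagger (`jieba.posseg`) is acceptable here and not part of the original post, is to filter tokens by POS flag instead of (or in addition to) a hand-written stop list:

```python
import jieba.posseg as pseg

# coarse POS flags to drop: u = particles (的/了), p = prepositions (对), c = conjunctions (和), x = punctuation
DROP_FLAGS = ('u', 'p', 'c', 'x')

def content_tokens(text):
    # keep only tokens whose POS flag does not start with a function-word category
    return [pair.word for pair in pseg.cut(text) if not pair.flag.startswith(DROP_FLAGS)]
```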
Source: Python—LDA实现