Download a long Chinese article.
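As a minimal sketch, the article can be fetched with the requests library and saved locally; the URL below is a placeholder, not one from the original post:

import requests

# Hypothetical URL: substitute the article you actually want to analyze.
url = 'https://example.com/article.txt'
resp = requests.get(url)
resp.encoding = 'utf-8'  # ensure the Chinese text decodes correctly
with open('gzccnews.txt', 'w', encoding='utf-8') as f:
    f.write(resp.text)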
Read the text to be analyzed from the file:

with open('gzccnews.txt', 'r', encoding='utf-8') as f:
    news = f.read()
Install jieba and use it for Chinese word segmentation.

pip install jieba

import jieba
words = jieba.lcut(news)  # lcut already returns a list, no extra list() needed
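For example, jieba's default (accurate) mode segments the sentence 我来到北京清华大学 into ['我', '来到', '北京', '清华大学'].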
Generate the word-frequency statistics.
Sort the words by frequency.
Exclude grammatical (function) words: pronouns, articles, conjunctions.
Output the top 20 most frequent words (a sketch covering these steps follows below).
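A minimal sketch of these four steps using collections.Counter; the stop_words set here is a small, hand-picked illustration, not a complete function-word list:

from collections import Counter
import jieba

with open('gzccnews.txt', 'r', encoding='utf-8') as f:
    text = f.read()

# Hypothetical stop-word set; extend it with the pronouns,
# particles, and conjunctions you want to exclude.
stop_words = {'的', '了', '是', '我', '他', '你', '和', '在'}

words = jieba.lcut(text)
counts = Counter(w for w in words if len(w) > 1 and w not in stop_words)

# most_common(20) returns the 20 highest-frequency (word, count) pairs.
for word, freq in counts.most_common(20):
    print(word, freq)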
Code:
import jieba

# Read the novel to be analyzed.
with open('novel.txt', 'r', encoding='utf-8') as file:
    novel = file.read()

# Strip common Chinese punctuation marks.
punctuation = '。,;!?、'
for p in punctuation:
    novel = novel.replace(p, '')

# Segment the text into words.
word_list = jieba.lcut(novel)

counts = {}
for word in word_list:
    # Skip whitespace tokens and single-character tokens; single characters
    # are mostly grammatical words (pronouns, particles, conjunctions).
    if len(word) != 1 and not word.isspace():
        counts[word] = counts.get(word, 0) + 1

# Sort by frequency, descending, and print the top 20 words.
ranked = sorted(counts.items(), key=lambda x: x[1], reverse=True)
for word, freq in ranked[:20]:
    print(word, freq)
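One design detail: the counts are accumulated token by token rather than by calling novel.count(word) for each word, because substring counting would also match a word embedded inside a longer word (for example 中国 inside 中国人) and inflate its frequency.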
Screenshot of the output: