Counting Word Frequencies in a File with Python and Generating a Word Cloud


Overall Approach

  • Load the article or passage you want to turn into a word cloud
  • Segment the imported text with jieba
  • Count the frequency of the segmented words
  • Generate and plot the word cloud

Demo

from wordcloud import WordCloud
import matplotlib.pyplot as plt
import jieba

# Note: 'word.txt' does not exist at this path yet; point it at your own file
path_txt = "/home/alan/Desktop/word.txt"

# Read the file inside a context manager so it is closed properly
with open(path_txt, 'r', encoding='UTF-8') as f:
    text = f.read()

# jieba segments Chinese text; join the tokens with spaces so WordCloud can split them
cut_text = " ".join(jieba.cut(text))

wordcloud = WordCloud(
    font_path="/home/alan/.local/share/fonts/STKAITI.TTF",  # a font that supports Chinese
    background_color="white",
    width=1000,
    height=800
).generate(cut_text)

plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
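In the demo above, WordCloud counts word frequencies internally when `generate()` is called. To make the word-frequency step from the outline explicit, you can count the segmented words yourself with `collections.Counter` and feed the result to `WordCloud.generate_from_frequencies`. A minimal sketch, where the sample `words` list stands in for the output of `jieba.cut` on your own text:

```python
from collections import Counter

# Sample tokens standing in for jieba.cut(text) output
words = ["词云", "词频", "统计", "词云", "词云", "统计"]

# Keep only tokens longer than one character to filter out
# punctuation and single-character particles
freq = Counter(w for w in words if len(w) > 1)

print(freq.most_common(2))  # [('词云', 3), ('统计', 2)]

# With wordcloud installed, pass the counts directly:
# wordcloud = WordCloud(font_path=...).generate_from_frequencies(freq)
```

Counting explicitly also lets you inspect or filter the frequency table (e.g. dropping stop words) before drawing the cloud.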

