First, consider the WeChat article 《叮咚!院“十佳”優秀經管青年組團出道,快來打call~》. Inspecting the page source shows that the text of the article is nested, level by level, under <div id="js_article"> --> <div class="rich_media_inner"> --> <div id="page_content"> --> <div class="rich_media_area_primary"> --> <div id="img-content"> --> <div class="rich_media_content">. With BeautifulSoup's find_all method we can then grab everything inside the div whose class is rich_media_content. The hierarchy of the page source is as follows:
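Sketched as a plain tree (attribute values exactly as described above; the real page contains many more attributes and sibling nodes), the nesting is:

div#js_article
└── div.rich_media_inner
    └── div#page_content
        └── div.rich_media_area_primary
            └── div#img-content
                └── div.rich_media_content   <- the article body text lives here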
The following code snippet scrapes the body text and stores it in a txt file, ready for the word-frequency and word-cloud analysis that follows.
# 叮咚!院“十佳”優秀經管青年組團出道,快來打call~
import requests
from bs4 import BeautifulSoup

url = "https://mp.weixin.qq.com/s?__biz=MzI3MTc1NDExOQ==&mid=2247498465&idx=1&sn=6a8f71343b04d97c79687c7d71ccc0f1&chksm=eb3e4c09dc49c51f217fe4c5a22ba54b78213378640da078217bd375caf1406c420a615d7dfe&mpshare=1&scene=23&srcid=&sharer_sharetime=1591921147875&sharer_shareid=b9489319d498f78fa93ed3b25882d1f9#rd"
# send the request with a desktop-browser User-Agent so WeChat serves the normal page
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36'
}
r = requests.get(url, headers=headers, timeout=5)
soup1 = BeautifulSoup(r.text, "lxml")
# the article body sits in the div whose class is rich_media_content
text1 = soup1.find_all("div", class_="rich_media_content")
print(text1[0].get_text())

# append the extracted text to jgxysj.txt for the later analysis
jgxytext = text1[0].get_text()
txt = open("jgxysj.txt", "a+", encoding="utf-8")
txt.write(jgxytext)
txt.close()
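One caveat, as an addition not in the original script: if WeChat answers with a verification or error page, find_all returns an empty list and text1[0] raises an IndexError. A minimal guard might look like:

if not text1:
    raise RuntimeError("rich_media_content not found; the page may require verification")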
Part of the script's output is shown in the figure below:
The analysis of 《喜訊 | 我院三個團支部榮獲“福州大學十佳團支部立項”榮譽稱號》 follows the same process: its body text also sits under the <div class="rich_media_content"> tag, as the page source below shows:
The code snippet is as follows; the scraped text is appended to the jgxysj.txt file.
# 喜訊 | 我院三個團支部榮獲“福州大學十佳團支部立項”榮譽稱號
import requests
from bs4 import BeautifulSoup

url = "https://mp.weixin.qq.com/s?__biz=MzI3MTc1NDExOQ==&mid=2247498465&idx=2&sn=23f92d8bf222d3ad246de846e59cc517&chksm=eb3e4c09dc49c51f7417b81be7248fdc13caa3b12b1dcf9cb7054747e58ca3cd2105bd4a77cd&mpshare=1&scene=23&srcid=&sharer_sharetime=1591921420851&sharer_shareid=b9489319d498f78fa93ed3b25882d1f9#rd"
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36'
}
r = requests.get(url, headers=headers, timeout=5)
soup2 = BeautifulSoup(r.text, "lxml")
text2 = soup2.find_all("div", class_="rich_media_content")
print(text2[0].get_text())

# append this article's text to the same file, jgxysj.txt
jgxytext = text2[0].get_text()
txt = open("jgxysj.txt", "a+", encoding="utf-8")
txt.write(jgxytext)
txt.close()
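Since the two scripts differ only in the URL, they could be folded into a single helper; a sketch (the function name scrape_article is mine, not from the original):

def scrape_article(url, outfile="jgxysj.txt"):
    """Fetch one WeChat article and append its body text to outfile."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36'
    }
    r = requests.get(url, headers=headers, timeout=5)
    soup = BeautifulSoup(r.text, "lxml")
    body = soup.find_all("div", class_="rich_media_content")
    with open(outfile, "a+", encoding="utf-8") as f:
        f.write(body[0].get_text())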
Next comes the word-frequency analysis. The key step is to segment the text into words with the jieba library and count how often each word appears. In the code below, the jgxy_list list holds the segmented words, and the counts dictionary maps each word (key) to its number of occurrences (value); single-character words are filtered out during counting. The code is as follows:
# word-frequency analysis
import jieba

jgxy = open('jgxysj.txt', "r", encoding="utf-8").read()
text = jieba.lcut(jgxy)                  # segment the whole corpus into words
counts = {}
jgxy_list = []
for word in text:
    if len(word) == 1:                   # drop single-character words
        continue
    counts[word] = counts.get(word, 0) + 1
    jgxy_list.append(word.replace(" ", ""))
cloud_text = ",".join(jgxy_list)         # joined text, reused later for the word cloud

items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)   # sort by frequency, descending
for i in range(min(200, len(items))):          # print (up to) the top 200 words
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
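For reference (not part of the original script), the counting loop above is equivalent to collections.Counter from the standard library, which shortens the code considerably:

from collections import Counter

words = [w for w in jieba.lcut(jgxy) if len(w) > 1]   # same single-character filter
counts = Counter(words)
for word, count in counts.most_common(200):           # top 200 by frequency
    print("{0:<10}{1:>5}".format(word, count))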
The execution result is shown in the figure below:
As the counts show, the most frequent words across the two articles are terms such as 福州大學 (Fuzhou University), 學生 (student), and 學院 (college). Some of these words would only add noise to the analysis, so they are set as stopwords when the word cloud is drawn. A map of China is also used as the background mask of the word cloud. The word-cloud code is as follows:
# draw the word cloud
from PIL import Image
import numpy as np
from wordcloud import WordCloud
from matplotlib import pyplot as plt

cloud_mask = np.array(Image.open("ChinaMap.jpg"))   # China-map image used as the mask
st = set(["福州大學", "同學", "學院", "獲獎", "感言", "學生"])   # stopwords to filter out
jgxywd = WordCloud(
    background_color="white",
    mask=cloud_mask,
    max_words=200,
    font_path="STXINGKA.TTF",    # a Chinese font so CJK characters render correctly
    min_font_size=10,
    max_font_size=50,
    width=600,
    height=600,
    stopwords=st,
)
jgxywd.generate(cloud_text)      # cloud_text was built in the word-frequency step
jgxywd.to_file("jgxywordcloud.PNG")
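The script saves the image to jgxywordcloud.PNG; pyplot, already imported above, can also preview it inline. This preview snippet is my addition, not part of the original:

plt.imshow(jgxywd, interpolation="bilinear")   # render the generated cloud
plt.axis("off")                                # hide the axes
plt.show()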
As the word cloud drawn from the combined text of the two articles shows, the frequently occurring words include 個人事跡 (personal deeds), 榮譽 (honor), 共青團干部 (Communist Youth League cadre), 大賽 (competition), and 工作 (work). From this we can infer that the students who won the college's "Top Ten Students" title have mostly done student work, many as Communist Youth League cadres, have taken part in competitions, and have earned a number of honors. So I hope to take part in more student activities and study actively, to become a better version of myself.