Python Web Scraping in Practice: A Dianping (大眾點評) Crawler with Scrapy


Introduction

Let's scrape the food section of Dianping (大眾點評) and then run some visual analysis on the data we collect.

Development Tools

Python version: 3.6.4

Required modules:

the scrapy module;

the requests module;

the fontTools module;

the pyecharts module;

plus some modules that ship with Python.

Environment Setup

Install Python, add it to your PATH, and pip install the required modules.
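With pip, the one-liner below pulls in everything (note that fonttools is the PyPI package name for fontTools):

pip install scrapy requests fonttools pyecharts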

Data Scraping

First, we create a new Scrapy project for Dianping:

scrapy startproject dazhongdianping

The resulting project looks like this:
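For reference, in place of the original screenshot, this is the standard scaffold scrapy startproject generates:

dazhongdianping/
    scrapy.cfg
    dazhongdianping/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py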

Next, let's scout the target pages on Dianping, taking Hangzhou as the example:

http://www.dianping.com/hangzhou/ch10

The data we want to scrape is what was marked with red boxes on the listing page: each shop's name, review count, average price, cuisine type, business district, address, and its taste/environment/service scores.

Define these fields in items.py:

import scrapy


class DazhongdianpingItem(scrapy.Item):
    '''Fields to scrape for each shop'''
    # Shop name
    shopname = scrapy.Field()
    # Number of reviews
    num_comments = scrapy.Field()
    # Average price per person
    avg_price = scrapy.Field()
    # Cuisine type
    food_type = scrapy.Field()
    # Business district
    business_district_name = scrapy.Field()
    # Address
    location = scrapy.Field()
    # Taste score
    taste_score = scrapy.Field()
    # Environment score
    environment_score = scrapy.Field()
    # Service score
    serve_score = scrapy.Field()

Then we use regular expressions to pull out the data we want. (I won't walk through the font-based anti-scraping here; a quick search on Zhihu turns up plenty of write-ups. In short: download the site's custom font file and work out the mapping from its glyphs back to real characters, which is where the crack dicts used in the code below come from.)
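As a hedged sketch of that mapping step, the snippet below builds one of those dicts with fontTools. The font URL (read it out of the page's CSS), the number of placeholder glyphs to skip, and the reference string (written down once after inspecting the font in a font viewer) are all assumptions to verify against the real font:

import requests
from fontTools.ttLib import TTFont

def build_crack_dict(font_url, reference_chars):
    '''Map the font's obfuscated glyphs to real characters (a hedged sketch).'''
    # Download the custom WOFF font the page uses for obfuscated text
    with open('crack.woff', 'wb') as f:
        f.write(requests.get(font_url).content)
    font = TTFont('crack.woff')
    # Glyph names look like 'unif8c7'; the first two entries are usually
    # placeholders ('.notdef' and 'x'), so skip them -- verify for your font
    glyph_names = font.getGlyphOrder()[2:]
    crack_dict = {}
    for glyph_name, char in zip(glyph_names, reference_chars):
        # The raw HTML carries entities like &#xf8c7;, so key on that form
        crack_dict['&#x{};'.format(glyph_name[3:].lower())] = char
    return crack_dict

# Hypothetical usage: both the URL and the character order are placeholders
shopnum_crack_dict = build_crack_dict('http://example.com/shopNum.woff', '1234567890')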

# Extract the fields we want (this runs inside the spider's parse method;
# the *_crack_dict mappings come from the font-decoding step above)
all_infos = re.findall(r'<li class="" >(.*?)<div class="operate J_operate Hide">', response.text, re.S|re.M)
for info in all_infos:
    item = DazhongdianpingItem()
    # --Shop name
    item['shopname'] = re.findall(r'<h4>(.*?)</h4>', info, re.S|re.M)[0]
    # --Number of reviews
    try:
        num_comments = re.findall(r'LXAnalytics\(\'moduleClick\', \'shopreview\'\).*?>(.*?)</b>', info, re.S|re.M)[0]
        num_comments = ''.join(re.findall(r'>(.*?)<', num_comments, re.S|re.M))
        for k, v in shopnum_crack_dict.items():
            num_comments = num_comments.replace(k, str(v))
        item['num_comments'] = num_comments
    except IndexError:
        item['num_comments'] = 'null'
    # --Average price per person
    try:
        avg_price = re.findall(r'<b>¥(.*?)</b>', info, re.S|re.M)[0]
        avg_price = ''.join(re.findall(r'>(.*?)<', avg_price, re.S|re.M))
        for k, v in shopnum_crack_dict.items():
            avg_price = avg_price.replace(k, str(v))
        item['avg_price'] = avg_price
    except IndexError:
        item['avg_price'] = 'null'
    # --Cuisine type
    food_type = re.findall(r'<a.*?data-click-name="shop_tag_cate_click".*?>(.*?)</span>', info, re.S|re.M)[0]
    food_type = ''.join(re.findall(r'>(.*?)<', food_type, re.S|re.M))
    for k, v in tagname_crack_dict.items():
        food_type = food_type.replace(k, str(v))
    item['food_type'] = food_type
    # --Business district
    business_district_name = re.findall(r'<a.*?data-click-name="shop_tag_region_click".*?>(.*?)</span>', info, re.S|re.M)[0]
    business_district_name = ''.join(re.findall(r'>(.*?)<', business_district_name, re.S|re.M))
    for k, v in tagname_crack_dict.items():
        business_district_name = business_district_name.replace(k, str(v))
    item['business_district_name'] = business_district_name
    # --Address
    location = re.findall(r'<span class="addr">(.*?)</span>', info, re.S|re.M)[0]
    location = ''.join(re.findall(r'>(.*?)<', location, re.S|re.M))
    for k, v in address_crack_dict.items():
        location = location.replace(k, str(v))
    item['location'] = location
    # --Taste score ('口味' is the label on the page)
    try:
        taste_score = re.findall(r'口味<b>(.*?)</b>', info, re.S|re.M)[0]
        taste_score = ''.join(re.findall(r'>(.*?)<', taste_score, re.S|re.M))
        for k, v in shopnum_crack_dict.items():
            taste_score = taste_score.replace(k, str(v))
        item['taste_score'] = taste_score
    except IndexError:
        item['taste_score'] = 'null'
    # --Environment score ('环境', simplified Chinese as served by the site)
    try:
        environment_score = re.findall(r'环境<b>(.*?)</b>', info, re.S|re.M)[0]
        environment_score = ''.join(re.findall(r'>(.*?)<', environment_score, re.S|re.M))
        for k, v in shopnum_crack_dict.items():
            environment_score = environment_score.replace(k, str(v))
        item['environment_score'] = environment_score
    except IndexError:
        item['environment_score'] = 'null'
    # --Service score ('服务', simplified Chinese as served by the site)
    try:
        serve_score = re.findall(r'服务<b>(.*?)</b>', info, re.S|re.M)[0]
        serve_score = ''.join(re.findall(r'>(.*?)<', serve_score, re.S|re.M))
        for k, v in shopnum_crack_dict.items():
            serve_score = serve_score.replace(k, str(v))
        item['serve_score'] = serve_score
    except IndexError:
        item['serve_score'] = 'null'
    # --Hand the populated item to the pipeline
    yield item
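For context, here is a minimal sketch of the spider this parse method could live in; the spider name, User-Agent, and pagination selector are my assumptions, not the article's exact code:

import re
import scrapy
from dazhongdianping.items import DazhongdianpingItem

class DazhongdianpingSpider(scrapy.Spider):
    name = 'dazhongdianping'
    start_urls = ['http://www.dianping.com/hangzhou/ch10']
    custom_settings = {
        # Dianping blocks Scrapy's default client, so send a browser UA
        # and throttle requests
        'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'DOWNLOAD_DELAY': 2,
    }

    def parse(self, response):
        # ... the extraction loop shown above goes here ...
        # Follow the next-page link, if any (selector is an assumption)
        next_page = response.css('a.next::attr(href)').extract_first()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

Dianping is quick to block clients that look like bots, so the browser User-Agent and the download delay are worth keeping.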

Finally, run the following command in a terminal to scrape the data (on newer Scrapy releases, plain -o infos.json is enough, since the output format is inferred from the file extension):

scrapy crawl dazhongdianping -o infos.json -t json
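Since the introduction promises some visual analysis with pyecharts, here is a rough sketch of charting the exported JSON, assuming the pyecharts v1 API and the field names defined in items.py:

import json
from collections import defaultdict
from pyecharts import options as opts
from pyecharts.charts import Bar

with open('infos.json', encoding='utf-8') as f:
    shops = json.load(f)

# Average price per cuisine type, skipping shops where scraping failed
prices = defaultdict(list)
for shop in shops:
    if shop['avg_price'] != 'null':
        prices[shop['food_type']].append(float(shop['avg_price']))
avg_by_type = {t: sum(v) / len(v) for t, v in prices.items()}

bar = Bar()
bar.add_xaxis(list(avg_by_type.keys()))
bar.add_yaxis('Average price (CNY)', [round(v, 1) for v in avg_by_type.values()])
bar.set_global_opts(title_opts=opts.TitleOpts(title='Average price by cuisine type'))
bar.render('avg_price.html')  # writes an interactive HTML chart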

That wraps up this article; thanks for reading. Follow me for a daily Python scraping series; the next article covers a crawler for the China Earthquake Networks Center (中國地震台網).

