1. Introduction
Scrapy has a clean, well-organized architecture. Built on Twisted's asynchronous framework, it makes full use of machine resources, which makes it a solid foundation for scaling a crawler up. This article explains how to install the framework quickly and get it running.
2. Installing Twisted
2.1 Download the Twisted wheel
As with installing the lxml library (see section 3.1 of "Installing Python 3.5 for Writing Web Crawlers"), first install Twisted from a .whl file matching your Python version. Download it from: http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
2.2 Install Twisted
Open a Command Prompt window and run:
pip install E:\demo\Twisted-16.4.1-cp35-cp35m-win_amd64.whl (the path of the downloaded Twisted .whl file)
3. Installing Scrapy
Once Twisted is installed, installing Scrapy is simple. In a Command Prompt window, run: pip install scrapy
Then install the companion module pypiwin32. In a Command Prompt window, run: pip install pypiwin32
4. Testing Scrapy: writing a crawler on the Scrapy framework
Create a new Scrapy project named fourth (this is the fourth tutorial since installing Python 3.5; if you are interested, start from the beginning of the series). In any directory (E:\demo here), hold Shift and right-click -> "Open command window here", then run:
E:\demo>scrapy startproject fourth
This command creates a fourth directory with the following contents:
fourth/
    scrapy.cfg
    fourth/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            ...
Edit the project settings file settings.py. Some sites place a file named robots.txt in their root directory declaring the rules they want crawlers to follow; Scrapy honors that file by default, i.e. ROBOTSTXT_OBEY defaults to True. For this test we need to change it: open settings.py in the project directory (here E:\demo\fourth\fourth) and set ROBOTSTXT_OBEY to False.
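In settings.py the change amounts to a single line (a minimal fragment; every other setting stays at its generated default):

```python
# settings.py -- disable robots.txt compliance so the test crawl is not blocked
ROBOTSTXT_OBEY = False
```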
Bring in the latest GooSeeker rule-extractor module gooseeker.py (download: https://github.com/FullerHua/gooseeker/tree/master/core) and copy it into the project directory, here E:\demo\fourth\gooseeker.py.
Create the spider module: go to the project directory E:\demo\fourth, open a Command Prompt window there, and run:
E:\demo\fourth>scrapy genspider anjuke anjuke.com
This command creates the module file anjuke.py under E:\demo\fourth\fourth\spiders. Open it in Notepad and add the code. The main code:
# -*- coding: utf-8 -*-
# Scrapy spider module
# Crawls Anjuke rental listings
# Results are saved to anjuke-result.xml
import os
import scrapy
from gooseeker import GsExtractor

class AnjukeSpider(scrapy.Spider):
    name = "anjuke"
    allowed_domains = ["anjuke.com"]
    start_urls = (
        'http://bj.zu.anjuke.com/fangyuan/p1',
    )

    def parse(self, response):
        print("----------------------------------------------------------------------------")
        # instantiate the extractor
        bbsExtra = GsExtractor()
        # set the XSLT extraction rule via the GooSeeker API
        bbsExtra.setXsltFromAPI("31d24931e043e2d5364d03b8ff9cc77e", "安居客_房源")
        # call extractHTML to pull out the wanted content
        result = bbsExtra.extractHTML(response.body)
        # print the result (the gbk round-trip drops characters the Windows console cannot display)
        print(str(result).encode('gbk', 'ignore').decode('gbk'))
        # save the result
        file_path = os.path.join(os.getcwd(), "anjuke-result.xml")
        with open(file_path, "wb") as f:
            f.write(result)
        # print where the result file was saved
        print("Result file: " + file_path)
Start the crawler: go to the project directory E:\demo\fourth, open a Command Prompt window there, and run:
E:\demo\fourth>scrapy crawl anjuke
Note: if the site reports redirect errors during the crawl, try changing the user agent and then restart the crawler. The steps are as follows:
1. In the spider project directory (here E:\demo\fourth\fourth), create a module file middlewares.py, open it in Notepad, and add the following code:
# -*- coding: utf-8 -*-
# Rotate the user agent at random
import random
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware

class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        # pick one user agent at random for each outgoing request
        ua = random.choice(self.user_agent_list)
        if ua:
            request.headers.setdefault('User-Agent', ua)

    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]
2. Edit the project settings file settings.py and add:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'fourth.middlewares.RotateUserAgentMiddleware': 400,
}
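The rotation logic itself is easy to check outside of Scrapy. The sketch below is a standalone illustration (not part of the project files): it picks a random entry from a shortened user-agent list the same way process_request does for each request:

```python
import random

# a short stand-in list; the middleware above carries a much longer one
user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
]

def pick_user_agent(agents):
    # mirrors RotateUserAgentMiddleware.process_request: one random choice per request
    return random.choice(agents)

# every pick comes from the list, so the header is always a plausible browser UA
print(pick_user_agent(user_agent_list))
```

Priority 400 places the middleware in the downloader chain while the `None` entry disables Scrapy's built-in UserAgentMiddleware, so the two do not fight over the same header.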
View the saved result: go to the Scrapy project directory (here E:\demo\fourth), find the file named anjuke-result.xml, and open it.
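If you prefer to inspect the result programmatically, the standard library can parse the XML. This is a generic sketch: the element names used below (房源, 標題) are hypothetical stand-ins, since the real tag names depend on the GooSeeker extraction rule that produced the file:

```python
import xml.etree.ElementTree as ET

# a small stand-in document; in practice you would load the crawl output instead,
# e.g. root = ET.parse("anjuke-result.xml").getroot()
sample = "<房源列表><房源><標題>example listing</標題></房源></房源列表>"
root = ET.fromstring(sample)

# count extracted records and show the first title
records = root.findall("房源")
print(len(records))                   # number of extracted records
print(records[0].find("標題").text)   # title of the first record
```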
5. Summary
Installing pypiwin32 timed out and disconnected once; rerunning the command succeeded. If repeated attempts keep failing, try connecting through a VPN before installing. The next article, "Python Crawler in Practice: Single-Page Scraping", will show how to crawl Weibo data (a single page) and will integrate the Python crawler with the GooSeeker rule extractor as the interface for a general-purpose collector. Interested readers are welcome to join the discussion.
6. GooSeeker open-source code download source
7. Change log
- 2017.03.02 Added the workaround for redirect errors