Scrapy: a first test run on Python 3


1. Introduction

The article "A First Look at Scrapy's Architecture" covered Scrapy's architecture; this post actually installs Scrapy and runs a crawler. It uses the official tutorial as its example; the complete code can be downloaded from GitHub.

2. Environment setup

  • Test environment: Windows 10, Python 3.4.3 32-bit
  • Install Scrapy:   $ pip install Scrapy                 # in practice the install aborted midway several times due to unstable server connections


3. Writing and running the first Scrapy crawler

3.1. Generate a new project: tutorial

$ scrapy startproject tutorial


The project directory structure follows Scrapy's standard startproject template:

tutorial/
    scrapy.cfg
    tutorial/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py

3.2. Define the items to scrape

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()


3.3. Define the spider

import scrapy
from tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item


3.4. Run

$ scrapy crawl dmoz -o items.json


1) The run failed with errors:
   A) ImportError: cannot import name '_win32stdio'
   B) ImportError: No module named 'win32api'

2) Troubleshooting: the official FAQ and posts on Stack Overflow show that Scrapy had not yet been fully tested on Python 3, so minor issues remained.

3) Fixes:
   A) Manually download _win32stdio and _pollingfile from twisted/internet and place them under Lib\site-packages\twisted\internet in the Python install directory
   B) Download and install pywin32

Running again succeeds! Scrapy's log output appears on the console; once the crawl finishes and the process exits, open the result file items.json in the project directory to see the scraped results stored as JSON:

[
{"title": ["        About       "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
{"title": [" Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},
{"title": [" Suggest a Site "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},
{"title": [" Help "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},
{"title": [" Login "], "desc": [" ", " "], "link": ["/editors/"]},
{"title": [], "desc": [" ", " Share via Facebook "], "link": []},
{"title": [], "desc": [" ", " Share via Twitter "], "link": []},
{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},
{"title": [], "desc": [" ", " Share via e-Mail "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [" About "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
{"title": [" Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},
{"title": [" Suggest a Site "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},
{"title": [" Help "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},
{"title": [" Login "], "desc": [" ", " "], "link": ["/editors/"]},
{"title": [], "desc": [" ", " Share via Facebook "], "link": []},
{"title": [], "desc": [" ", " Share via Twitter "], "link": []},
{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},
{"title": [], "desc": [" ", " Share via e-Mail "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [], "desc": [" ", " "], "link": []}
]
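The output carries list-valued fields and padding whitespace. A minimal post-processing sketch; clean_items is a hypothetical helper written for this post, not part of Scrapy:

```python
import json

def clean_items(items):
    """Flatten Scrapy's list-valued fields and strip padding whitespace."""
    return [
        {
            "title": item["title"][0].strip() if item["title"] else None,
            "link": item["link"][0] if item["link"] else None,
        }
        for item in items
    ]

# Usage against the crawl output produced above:
# with open("items.json", encoding="utf-8") as f:
#     print(clean_items(json.load(f)))
```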

The first Scrapy test run was a success.

4. Next steps

Next, we will use the GooSeeker API to implement the crawler, eliminating the manual work of writing and testing an XPath for every item. There are currently two plans:
  • Wrap a method in gsExtractor that automatically derives each item's XPath from the XSLT content
  • Automatically extract each item's result from gsExtractor's extraction output
Which plan to adopt will be decided in the upcoming experiments and released in a new version of gsExtractor.

 

5. Revision history

2016-06-17: V1.0, first release

