The Scrapy Framework


Scrapy

Scrapy is an application framework written to crawl websites and extract structured data. It can be used in a wide range of programs for data mining, information processing, and archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy has broad uses in data mining, monitoring, and automated testing.

Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows.

Scrapy consists of the following main components:

  • Engine (Scrapy)
    Handles the data flow of the whole system and triggers events (the core of the framework).
  • Scheduler
    Accepts requests from the engine, pushes them onto a queue, and returns them when the engine asks again. Think of it as a priority queue of URLs (the addresses to crawl): it decides which URL is fetched next and removes duplicate URLs.
  • Downloader
    Downloads page content and hands it back to the spiders. (The downloader is built on Twisted, an efficient asynchronous model.)
  • Spiders
    The spiders do the real work: they extract the information you need from specific pages, i.e. the items. They can also extract links so that Scrapy keeps crawling the next pages.
  • Item Pipeline
    Processes the items the spiders extract from pages. Its main jobs are persisting items, validating them, and cleaning out unwanted data. After a page is parsed by a spider, the items are sent to the pipeline and processed through several stages in a specific order.
  • Downloader Middlewares
    A hook framework between the Scrapy engine and the downloader that processes the requests and responses passing between them.
  • Spider Middlewares
    A hook framework between the Scrapy engine and the spiders that processes the spiders' response input and request output.
  • Scheduler Middlewares
    Middleware between the Scrapy engine and the scheduler, handling the requests and responses sent from the engine to the scheduler.

The Scrapy workflow is roughly:

  1. The engine takes a URL from the scheduler for the next crawl.
  2. The engine wraps the URL into a Request and passes it to the downloader.
  3. The downloader fetches the resource and wraps it into a Response.
  4. A spider parses the Response.
  5. If items are parsed out, they are handed to the item pipeline for further processing.
  6. If URLs are parsed out, they are handed to the scheduler to wait for crawling.
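
A minimal spider sketch tying this flow to code. The site, selectors, and item fields below are made up for illustration; the point is that parse() yields items to the pipeline (step 5) and new Requests back to the scheduler (step 6):

import scrapy
from scrapy.http import Request

class DemoSpider(scrapy.Spider):
    name = 'demo'
    start_urls = ['http://example.com/']        # steps 1-3: scheduled and downloaded by the framework

    def parse(self, response):                  # step 4: the downloader's Response arrives here
        for row in response.xpath('//div[@class="row"]'):                      # hypothetical markup
            yield {'title': row.xpath('./a/text()').extract_first()}           # step 5: item -> pipeline
        next_page = response.xpath('//a[@rel="next"]/@href').extract_first()   # hypothetical "next" link
        if next_page:
            yield Request(response.urljoin(next_page), callback=self.parse)    # step 6: URL -> scheduler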

I. Installation

Linux:
    pip3 install scrapy 

Windows:
    pip3 install wheel
    # download the Twisted wheel that matches your Python version, e.g. save it as D:\Twisted-xx.whl
    pip3 install D:\Twisted-xx.whl
    pip3 install pywin32
    
    pip3 install scrapy          # running this first without the Twisted wheel fails with a Twisted install error,
                                 # which is why the pre-built Twisted wheel is installed beforehand


PS: 
    - At the time this was written, Twisted support on Python 3 was incomplete
    - Scrapy ran better on Python 2

import scrapy        # a quick way to confirm the installation worked

II. Basic Usage

1. Basic commands

Django:
	django-admin startproject mysite
	cd mysite
	python manage.py startapp app01
	

Scrapy:
	# create a project; this creates a project directory in the current directory (similar to Django)
	scrapy startproject sp1
		generated layout:
			sp1
				- sp1
					- spiders          directory holding the spider applications you create
					- middlewares.py	middlewares
					- items.py			item definitions; used together with pipelines.py for persistence
					- pipelines.py		persistence
					- settings.py		settings file
				- scrapy.cfg 			configuration
		
	# create spider applications
	cd sp1
	scrapy genspider xiaohuar xiaohuar.com		# creates xiaohuar.py
	scrapy genspider baidu baidu.com		# creates baidu.py
	
	# list the spider applications
	scrapy list

	# run a spider (from inside the project)
	scrapy crawl baidu
	scrapy crawl baidu --nolog

File descriptions:

  • scrapy.cfg  the project's main configuration entry (the crawler-related settings actually live in settings.py)
  • items.py    data storage templates for structured data, similar to Django's models
  • pipelines.py    data processing behaviour, e.g. persisting the structured data
  • settings.py configuration, e.g. recursion depth, concurrency, download delay
  • spiders      the spider directory, where you create files and write crawling rules

Note: spider files are usually named after the domain of the site being crawled.

2. Basic operations

2.1 Filtering with selectors

from scrapy.selector import Selector

hxs = Selector(response=response)
# print(hxs)
user_list = hxs.xpath('//div[@class="item masonry_brick"]')
for item in user_list:
    price = item.xpath('./span[@class="price"]/text()').extract_first()
    url = item.xpath('div[@class="item_t"]/div[@class="class"]//a/@href').extract_first()
    print(price,url)

result = hxs.xpath('//a[re:test(@href,"http://www.xiaohuar.com/list-1-\d+.html")]/@href')
print(result)
result = ['http://www.xiaohuar.com/list-1-1.html','http://www.xiaohuar.com/list-1-2.html']

2.2 yield Request(url=url,callback=self.parse)   # keeps the crawl going iteratively

2.3 Full implementation

# -*- coding: utf-8 -*-
import scrapy

class BaiduSpider(scrapy.Spider):
    name = 'baidu'                          # spider name; used to start the spider from the command line
    allowed_domains = ['baidu.com']         # allowed domains
    start_urls = ['http://baidu.com/']     # start URLs

    def parse(self, response):
        print(response.text)
        print(response.body)
baidu.py
import scrapy
from scrapy.selector import HtmlXPathSelector,Selector
from scrapy.http import Request

class XiaohuarSpider(scrapy.Spider):
    name = 'xiaohuar'
    allowed_domains = ['xiaohuar.com']
    start_urls = ['http://www.xiaohuar.com/hua/']            # start URL

    def parse(self, response):
        # deprecated approach
        # hxs = HtmlXPathSelector(response)     # wrap the downloaded response in a selector object
        # print(hxs)
        # result = hxs.select('//a[@class="item_list"]')        # select: run a query; //a finds every a tag on the page
        ## result = hxs.select('//a[@class="item_list"]').extract()        # .extract() turns the result into a list of strings [<a></a>,<a></a>...] instead of selector objects
        ## result = hxs.select('//a[@class="item_list"]').extract_first()        # take the first match
        ## result = hxs.select('//a[@class="item_list"]/@href').extract_first()        # take the href attribute
        ## result = hxs.select('//a[@class="item_list"]/text()').extract_first()        # take the text content

        ############################# the style above is not recommended #############################


        ############################### recommended style ##############################

        hxs = Selector(response=response)
        # print(hxs)
        user_list = hxs.xpath('//div[@class="item masonry_brick"]')     # returns selector objects, but they can be iterated over; finds every div with class="item masonry_brick"
        for item in user_list:                                              # each item is also a selector object
            price = item.xpath('.//span[@class="price"]/text()').extract_first()     # .//span searches all descendants relative to the current tag
            # price = item.xpath('//span[@class="price"]/text()').extract_first() would be wrong, because //span searches the whole HTML document
            url = item.xpath('div[@class="item_t"]/div[@class="class"]//a/@href').extract_first()
            # / means direct children, // means all descendants, but only when used inside the expression; a leading // or / has its own special meaning
            print(price,url)
            
        # the code above only handles the first index page; the code below finds the pagination links
        result = hxs.xpath('//a[re:test(@href,"http://www.xiaohuar.com/list-1-\d+.html")]/@href')    # re:test() matches with a regular expression
        print(result)
        result = ['http://www.xiaohuar.com/list-1-1.html','http://www.xiaohuar.com/list-1-2.html']

        # rules: schedule every pagination URL
        for url in result:
            yield Request(url=url,callback=self.parse)      # yield Request(url=url) just wraps the url and puts it into the scheduler; callback=self.parse keeps issuing requests, so the crawl runs iteratively
xiaohuar.py

Notes:

Selectors:
	//			# all descendants
	/			# direct children
	/@attr		# take an attribute
	/text()		# take the text content

	
Special, relative to the current node:
	item.xpath('./')	# search the current node's descendants
	item.xpath('a')		# search the current node's direct children            
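
A small, self-contained way to try these rules without running a crawl, using Selector(text=...) on a made-up HTML snippet (the markup and class names are purely illustrative):

from scrapy.selector import Selector

html = '<div class="item"><a href="/girl/1">p1</a><div class="inner"><a href="/girl/2">p2</a></div></div>'
sel = Selector(text=html)

print(sel.xpath('//a/@href').extract())                               # // searches all descendants -> ['/girl/1', '/girl/2']
print(sel.xpath('//div[@class="item"]/a/text()').extract_first())     # / only looks at direct children -> 'p1'
item = sel.xpath('//div[@class="item"]')[0]
print(item.xpath('.//a/@href').extract())                             # .// searches descendants of the current node
print(item.xpath('a/@href').extract_first())                          # a bare tag name looks at direct children only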

  

III. Going Deeper

(1) The following sections use logging in to Chouti (dig.chouti.com) and upvoting posts as the running example.

1. Start URLs

If callback=self.parse1 is not specified, the parse method is executed by default once the download finishes.

import scrapy
from scrapy.http import Request

class ChoutiSpider(scrapy.Spider):
	name = 'chouti'
	allowed_domains = ['chouti.com']
	start_urls = ['http://chouti.com/']

	def start_requests(self):       # looking at the source: if we do not define start_requests, the inherited scrapy.Spider.start_requests is used
		for url in self.start_urls:
			yield Request(url, dont_filter=True,callback=self.parse1)       # dont_filter=True disables URL deduplication for this request

	def parse1(self, response):
		pass

 

2. How to send a POST request with headers, cookies, and data

requests.get(params={},headers={},cookies={})
requests.post(params={},headers={},cookies={},data={},json={})
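
For comparison, a hedged sketch of how those requests-library calls map onto Scrapy. FormRequest is Scrapy's built-in helper that url-encodes formdata and sets the Content-Type header for you; the spider name, URLs, and field values below are placeholders:

import scrapy
from scrapy.http import Request, FormRequest

class LoginDemoSpider(scrapy.Spider):
    name = 'login_demo'

    def start_requests(self):
        # GET with headers and cookies
        yield Request(url='http://example.com/', method='GET',
                      headers={'User-Agent': 'Mozilla/5.0'},
                      cookies={'session': 'xxx'},
                      callback=self.parse)
        # POST a form without building the body by hand
        yield FormRequest(url='http://example.com/login',
                          formdata={'user': 'xyp', 'pwd': '123'},
                          callback=self.parse)

    def parse(self, response):
        print(response.status)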

2.1 Parameters of a Request

url, 
method='GET', 
headers=None, 
body=None,
cookies=None,
...

2.2 GET請求

url, 
method='GET', 
headers={}, 
cookies={}, cookiejar            # cookies can be a dict or a CookieJar object

2.3 POST requests

url, 
method='POST', 
headers={}, 
cookies={}, cookiejar            # cookies can be a dict or a CookieJar object
body=None,                        # request body
    with the header Content-Type: application/x-www-form-urlencoded; charset=UTF-8, the data looks like "phone=86155fa&password=asdf&oneMonth=1" 
    with the JSON header Content-Type: application/json; charset=UTF-8, the data is a dict serialized to a string "{k1:'v1','k2':'v2'}"
    
    with application/x-www-form-urlencoded; charset=UTF-8, form_data = {'user':'xyp','pwd': 123} has to be joined into "user=xyp&pwd=123"
    the standard library can do this joining for you:
        import urllib.parse
        data = urllib.parse.urlencode({'k1':'v1','k2':'v2'})
        print(data)
        # ---> "k1=v1&k2=v2"  
         
        
    with the JSON header application/json; charset=UTF-8
        json.dumps({'k1':'v1','k2':'v2'})
        
        '{"k1": "v1", "k2": "v2"}'

2.4 POST request example

 Request(
    url='http://dig.chouti.com/login',
    method='POST',
    headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
    body='phone=8615131255089&password=pppppppp&oneMonth=1',
    callback=self.check_login
)

2.5 cookie

Request(
    url='http://dig.chouti.com/login',
    method='POST',
    headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
    body='phone=8615131255089&password=pppppppp&oneMonth=1',
    cookies=self.cookie_dict,
    callback=self.check_login
)

Full implementation:

# the following code keeps crawling in an endless loop; deduplication is added on top of it
# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request
from scrapy.selector import Selector

class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['chouti.com']
    start_urls = ['http://chouti.com/']
    cookie_dict = {}
    """
    1. 發送一個GET請求,抽屜
       獲取cookie
       
    2. 用戶密碼POST登錄:攜帶上一次cookie
       返回值:9999表示登錄成功
       
    3. 為所欲為,攜帶cookie,點贊
    """
    def start_requests(self):       # 看源碼,如果我們沒有start_requests函數,默認會執行繼承的類scrapy.Spider里的start_requests方法
        for url in self.start_urls:
            yield Request(url, dont_filter=True,callback=self.parse1)       # dont_filter=True對爬取的url不去重

    def parse1(self,response):
        # response.text is the full content of the front page
        from scrapy.http.cookies import CookieJar
        cookie_jar = CookieJar() # object that will hold the cookies
        cookie_jar.extract_cookies(response, response.request) # pull the cookies out of the response

        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value
        post_dict = {
            'phone': '8615131255089',
            'password': 'woshiniba',
            'oneMonth': 1,
        }
        import urllib.parse

        # goal: send a POST request to log in
        yield Request(
            url="http://dig.chouti.com/login",
            method='POST',
            cookies=self.cookie_dict,       # passing the CookieJar object also works
            body=urllib.parse.urlencode(post_dict),     # the body data to send
            headers={'Content-Type':'application/x-www-form-urlencoded; charset=UTF-8'},
            callback=self.parse2                        # callback function
        )

    def parse2(self,response):
        print(response.text)        # here you should check the response to see whether the login succeeded; the check is omitted
        # fetch the news list
        yield Request(url='http://dig.chouti.com/',cookies=self.cookie_dict,callback=self.parse3)

    def parse3(self,response):

        # find the divs with class=part2 and read their share-linkid attribute to get the article ids
        hxs = Selector(response)
        link_id_list = hxs.xpath('//div[@class="part2"]/@share-linkid').extract()       # all article ids on the current page
        print(link_id_list)
        for link_id in link_id_list:
            # upvote each id
            base_url = "http://dig.chouti.com/link/vote?linksId=%s" %(link_id,)
            yield Request(url=base_url,method="POST",cookies=self.cookie_dict,callback=self.parse4)


        #################### the code above only upvotes the articles on the front page ####################
        
        
        ####################### upvote the articles on every page ####################### 
        
        page_list = hxs.xpath('//div[@id="dig_lcpage"]//a/@href').extract()     # all page-number links
        for page in page_list:
            #page : /all/hot/recent/2
            page_url = "http://dig.chouti.com%s" %(page,)
            yield Request(url=page_url,method='GET',callback=self.parse3)       # keep upvoting across the pages

    def parse4(self, response):
        print(response.text)
Log in to Chouti and upvote automatically

 

 

(2) The following uses scraping article titles and URLs from Jandan (jandan.net) as the example for understanding persistence.

3. Persistence

3.1 Scraping Jandan article titles and URLs: full code with detailed notes on persistence

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request
from scrapy.selector import Selector

class JianDanSpider(scrapy.Spider):
    name = 'jiandan'
    allowed_domains = ['jandan.net']
    start_urls = ['http://jandan.net/']

    def start_requests(self):
        for url in self.start_urls:
            yield Request(url, dont_filter=True,callback=self.parse1)
    def parse1(self,response):
        # response.text is the full content of the front page
        hxs = Selector(response)
        a_list = hxs.xpath('//div[@class="indexs"]/h2')
        for tag in a_list:
            url = tag.xpath('./a/@href').extract_first()
            text = tag.xpath('./a/text()').extract_first()
            from ..items import Sp2Item
            yield Sp2Item(url=url,text=text)        # yielding this special object hands it straight to the pipeline; no persistence happens here, the work is delegated to the pipeline
        # the code above collects the text and url of the front-page articles
        # collect the page-number links [url,url]
        """
        for url in url_list:
            yield Request(url=url,callback=self.parse1)
        """
jiandan.py
import scrapy

class Sp2Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = scrapy.Field()
    text = scrapy.Field()
items.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


class Sp2Pipeline(object):
    def __init__(self):
        self.f = None

    def process_item(self, item, spider):
        """

        :param item:  the object yielded back from the spider
        :param spider: the spider object, e.g. obj = JianDanSpider()
        :return:
        """
        if spider.name == 'jiandan':
            pass
        print(item)
        self.f.write('....')
        # return item would pass the item on to the next pipeline's process_item
        # return item
        # from scrapy.exceptions import DropItem
        # raise DropItem()  the next pipeline's process_item will no longer run

    @classmethod
    def from_crawler(cls, crawler):
        """
        called at start-up to create the pipeline object
        :param crawler:
        :return:
        """
        # val = crawler.settings.get('MMMM')
        print('from_crawler of the pipeline runs and instantiates the object')
        return cls()

    def open_spider(self,spider):
        """
        called when the spider starts running
        :param spider:
        :return:
        """
        print('spider opened')
        self.f = open('a.log','a+')

    def close_spider(self,spider):
        """
        called when the spider closes
        :param spider:
        :return:
        """
        self.f.close()
pipelines.py
ITEM_PIPELINES = {
           'sp2.pipelines.Sp2Pipeline': 300,        # 300 is the priority
        }
settings.py

3.2 Summary

① Prerequisites for a pipeline to run

- the spider must yield Item objects
- the pipeline must be registered in settings
	ITEM_PIPELINES = {
	   'sp2.pipelines.Sp2Pipeline': 300,		# 300 is the priority; the smaller the number, the earlier it runs
	   'sp2.pipelines.Sp3Pipeline': 100,
	}

② Writing a pipeline

class Sp2Pipeline(object):
    def __init__(self):
        self.f = None

    def process_item(self, item, spider):
        """

        :param item:  the object yielded back from the spider
        :param spider: the spider object, e.g. obj = JianDanSpider()
        :return:
        """
        print(item)
        self.f.write('....')
        return item
        # from scrapy.exceptions import DropItem
        # raise DropItem()  the next pipeline's process_item will no longer run

    @classmethod
    def from_crawler(cls, crawler):
        """
        called at start-up to create the pipeline object
        :param crawler:
        :return:
        """
        # val = crawler.settings.get('MMMM')
        print('from_crawler of the pipeline runs and instantiates the object')
        return cls()

    def open_spider(self,spider):
        """
        called when the spider starts running
        :param spider:
        :return:
        """
        print('spider opened')
        self.f = open('a.log','a+')

    def close_spider(self,spider):
        """
        called when the spider closes
        :param spider:
        :return:
        """
        self.f.close()
When both Sp2Pipeline and Sp3Pipeline are registered, the higher-priority pipeline's __init__, from_crawler, and open_spider run first, but the crawl does not start at that point.
Only after the lower-priority pipeline has also finished its __init__, from_crawler, and open_spider does the actual crawling begin.

Pipelines are global: every spider goes through them. To special-case an individual spider, check spider.name.
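
A minimal sketch of that per-spider branching inside process_item. The spider name follows the jiandan example above; the pipeline name and file handling are purely illustrative:

from scrapy.exceptions import DropItem

class PerSpiderPipeline(object):
    def process_item(self, item, spider):
        if spider.name == 'jiandan':               # only persist items from the jiandan spider
            with open('jiandan.log', 'a+') as f:
                f.write(str(item) + '\n')
            return item                            # hand the item to the next pipeline
        raise DropItem()                           # ignore items from every other spider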

③ Methods you can define in pipelines.py, and the order in which they run

# class CustomPipeline(object):
#     def __init__(self,val):
#         self.val = val
#
#     def process_item(self, item, spider):
#         # process the item and persist it
#
#         # return means later pipelines will continue processing the item
#         return item
#
#         # discard the item so later pipelines never see it
#         # raise DropItem()
#
#     @classmethod
#     def from_crawler(cls, crawler):
#         """
#         called at start-up to create the pipeline object
#         :param crawler:
#         :return:
#         """
#         val = crawler.settings.get('MMMM')
#         return cls(val)
#
#     def open_spider(self,spider):
#         """
#         called when the spider starts running
#         :param spider:
#         :return:
#         """
#         print('000000')
#
#     def close_spider(self,spider):
#         """
#         called when the spider closes
#         :param spider:
#         :return:
#         """
#         print('111111')

"""
檢測 CustomPipeline類中是否有 from_crawler方法
如果有:
       obj = 類.from_crawler()
如果沒有:
       obj = 類()
obj.open_spider()

while True:
    爬蟲運行,並且執行parse各種各樣的爬蟲方法,yield item
    obj.process_item()

obj.close_spider()    

"""

This concludes the example-driven walkthrough.

 

 

4. Custom deduplication rules

4.1 Configuring it in settings

By default Scrapy deduplicates with scrapy.dupefilter.RFPDupeFilter (scrapy.dupefilters.RFPDupeFilter in newer versions). The related default settings are:
	DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
	DUPEFILTER_DEBUG = False
	JOBDIR = "path for the log of seen requests, e.g. /root/"  # the final path becomes /root/requests.seen

4.2 Custom URL deduplication

class RepeatUrl:
    def __init__(self):
        self.visited_url = set() # kept in the memory of the current process

    @classmethod
    def from_settings(cls, settings):
        """
        called at start-up
        :param settings:
        :return:
        """
        return cls()

    def request_seen(self, request):
        """
        check whether the current request has already been seen
        :param request:
        :return: True means it has been seen; False means it has not
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        called when crawling starts
        :return:
        """
        print('open replication')

    def close(self, reason):
        """
        called when the crawl finishes
        :param reason:
        :return:
        """
        print('close replication')

    def log(self, request, spider):
        """
        log a duplicate request
        :param request:
        :param spider:
        :return:
        """
        print('repeat', request.url)
rep.py
DUPEFILTER_CLASS = 'sp2.rep.RepeatUrl'
settings.py

 

 

5. Custom extensions (signal-based)

from scrapy import signals

class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        # register for the spider_opened signal in scrapy
        crawler.signals.connect(ext.opened, signal=signals.spider_opened)        # ext.opened is the function executed when the signal fires
                
        # register for the spider_closed signal in scrapy
        crawler.signals.connect(ext.closed, signal=signals.spider_closed)
        
        return ext

    def opened(self, spider):
        print('open')

    def closed(self, spider):
        print('close')
extends.py
EXTENSIONS = {
   # 'scrapy.extensions.telnet.TelnetConsole': None,
}
settings.py registration
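
Assuming the MyExtension class above is saved as extends.py inside the project package (the package name sp2 below is only an example), registering it would look like:

EXTENSIONS = {
   'sp2.extends.MyExtension': 500,        # 500 is the priority, as with pipelines
}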

 

6. Middlewares

6.1 Spider middleware

SPIDER_MIDDLEWARES = {
   'sp3.middlewares.Sp3SpiderMiddleware': 543,
}
settings.py registration
class Sp3SpiderMiddleware(object):

    def process_spider_input(self,response, spider):
        """
        called after the download finishes; the response is then handed to parse
        :param response: 
        :param spider: 
        :return: 
        """
        pass

    def process_spider_output(self,response, result, spider):
        """
        called when the spider has finished processing and returns its results
        :param response:
        :param result:
        :param spider:
        :return: must return an iterable containing Request or Item objects
        """
        return result

    def process_spider_exception(self,response, exception, spider):
        """
        called when an exception is raised
        :param response:
        :param exception:
        :param spider:
        :return: None to let later middlewares keep handling the exception; an iterable containing Response or Item objects to hand them to the scheduler or the pipeline
        """
        return None


    def process_start_requests(self,start_requests, spider):
        """
        called when the spider starts
        :param start_requests:
        :param spider:
        :return: an iterable containing Request objects
        """
        return start_requests
middlewares.py

6.2 Downloader middleware

DOWNLOADER_MIDDLEWARES = {
   'sp3.middlewares.DownMiddleware1': 543,
}
settings.py registration
class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        when a request is about to be downloaded, every downloader middleware's process_request is called
        :param request: 
        :param spider: 
        :return:  
            None: later middlewares continue and the request is downloaded
            a Response object: process_request stops and process_response starts executing
            a Request object: the middleware chain stops and the Request goes back to the scheduler
            raise IgnoreRequest: process_request stops and process_exception starts executing
        """
        
        
        """
        from scrapy.http import Request
        # print(request)
        # request.method = "POST"
        request.meta['proxy'] = "http://111.11.228.75:80"        # proxies are set via request.meta, not headers
        return None
        """
        
        
        """
        from scrapy.http import Response
        import requests
        v = requests.get('http://www.baidu.com')
        data = Response(url='xxxxxxxx',body=v.content,request=request)
        return data
         """
        
        
        pass



    def process_response(self, request, response, spider):
        """
        called when the downloader returns a response
        :param response:
        :param result:
        :param spider:
        :return: 
            a Response object: passed on to the other middlewares' process_response
            a Request object: the middleware chain stops and the request is rescheduled for download
            raise IgnoreRequest: Request.errback is called
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        called when the download handler or a downloader middleware's process_request() raises an exception
        :param response:
        :param exception:
        :param spider:
        :return: 
            None: later middlewares continue handling the exception
            a Response object: stops the remaining process_exception methods
            a Request object: the middleware chain stops and the request is rescheduled for download
        """
        return None
middlewares.py

 

 

7. Custom commands (also the entry point for reading the source of scrapy crawl baidu)

Create a directory (any name, e.g. commands) at the same level as spiders.
Inside it create crawlall.py (the file name becomes the command name).
from scrapy.commands import ScrapyCommand

class Command(ScrapyCommand):

    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        # list of spiders
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            print(name)
            # initialise the crawler for this spider
            self.crawler_process.crawl(name, **opts.__dict__)
        # start all the crawlers
        self.crawler_process.start()
crawlall.py
Add to settings.py: COMMANDS_MODULE = '<project name>.<directory name>'
Then run the command from the project directory: scrapy crawlall 
		
This adds a new command: scrapy crawlall		
scrapy crawlall	--nolog	 	#---> xxx
scrapy genspider ooo ooo.com
scrapy crawlall	--nolog	 	
'''
	---> xxx
		 ooo
'''
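
For example, if the project package is sp2 and the directory created above is named commands (both names are just the ones used for illustration in this article), the settings entry would be:

COMMANDS_MODULE = 'sp2.commands'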

 

 

 

8. Miscellaneous (the Scrapy settings file)

# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot (crawler) name
BOT_NAME = 'step8_king'    


# 2. Paths to the spider modules
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header                
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'                # the user-agent identifying the client


# Obey robots.txt rules
# 4. robots.txt handling
# ROBOTSTXT_OBEY = False            # whether to obey the site's robots.txt


# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Maximum concurrent requests
# CONCURRENT_REQUESTS = 4


# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay, in seconds
# DOWNLOAD_DELAY = 2


# The download delay setting will honor only one of:        # setting per-domain or per-IP concurrency overrides item 5 above
# 7. Concurrency per domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrency per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the download delay is applied per IP
# CONCURRENT_REQUESTS_PER_IP = 3


# Disable cookies (enabled by default)
# 8. Whether cookies are enabled (cookies are handled via a cookiejar)
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True


# Disable Telnet Console (enabled by default)
# 9. The Telnet console lets you inspect and control the running crawler            # useful for monitoring your spider
#    connect with telnet ip port, then drive it with commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]


# 10. Default request headers applied to all requests; they have lower priority than headers set in the spider's own .py file
# Override the default request headers:    
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }


# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Register the pipelines that process items
# ITEM_PIPELINES = {
#    'step8_king.pipelines.JsonPipeline': 700,
#    'step8_king.pipelines.FilePipeline': 500,
# }



# 12. Custom extensions, invoked via signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }


# 13. Maximum crawl depth allowed; the current depth is available via meta; 0 means unlimited
# DEPTH_LIMIT = 3


# 14. Crawl order: 0 means depth-first (LIFO, the default); 1 means breadth-first (FIFO)

# last in, first out: depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# first in, first out: breadth-first

# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'


# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'        # the framework's default scheduler, used together with the queues in item 14
# from scrapy.core.scheduler import Scheduler


# 16. URL deduplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'


# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html

"""
17. 自動限速算法
    from scrapy.contrib.throttle import AutoThrottle
    自動限速設置
    1. 獲取最小延遲 DOWNLOAD_DELAY
    2. 獲取最大延遲 AUTOTHROTTLE_MAX_DELAY
    3. 設置初始下載延遲 AUTOTHROTTLE_START_DELAY
    4. 當請求下載完成后,獲取其"連接"時間 latency,即:請求連接到接受到響應頭之間的時間
    5. 用於計算的... AUTOTHROTTLE_TARGET_CONCURRENCY
    target_delay = latency / self.target_concurrency
    new_delay = (slot.delay + target_delay) / 2.0 # 表示上一次的延遲時間
    new_delay = max(target_delay, new_delay)
    new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
    slot.delay = new_delay
"""

# enable auto-throttling
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# maximum download delay
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# target average number of requests sent in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Enable showing throttling stats for every response received:
# whether to show throttling stats for every response
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings


"""
18. 啟用緩存
    目的用於將已經發送的請求或相應緩存下來,以便以后使用
    
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# whether to enable the cache
# HTTPCACHE_ENABLED = True

# cache policy: cache every request; the next identical request is served straight from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# cache policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# cache expiration time in seconds
# HTTPCACHE_EXPIRATION_SECS = 0

# directory where the cache is stored
# HTTPCACHE_DIR = 'httpcache'

# HTTP status codes not to cache
# HTTPCACHE_IGNORE_HTTP_CODES = []

# storage backend for the cache
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


"""
19. 代理,需要在環境變量中設置
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware
    
    方式一:使用默認
        os.environ
        {
            http_proxy:http://root:woshiniba@192.168.11.11:9999/
            https_proxy:http://192.168.11.11:9999/
        }
    方式二:使用自定義下載中間件
    
    def to_bytes(text, encoding=None, errors='strict'):
        if isinstance(text, bytes):
            return text
        if not isinstance(text, six.string_types):
            raise TypeError('to_bytes must receive a unicode, str or bytes '
                            'object, got %s' % type(text).__name__)
        if encoding is None:
            encoding = 'utf-8'
        return text.encode(encoding, errors)
        
    class ProxyMiddleware(object):
        def process_request(self, request, spider):
            PROXIES = [
                {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
            ]
            proxy = random.choice(PROXIES)
            if proxy['user_pass'] is not None:
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass']))
                request.headers['Proxy-Authorization'] = to_bytes('Basic ') + encoded_user_pass
                print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
            else:
                print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
    
    DOWNLOADER_MIDDLEWARES = {
       'step8_king.middlewares.ProxyMiddleware': 500,
    }
    
"""



"""
20. Https訪問
    Https訪問時有兩種情況:
    1. 要爬取網站使用的可信任證書(默認支持)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
        
    2. 要爬取網站使用的自定義證書
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"
        
        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)
        
        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/xyp/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/xyp/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,  # the pKey object
                    certificate=v2,  # the X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )
    Other:
        related classes
            scrapy.core.downloader.handlers.http.HttpDownloadHandler
            scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
            scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
        related settings
            DOWNLOADER_HTTPCLIENTFACTORY
            DOWNLOADER_CLIENTCONTEXTFACTORY

"""



"""
21. Spider middleware
    class SpiderMiddleware(object):

        def process_spider_input(self,response, spider):
            '''
            called after the download finishes; the response is then handed to parse
            :param response: 
            :param spider: 
            :return: 
            '''
            pass
    
        def process_spider_output(self,response, result, spider):
            '''
            called when the spider has finished processing and returns its results
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable containing Request or Item objects
            '''
            return result
    
        def process_spider_exception(self,response, exception, spider):
            '''
            called when an exception is raised
            :param response:
            :param exception:
            :param spider:
            :return: None to let later middlewares keep handling the exception; an iterable containing Response or Item objects to hand them to the scheduler or the pipeline
            '''
            return None
    
    
        def process_start_requests(self,start_requests, spider):
            '''
            called when the spider starts
            :param start_requests:
            :param spider:
            :return: an iterable containing Request objects
            '''
            return start_requests
    
    built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,

"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
   # 'step8_king.middlewares.SpiderMiddleware': 543,
}


"""
22. Downloader middleware
    class DownMiddleware1(object):
        def process_request(self, request, spider):
            '''
            when a request is about to be downloaded, every downloader middleware's process_request is called
            :param request:
            :param spider:
            :return:
                None: later middlewares continue and the request is downloaded
                a Response object: process_request stops and process_response starts executing
                a Request object: the middleware chain stops and the Request goes back to the scheduler
                raise IgnoreRequest: process_request stops and process_exception starts executing
            '''
            pass
    
    
    
        def process_response(self, request, response, spider):
            '''
            called when the downloader returns a response
            :param response:
            :param result:
            :param spider:
            :return:
                a Response object: passed on to the other middlewares' process_response
                a Request object: the middleware chain stops and the request is rescheduled for download
                raise IgnoreRequest: Request.errback is called
            '''
            print('response1')
            return response
    
        def process_exception(self, request, exception, spider):
            '''
            called when the download handler or a downloader middleware's process_request() raises an exception
            :param response:
            :param exception:
            :param spider:
            :return:
                None: later middlewares continue handling the exception
                a Response object: stops the remaining process_exception methods
                a Request object: the middleware chain stops and the request is rescheduled for download
            '''
            return None

    
    default downloader middlewares
    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }

"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'step8_king.middlewares.DownMiddleware1': 100,
#    'step8_king.middlewares.DownMiddleware2': 500,
# }
settings.py

 

[The original post showed a screenshot of the resulting project directory structure here.]

IV. Writing Your Own TinyScrapy Framework

1. Using Twisted

# concurrency within a single thread

from twisted.web.client import getPage, defer
from twisted.internet import reactor


def all_done(arg):                            # once every crawl has finished, stop the event loop
    reactor.stop()

def callback(contents):                        # runs automatically when each crawl gets its result
    print(contents)


deferred_list = []

url_list = ['http://www.bing.com', 'http://www.baidu.com', ]
for url in url_list:
    deferred = getPage(bytes(url, encoding='utf8'))
    deferred.addCallback(callback)
    deferred_list.append(deferred)

dlist = defer.DeferredList(deferred_list)
dlist.addBoth(all_done)

reactor.run()                                # the event loop
s1.py: basic usage
from twisted.web.client import getPage, defer
from twisted.internet import reactor

def all_done(arg):
    reactor.stop()

def onedone(response):
    print(response)


@defer.inlineCallbacks
def task(url):
    deferred = getPage(bytes(url, encoding='utf8'))
    deferred.addCallback(onedone)
    yield deferred


deferred_list = []

url_list = ['http://www.bing.com', 'http://www.baidu.com', ]
for url in url_list:
    deferred = task(url)        # the two commented-out lines below were moved into the task function
    # deferred = getPage(url)
    # deferred.addCallback(onedone)
    deferred_list.append(deferred)

dlist = defer.DeferredList(deferred_list)
dlist.addBoth(all_done)

reactor.run()
s2.py: decorator-based (part 1)
from twisted.web.client import getPage, defer
from twisted.internet import reactor

def all_done(arg):
    reactor.stop()


def onedone(response):
    print(response)


@defer.inlineCallbacks
def task():                                                                    # after the first yield in task completes, the second yield runs as well
    deferred2 = getPage(bytes("http://www.baidu.com", encoding='utf8'))
    deferred2.addCallback(onedone)
    yield deferred2                                                            


    deferred1 = getPage(bytes("http://www.google.com", encoding='utf8'))
    deferred1.addCallback(onedone)
    yield deferred1


ret = task()
ret.addBoth(all_done)        # all_done runs once both yields in task have completed

reactor.run()
s3.py: decorator-based (part 2)
from twisted.web.client import getPage, defer
from twisted.internet import reactor

def all_done(arg):
    reactor.stop()


def onedone(response):
    print(response)


@defer.inlineCallbacks
def task():
    deferred2 = getPage(bytes("http://www.baidu.com", encoding='utf8'))
    deferred2.addCallback(onedone)                    
    yield deferred2

    stop_deferred = defer.Deferred()        # an empty Deferred that never fires, so this hangs forever
    # stop_deferred.callback(None)            # firing the callback manually would end it
    yield stop_deferred


ret = task()
ret.addBoth(all_done)

reactor.run()                # the event loop keeps running and never terminates
s4.py: decorator-based, loops forever
# reactor.callLater(0) # finishing the current Deferred also ends the event loop

from twisted.web.client import getPage, defer
from twisted.internet import reactor

running_list = []
stop_deferred = None

def all_done(arg):
    reactor.stop()

def onedone(response,url):
    print(response)
    running_list.remove(url)

def check_empty(response):
    if not running_list:
        stop_deferred.callback(None)

@defer.inlineCallbacks
def open_spider(url):
    deferred2 = getPage(bytes(url, encoding='utf8'))
    deferred2.addCallback(onedone, url)
    deferred2.addCallback(check_empty)
    yield deferred2

@defer.inlineCallbacks
def stop(url):
    global stop_deferred
    stop_deferred = defer.Deferred()
    yield stop_deferred

@defer.inlineCallbacks
def task(url):
    yield open_spider(url)
    yield stop(url)


running_list.append("http://www.baidu.com")
ret = task("http://www.baidu.com")
ret.addBoth(all_done)

reactor.run()
s5.py: decorator-based, stops the event loop when the work is done
from twisted.web.client import getPage, defer
from twisted.internet import reactor

class ExecutionEngine(object):
    def __init__(self):
        self.stop_deferred = None
        self.running_list = []

    def onedone(self,response,url):
        print(response)
        self.running_list.remove(url)

    def check_empty(self,response):
        if not self.running_list:
            self.stop_deferred.callback(None)

    @defer.inlineCallbacks
    def open_spider(self,url):
        deferred2 = getPage(bytes(url, encoding='utf8'))
        deferred2.addCallback(self.onedone, url)
        deferred2.addCallback(self.check_empty)
        yield deferred2

    @defer.inlineCallbacks
    def stop(self,url):
        self.stop_deferred = defer.Deferred()
        yield self.stop_deferred

@defer.inlineCallbacks
def task(url):
    engine = ExecutionEngine()
    engine.running_list.append(url)

    yield engine.open_spider(url)
    yield engine.stop(url)

def all_done(arg):
    reactor.stop()

if __name__ == '__main__':

    ret = task("http://www.baidu.com")
    ret.addBoth(all_done)

    reactor.run()
s6.py: decorator-based, stops the event loop when done; s5.py wrapped in a class

2. Simulating Scrapy

#!/usr/bin/env python
# -*- coding:utf-8 -*-
from twisted.web.client import getPage, defer
from twisted.internet import reactor
import queue


class Request(object):
    def __init__(self, url, callback):
        self.url = url
        self.callback = callback


class Scheduler(object):
    def __init__(self, engine):
        self.q = queue.Queue()
        self.engine = engine

    def enqueue_request(self, request):
        self.q.put(request)

    def next_request(self):
        try:
            req = self.q.get(block=False)
        except Exception as e:
            req = None

        return req

    def size(self):
        return self.q.qsize()


class ExecutionEngine(object):
    def __init__(self):
        self._closewait = None
        self.running = True
        self.start_requests = None
        self.scheduler = Scheduler(self)

        self.inprogress = set()

    def check_empty(self, response):
        if not self.running:
            self._closewait.callback('......')

    def _next_request(self):
        while self.start_requests:
            try:
                request = next(self.start_requests)
            except StopIteration:
                self.start_requests = None
            else:
                self.scheduler.enqueue_request(request)

        print(len(self.inprogress), self.scheduler.size())
        while len(self.inprogress) < 5 and self.scheduler.size() > 0:  # maximum concurrency of 5

            request = self.scheduler.next_request()
            if not request:
                break

            self.inprogress.add(request)
            d = getPage(bytes(request.url, encoding='utf-8'))
            d.addBoth(self._handle_downloader_output, request)
            d.addBoth(lambda x, req: self.inprogress.remove(req), request)
            d.addBoth(lambda x: self._next_request())
        
        if len(self.inprogress) == 0 and self.scheduler.size() == 0:
            self._closewait.callback(None)

    def _handle_downloader_output(self, response, request):
        """
        take the downloaded content, run the callback, and enqueue whatever requests the callback yields
        :param response: 
        :param request: 
        :return: 
        """
        import types

        gen = request.callback(response)
        if isinstance(gen, types.GeneratorType):
            for req in gen:
                self.scheduler.enqueue_request(req)

    @defer.inlineCallbacks
    def start(self):
        self._closewait = defer.Deferred()
        yield self._closewait

    @defer.inlineCallbacks
    def open_spider(self, start_requests):
        self.start_requests = start_requests
        yield None
        reactor.callLater(0, self._next_request)


@defer.inlineCallbacks
def crawl(start_requests):
    engine = ExecutionEngine()

    start_requests = iter(start_requests)
    yield engine.open_spider(start_requests)
    yield engine.start()


def _stop_reactor(_=None):
    reactor.stop()


def parse(response):
    for i in range(10):
        yield Request("http://dig.chouti.com/all/hot/recent/%s" % i, callback)

if __name__ == '__main__':
    start_requests = [Request("http://www.baidu.com", parse),Request("http://www.baidu1.com", parse),]


    ret = crawl(start_requests)
    
    ret.addBoth(_stop_reactor)

    reactor.run()
simulated scrapy.py, simplified version
#!/usr/bin/env python
# -*- coding:utf-8 -*-
from twisted.web.client import getPage, defer
from twisted.internet import reactor
import queue


class Response(object):
    def __init__(self, body, request):
        self.body = body
        self.request = request
        self.url = request.url

    @property
    def text(self):
        return self.body.decode('utf-8')


class Request(object):                                    # wraps a url and its callback
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback


class Scheduler(object):                                # the scheduler
    def __init__(self, engine):
        self.q = queue.Queue()
        self.engine = engine

    def enqueue_request(self, request):
        self.q.put(request)

    def next_request(self):
        try:
            req = self.q.get(block=False)
        except Exception as e:
            req = None

        return req

    def size(self):
        return self.q.qsize()


class ExecutionEngine(object):
    def __init__(self):
        self._closewait = None
        self.running = True
        self.start_requests = None
        self.scheduler = Scheduler(self)

        self.inprogress = set()

    def check_empty(self, response):
        if not self.running:
            self._closewait.callback('......')

    def _next_request(self):
        while self.start_requests:
            try:
                request = next(self.start_requests)
            except StopIteration:
                self.start_requests = None
            else:
                self.scheduler.enqueue_request(request)

        while len(self.inprogress) < 5 and self.scheduler.size() > 0:  # maximum concurrency of 5

            request = self.scheduler.next_request()
            if not request:
                break

            self.inprogress.add(request)
            d = getPage(bytes(request.url, encoding='utf-8'))
            d.addBoth(self._handle_downloader_output, request)
            d.addBoth(lambda x, req: self.inprogress.remove(req), request)
            d.addBoth(lambda x: self._next_request())

        if len(self.inprogress) == 0 and self.scheduler.size() == 0:
            self._closewait.callback(None)

    def _handle_downloader_output(self, body, request):
        """
        take the downloaded content, run the callback, and enqueue whatever requests the callback yields
        :param response: 
        :param request: 
        :return: 
        """
        import types

        response = Response(body, request)
        func = request.callback or self.spider.parse
        gen = func(response)
        if isinstance(gen, types.GeneratorType):
            for req in gen:
                self.scheduler.enqueue_request(req)

    @defer.inlineCallbacks
    def start(self):
        self._closewait = defer.Deferred()
        yield self._closewait

    @defer.inlineCallbacks
    def open_spider(self, spider, start_requests):
        self.start_requests = start_requests
        self.spider = spider
        yield None
        reactor.callLater(0, self._next_request)


class Crawler(object):
    def __init__(self, spidercls):
        self.spidercls = spidercls

        self.spider = None
        self.engine = None

    @defer.inlineCallbacks
    def crawl(self):
        self.engine = ExecutionEngine()
        self.spider = self.spidercls()
        start_requests = iter(self.spider.start_requests())
        yield self.engine.open_spider(self.spider, start_requests)
        yield self.engine.start()


class CrawlerProcess(object):
    def __init__(self):
        self._active = set()
        self.crawlers = set()

    def crawl(self, spidercls, *args, **kwargs):
        crawler = Crawler(spidercls)
        self.crawlers.add(crawler)
        
        d = crawler.crawl(*args, **kwargs)
        self._active.add(d)
        return d

    def start(self):
        dl = defer.DeferredList(self._active)
        dl.addBoth(self._stop_reactor)
        reactor.run()

    def _stop_reactor(self, _=None):
        reactor.stop()


class Spider(object):
    def start_requests(self):
        for url in self.start_urls:
            yield Request(url)


class ChoutiSpider(Spider):
    name = "chouti"
    start_urls = [
        'http://dig.chouti.com/',
    ]

    def parse(self, response):
        print(response.text)


class CnblogsSpider(Spider):
    name = "cnblogs"
    start_urls = [
        'http://www.cnblogs.com/',
    ]

    def parse(self, response):
        print(response.text)


if __name__ == '__main__':

    spider_cls_list = [ChoutiSpider, CnblogsSpider]

    crawler_process = CrawlerProcess()
    for spider_cls in spider_cls_list:
        crawler_process.crawl(spider_cls)

    crawler_process.start()
simulated scrapy.py

 

 

 

