Crawler Performance


Here we work through a series of web-request examples to understand crawler performance step by step.

When we have a list of URLs to fetch data from, the first approach that comes to mind is a plain loop.

Simple serial loop

This approach is the slowest: the URLs are requested one after another, so the total elapsed time is the sum of all the individual request times.
The code is as follows:

import requests

url_list = [
    'http://www.baidu.com',
    'http://www.pythonsite.com',
    'http://www.cnblogs.com/'
]

for url in url_list:
    result = requests.get(url)
    print(result.text)
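
To see the cost concretely, you can time the loop; a minimal sketch (the three URLs are the same placeholders as above, and the exact numbers will of course depend on the network):

import time
import requests

url_list = [
    'http://www.baidu.com',
    'http://www.pythonsite.com',
    'http://www.cnblogs.com/'
]

start = time.perf_counter()
for url in url_list:
    requests.get(url)
# the elapsed time is roughly the sum of the individual request times
print('serial total: %.2fs' % (time.perf_counter() - start))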

Using a thread pool

If we go through a thread pool instead, the requests run concurrently, so (as long as the pool has at least as many threads as there are URLs) the total elapsed time is roughly that of the single slowest request, which is much faster than the serial loop.

import requests
from concurrent.futures import ThreadPoolExecutor

def fetch_request(url):
    result = requests.get(url)
    print(result.text)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]
pool = ThreadPoolExecutor(10)

for url in url_list:
    # grab a thread from the pool and have it run fetch_request
    pool.submit(fetch_request, url)

pool.shutdown(wait=True)
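
If you want the return values rather than side-effect prints, ThreadPoolExecutor also offers map, which yields results in input order; a short sketch under the same assumptions:

import requests
from concurrent.futures import ThreadPoolExecutor

def fetch_request(url):
    return requests.get(url)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]

# the with-block shuts the pool down automatically
with ThreadPoolExecutor(10) as pool:
    # map yields responses in the same order as url_list
    for result in pool.map(fetch_request, url_list):
        print(result.url, len(result.text))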

Thread pool + callback

Here we define a callback function, callback, and attach it to each future:

from concurrent.futures import ThreadPoolExecutor
import requests


def fetch_async(url):
    response = requests.get(url)

    return response


def callback(future):
    print(future.result().text)


url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]

pool = ThreadPoolExecutor(5)

for url in url_list:
    v = pool.submit(fetch_async, url)
    # attach the callback; it fires when the future completes
    v.add_done_callback(callback)

pool.shutdown()
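
Note that with ThreadPoolExecutor the callback fires in the worker thread that ran the task. An alternative is concurrent.futures.as_completed, which lets you handle results back in the main thread, in completion order; a minimal sketch reusing the same fetch_async:

from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

def fetch_async(url):
    return requests.get(url)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]

with ThreadPoolExecutor(5) as pool:
    futures = {pool.submit(fetch_async, url): url for url in url_list}
    # yields each future as soon as it finishes, fastest first
    for future in as_completed(futures):
        print(futures[future], len(future.result().text))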

Using a process pool

Going through a process pool, the total time likewise depends on the slowest request, but processes consume far more resources than threads. Since fetching URLs is I/O-bound work (the workers spend their time waiting on the network rather than computing), the thread pool is the better choice here.

import requests
from concurrent.futures import ProcessPoolExecutor

def fetch_request(url):
    result = requests.get(url)
    print(result.text)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]
# the __main__ guard is required on platforms that spawn workers (Windows/macOS)
if __name__ == '__main__':
    pool = ProcessPoolExecutor(10)

    for url in url_list:
        # grab a child process from the pool and have it run fetch_request
        pool.submit(fetch_request, url)

    pool.shutdown(wait=True)

Process pool + callback

This behaves the same as thread pool + callback, but creating processes wastes more resources than creating threads. Also note that the worker's return value must be picklable to cross the process boundary, and the callback itself runs in the parent process.

from concurrent.futures import ProcessPoolExecutor
import requests


def fetch_async(url):
    response = requests.get(url)

    return response


def callback(future):
    print(future.result().text)


url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]

# the __main__ guard is required on platforms that spawn workers (Windows/macOS)
if __name__ == '__main__':
    pool = ProcessPoolExecutor(5)

    for url in url_list:
        v = pool.submit(fetch_async, url)
        # attach the callback; with a process pool it runs in the parent process
        v.add_done_callback(callback)

    pool.shutdown()
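
Process pools do pay off when the per-URL work is CPU-bound rather than I/O-bound, for example heavy parsing after the download. A minimal sketch, assuming a hypothetical parse function standing in for real CPU-heavy work:

from concurrent.futures import ProcessPoolExecutor

def parse(html):
    # hypothetical CPU-heavy work standing in for real parsing
    return sum(ord(c) for c in html)

if __name__ == '__main__':
    pages = ['<html>a</html>', '<html>bb</html>', '<html>ccc</html>']
    with ProcessPoolExecutor(4) as pool:
        # CPU-bound work sidesteps the GIL by running in separate processes
        for checksum in pool.map(parse, pages):
            print(checksum)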

Mainstream ways to achieve concurrency in a single thread

  1. asyncio
  2. gevent
  3. Twisted
  4. Tornado

Below are code examples for each of the four:

asyncio example 1:

import asyncio


# async def replaces the legacy @asyncio.coroutine decorator (removed in Python 3.11)
async def func1():
    print('before...func1......')
    # must await asyncio.sleep here, not call time.sleep, or the whole loop would block
    await asyncio.sleep(2)
    print('end...func1......')


async def main():
    # gather runs both coroutines concurrently on the event loop
    await asyncio.gather(func1(), func1())


asyncio.run(main())

Running the above prints both "before" lines at the same time, then after a two-second wait prints both "end" lines.
asyncio itself does not ship a ready-made way to send HTTP requests, but we can construct the HTTP request by hand at the await point, as the next example shows.

asyncio example 2:

import asyncio


async def fetch_async(host, url='/'):
    print("----", host, url)
    # open a raw TCP connection to the web server on port 80
    reader, writer = await asyncio.open_connection(host, 80)

    # build the request headers by hand
    request_header_content = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (url, host)
    # send the request
    writer.write(request_header_content.encode('utf-8'))
    await writer.drain()
    # HTTP/1.0: the server closes the connection, so read() drains the whole response
    text = await reader.read()
    print(host, url, text)
    writer.close()


async def main():
    await asyncio.gather(
        fetch_async('www.cnblogs.com', '/zhaof/'),
        fetch_async('dig.chouti.com', '/pic/show?nid=4073644713430508&lid=10273091'),
    )


asyncio.run(main())

asyncio + aiohttp code example:

import aiohttp
import asyncio


async def fetch_async(url):
    print(url)
    # a ClientSession holds the connection pool; one per program is enough
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            print(url, response.status)


async def main():
    await asyncio.gather(
        fetch_async('http://baidu.com/'),
        fetch_async('http://www.chouti.com/'),
    )


asyncio.run(main())
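
In a real crawler you usually want to cap how many requests are in flight at once. A minimal sketch using asyncio.Semaphore to bound concurrency (the limit of 2 is arbitrary):

import aiohttp
import asyncio

semaphore_limit = 2  # arbitrary cap on in-flight requests


async def fetch_bounded(session, semaphore, url):
    # only `semaphore_limit` coroutines can hold the semaphore at once
    async with semaphore:
        async with session.get(url) as response:
            print(url, response.status)


async def main():
    semaphore = asyncio.Semaphore(semaphore_limit)
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(
            fetch_bounded(session, semaphore, 'http://baidu.com/'),
            fetch_bounded(session, semaphore, 'http://www.chouti.com/'),
            fetch_bounded(session, semaphore, 'http://www.cnblogs.com/'),
        )


asyncio.run(main())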

asyncio + requests code example

import asyncio
import requests


async def fetch_async(func, *args):
    loop = asyncio.get_running_loop()
    # requests is blocking, so hand it off to the default thread pool executor
    future = loop.run_in_executor(None, func, *args)
    response = await future
    print(response.url, response.content)


async def main():
    await asyncio.gather(
        fetch_async(requests.get, 'http://www.cnblogs.com/wupeiqi/'),
        fetch_async(requests.get, 'http://dig.chouti.com/pic/show?nid=4073644713430508&lid=10273091'),
    )


asyncio.run(main())
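
On Python 3.9+, asyncio.to_thread is a shorthand for the same run_in_executor pattern; a minimal sketch:

import asyncio
import requests


async def main():
    # to_thread runs each blocking call in the default thread pool
    responses = await asyncio.gather(
        asyncio.to_thread(requests.get, 'http://www.baidu.com'),
        asyncio.to_thread(requests.get, 'http://www.bing.com'),
    )
    for response in responses:
        print(response.url, len(response.content))


asyncio.run(main())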

gevent + requests code example

from gevent import monkey

# patch_all must run before requests (and the socket/ssl modules it uses) are imported
monkey.patch_all()

import gevent
import requests


def fetch_async(method, url, req_kwargs):
    print(method, url, req_kwargs)
    response = requests.request(method=method, url=url, **req_kwargs)
    print(response.url, response.content)

# ##### send the requests #####
gevent.joinall([
    gevent.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='https://www.yahoo.com/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='https://github.com/', req_kwargs={}),
])

# ##### send the requests (a greenlet pool caps the number of concurrent coroutines) #####
# from gevent.pool import Pool
# pool = Pool(None)  # Pool(None) is unbounded; pass an int such as Pool(5) to actually cap it
# gevent.joinall([
#     pool.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='https://www.yahoo.com/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='https://www.github.com/', req_kwargs={}),
# ])

grequests code example
This library wraps requests + gevent into a single package.

import grequests


request_list = [
    grequests.get('http://httpbin.org/delay/1', timeout=0.001),
    grequests.get('http://fakedomain/'),
    grequests.get('http://httpbin.org/status/500')
]


# ##### execute and collect the list of responses #####
# response_list = grequests.map(request_list)
# print(response_list)


# ##### execute and collect the list of responses (handling exceptions) #####
# def exception_handler(request, exception):
#     print(request, exception)
#     print("Request failed")

# response_list = grequests.map(request_list, exception_handler=exception_handler)
# print(response_list)

twisted code example

# getPage plays the role of the requests module; defer wraps results in Deferreds;
# reactor runs the event loop. Note that getPage is deprecated in modern Twisted
# in favour of twisted.web.client.Agent (or the treq package).
from twisted.web.client import getPage, defer
from twisted.internet import reactor

def all_done(arg):
    reactor.stop()

def callback(contents):
    print(contents)

deferred_list = []

url_list = ['http://www.bing.com', 'http://www.baidu.com', ]
for url in url_list:
    deferred = getPage(bytes(url, encoding='utf8'))
    deferred.addCallback(callback)
    deferred_list.append(deferred)
# DeferredList watches all the Deferreds and fires once every request has finished
dlist = defer.DeferredList(deferred_list)
dlist.addBoth(all_done)

reactor.run()
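
Since getPage is deprecated, current Twisted code would use twisted.web.client.Agent instead; a minimal sketch of the same fetch with Agent and readBody:

from twisted.internet import reactor, defer
from twisted.web.client import Agent, readBody

agent = Agent(reactor)


def fetch(url):
    d = agent.request(b'GET', bytes(url, encoding='utf8'))
    # readBody returns a Deferred that fires with the response bytes
    d.addCallback(readBody)
    d.addCallback(print)
    return d


dlist = defer.DeferredList([fetch('http://www.bing.com'), fetch('http://www.baidu.com')])
dlist.addBoth(lambda _: reactor.stop())

reactor.run()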

tornado code example

from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPRequest
from tornado import ioloop

# counter of outstanding requests, used to know when to stop the IO loop
pending = 0


def handle_response(response):
    """
    Handle the response body; once the counter hits zero,
    call ioloop.IOLoop.current().stop() to end the loop.
    """
    global pending
    if response.error:
        print("Error:", response.error)
    else:
        print(response.body)
    pending -= 1
    if pending == 0:
        ioloop.IOLoop.current().stop()


def func():
    global pending
    url_list = [
        'http://www.baidu.com',
        'http://www.bing.com',
    ]
    for url in url_list:
        print(url)
        pending += 1
        http_client = AsyncHTTPClient()
        # note: the callback argument to fetch() was removed in Tornado 6,
        # so this style requires Tornado < 6 (see the coroutine sketch below)
        http_client.fetch(HTTPRequest(url), handle_response)


ioloop.IOLoop.current().add_callback(func)
ioloop.IOLoop.current().start()
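
On Tornado 5+/6 the idiomatic form is a coroutine that awaits fetch, with no manual counter at all; a minimal sketch:

from tornado.httpclient import AsyncHTTPClient
from tornado import ioloop


async def fetch_all():
    http_client = AsyncHTTPClient()
    for url in ['http://www.baidu.com', 'http://www.bing.com']:
        try:
            # fetch() returns a Future; errors are raised as exceptions
            response = await http_client.fetch(url)
            print(url, len(response.body))
        except Exception as e:
            print("Error:", url, e)


# run_sync starts the loop, runs the coroutine to completion, and stops the loop
ioloop.IOLoop.current().run_sync(fetch_all)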

 

