Setting a proxy IP, headers, and cookies in the Scrapy framework


【Setting a proxy IP】

According to the latest official Scrapy documentation, there are two ways to configure a proxy for a Scrapy crawler:

I. Configure it through a DownloaderMiddleware
After creating a project with Scrapy's default command scrapy startproject, the project directory has the structure shown below; crawler under spiders is the spider that has already been written:
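Roughly (middlewares.py is only generated by newer Scrapy releases; crawler.py stands for the already-written spider):

WandoujiaCrawler/
    scrapy.cfg
    WandoujiaCrawler/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            crawler.py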
 
The DOWNLOADER_MIDDLEWARES setting in settings.py configures Scrapy's downloader middlewares. This is where we register our own middleware; after the change it looks like this:

DOWNLOADER_MIDDLEWARES = {
    'WandoujiaCrawler.middlewares.ProxyMiddleware': 100,
}

Here WandoujiaCrawler is our project name, and the number after the class path is the middleware's execution order. In the official documentation the built-in proxy middleware (HttpProxyMiddleware) has a default order of 750, and our middleware must run before it, which is why we give it a smaller number such as 100. The middleware in middlewares.py is written as follows (Scrapy generates a middleware template in this file by default; leave it alone and append your class after it):

# -*- coding: utf-8 -*-
class ProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta['proxy'] = "http://proxy.yourproxy:8001"

There are two caveats here:
First, the proxy value must include the http:// prefix, otherwise you get the error "to_bytes must receive a unicode, str or bytes object, got NoneType".
Second, the official documentation says process_request() must return None, a Response object, or a Request object (or raise IgnoreRequest). Simply falling through without a return statement is fine, since that is an implicit return None; returning anything else may cause errors.
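To make the second point concrete, here is a minimal variant of the same middleware (the proxy address is still a placeholder) that only sets the proxy when the request does not already carry one and returns None explicitly:

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # set the proxy only if the request does not already carry one
        if 'proxy' not in request.meta:
            request.meta['proxy'] = "http://proxy.yourproxy:8001"
        # returning None (or not returning at all) tells Scrapy to keep
        # processing the request with the remaining middlewares
        return None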
In addition, if the proxy requires a username and password, add the following lines inside process_request:

# Use the following lines if your proxy requires authentication (requires: import base64)
proxy_user_pass = "USERNAME:PASSWORD"
# set up basic authentication for the proxy; b64encode takes bytes and returns bytes
encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
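Alternatively, w3lib (a library Scrapy itself depends on) ships a helper that builds the same header value. A minimal sketch using it; the class name AuthProxyMiddleware, the proxy address and the credentials are all placeholders:

from w3lib.http import basic_auth_header

class AuthProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta['proxy'] = "http://proxy.yourproxy:8001"
        # basic_auth_header('USERNAME', 'PASSWORD') returns b'Basic <base64>'
        request.headers['Proxy-Authorization'] = basic_auth_header('USERNAME', 'PASSWORD')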


II. Set the proxy field directly in the spider
We can also set the proxy directly in the spider itself; as the code below shows, just pass a meta field when constructing each Request:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse, meta={'proxy': 'http://proxy.yourproxy:8001'})
 
    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('span small::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

----------------------------------------------------------------------------------------------------------------------

Another version of the middleware approach

1. Create a new file "middlewares.py" under the Scrapy project

# Importing base64 library because we'll need it ONLY in case the proxy we are going to use requires authentication
import base64

# Start your middleware class
class ProxyMiddleware(object):
    # overwrite process_request
    def process_request(self, request, spider):
        # Set the location of the proxy
        request.meta['proxy'] = "http://YOUR_PROXY_IP:PORT"

        # Use the following lines if your proxy requires authentication
        proxy_user_pass = "USERNAME:PASSWORD"
        # setup basic authentication for the proxy
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

2. Add the following to the project settings file (./project_name/settings.py):

DOWNLOADER_MIDDLEWARES = {
    # built-in proxy middleware (import path for current Scrapy versions)
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    'project_name.middlewares.ProxyMiddleware': 100,
}

With just these two steps, requests now go through the proxy. Let's test it ^_^

import scrapy

class TestSpider(scrapy.Spider):
    name = "test"
    allowed_domains = ["whatismyip.com"]
    # The following url is subject to change, you can get the last updated one from here:
    # http://www.whatismyip.com/faq/automation.asp
    start_urls = ["http://xujian.info"]

    def parse(self, response):
        # dump the page so you can check which IP the target site saw
        with open('test.html', 'wb') as f:
            f.write(response.body)

 

Add a middlewares.py file in the same directory as settings.py:

import base64

class ProxyMiddleware(object):
    # overwrite process_request
    def process_request(self, request, spider):
        # Set the location of the proxy
        request.meta['proxy'] = "http://YOUR_PROXY_IP:PORT"

        # Use the following lines if your proxy requires authentication
        proxy_user_pass = "USERNAME:PASSWORD"
        # setup basic authentication for the proxy; b64encode expects bytes
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

Many answers online use base64.encodestring to encode proxy_user_pass. That function (a deprecated alias of encodebytes, removed in Python 3.9) inserts a line break every 76 output characters plus a trailing newline, so when the username/password string is long the resulting Proxy-Authorization header gets corrupted and the request fails. base64.b64encode produces a single line, which is why it is the recommended way to encode the credentials.
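A quick standalone sketch that shows the difference (not part of the middleware; the credentials are made up):

import base64

# credentials long enough to exceed one 76-character base64 output line
long_creds = ("a_very_long_username_" * 5 + ":password").encode()

print(base64.encodebytes(long_creds))  # contains b'\n' line breaks -> corrupts the header
print(base64.b64encode(long_creds))    # a single line, safe to put in a header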

Then, in settings.py, enable it by adding the entry 'projectname.middlewares.ProxyMiddleware': 1 to DOWNLOADER_MIDDLEWARES, and you are done.
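In settings.py that looks like the following (projectname being your actual project package):

DOWNLOADER_MIDDLEWARES = {
    'projectname.middlewares.ProxyMiddleware': 1,
}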

【Setting headers and cookies】

Scrapy offers three ways to set headers and cookies:

set the cookies in settings
set the cookies in a middleware
override the start_requests method in the spider file
Here we cover the third one, overriding start_requests, using douban.com as the example (for reference, the settings-based option is sketched just below).
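A minimal sketch of the settings-based option, assuming you paste a User-Agent and a raw Cookie header into settings.py (the values are placeholders; COOKIES_ENABLED = False keeps the cookies middleware from overriding the static Cookie header):

# settings.py
COOKIES_ENABLED = False
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'
DEFAULT_REQUEST_HEADERS = {
    'Cookie': 'key1=value1; key2=value2',
}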

I. Setting the request headers
Add the following inside start_requests:

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'
}
II. Setting the cookies
1. Log in to douban.com and grab the cookie

First log in to douban.com in Chrome
Press F12 to open the developer tools
Inspect the cookie


2. Add the following in start_requests:

cookies = {
    'key1': 'value1',
    'key2': 'value2',
    'key3': 'value3'
}

One line of code converts a cookie string into a dict:

# convert the raw cookie string into a dict so Scrapy can use it
cookie_str = "_ga=GA1.2.1937936278.1538889470; __gads=ID=1ba11c2610acf504:T=1539160131:S=ALNI_MZwhotFaAA6KsIVzHG-ev0RnU4OIQ; .CNBlogsCookie=7F3B19F5204038FAE6287F33828591011A60086D0F4349BEDA5F568571875F43E1EED5EDE24E458FAB8972604B4ECD19FC058F5562321A6D87ABF8AAC19F32EC6C004B2EBA69A29B8532E5464ECD145896AA49F1; .Cnblogs.AspNetCore.Cookies=CfDJ8J0rgDI0eRtJkfTEZKR_e81dD8ABr7voOOlhOqLJ7tzHG0h7wCeF8EYzLUZbtYueLnkUIzSWDE9LuJ-53nA6Lem4htKEIqdoOszI5xWb4PUZHJtM1qgEjI1E1Q8YLz8cU3jts5xoHMzq7qq7AmtrlCYYqvBMgEX8GACn8j61WrxZfKe9Hmh4akC9AxcODmAPP--axDI0w6LTSQYKl4GnKihmxM6DQ3RDCXXzWukG-3xiPfKv5vdSNFBTvj7b2qOeTmy45RWkQT9dqf_bXjniWnhPHRnGq8uNHqN2bpzUlCOxsrjwuZlhbAPPLCnX90XJaA; _gid=GA1.2.201165281.1540104585"
cookies = dict(i.strip().split('=', 1) for i in cookie_str.split(';'))
print(cookies)



3. Modify the request the method yields

yield scrapy.Request(url=url, headers=headers, cookies=cookies, callback=self.parse)
4. Set COOKIES_ENABLED

COOKIES_ENABLED controls Scrapy's cookies middleware and defaults to True (leaving it commented out is the same as True).
When it is set to False, the cookies middleware is disabled: the cookies argument of Request is ignored, and only a Cookie header you set yourself (for example in DEFAULT_REQUEST_HEADERS) is sent.
When it is True, the cookies middleware is active, the cookies argument of Request is honoured, and the middleware manages the Cookie header itself.
Since we pass cookies per request here, set COOKIES_ENABLED = True in settings.py (or simply leave it at its default).

5. Set ROBOTSTXT_OBEY

In settings.py set ROBOTSTXT_OBEY = False so the crawl is not blocked by the site's robots.txt.
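Taken together, the relevant lines in settings.py are:

# settings.py
COOKIES_ENABLED = True
ROBOTSTXT_OBEY = False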

III. Testing
1. Create a new Scrapy project

scrapy startproject douban

2. Create a short_spider.py file under ./douban/spiders/

# -*- coding: utf-8 -*-
import scrapy
 
class ShortSpider(scrapy.Spider):
    name = 'short'
    allowed_domains = ['movie.douban.com']
 
    # override the start_requests method
    def start_requests(self):

        # browser user agent
        headers = {
            'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'
        }
        # the cookies copied from the browser (placeholder keys and values)
        cookies = {
            'key1': 'value1',
            'key2': 'value2',
            'key3': 'value3'
        }
        urls = [
            'https://movie.douban.com/subject/26266893/comments?start=250&limit=20&sort=new_score&status=P'
        ]
        for url in urls:
            yield scrapy.Request(url=url, headers=headers, cookies=cookies, callback=self.parse)
 
    def parse(self, response):
        file_name = 'data.html'
        with open(file_name, 'wb') as f:
            f.write(response.body)
Because Douban restricts movie short comments for users who are not logged in, a page fairly deep in the comment list is used here for the test.

3. Enter the douban directory and run scrapy crawl short

A status code of 200 means the crawl succeeded (403 is returned when you do not have access).





 

