Note:
This article draws on the official documentation and on several Stack Overflow questions.
Overview:
There are two ways to use a proxy in Scrapy:
- via a downloader middleware
- by setting the meta parameter of the Request class directly
Method 1: Using a middleware
This takes two steps:
- activate the proxy middleware ProxyMiddleware in settings.py
- implement the ProxyMiddleware class in middlewares.py
1. In settings.py:
# settings.py
DOWNLOADER_MIDDLEWARES = {
    'project_name.middlewares.ProxyMiddleware': 100,  # replace project_name with your project's name
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,  # note: 'downloadermiddlewares' (plural)
}
Note:
The numbers 100 and 110 set the order in which the middlewares are called: the smaller the number, the earlier the middleware runs.
From the official documentation:
The integer values you assign to classes in this setting determine the order in which they run: items go through from lower valued to higher valued classes. It’s customary to define these numbers in the 0-1000 range.
2. middlewares.py looks something like this:
Rotating the proxy on every request
- Here we fetch a proxy from an online API with a plain GET request. (It needs an APIKEY; registering a free account gets you one. The APIKEY below is my own and is not guaranteed to stay valid!)
- You can also scrape fresh proxies from the web on the fly.
- Or read them from a local file (see the sketch after the middleware code below).
# middlewares.py
import requests

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        APIKEY = 'f95f08afc952c034cc2ff9c5548d51be'
        url = 'https://www.proxicity.io/api/v1/{}/proxy'.format(APIKEY)  # online API endpoint
        r = requests.get(url)
        request.meta['proxy'] = r.json()['curl']  # scheme://IP:port (e.g. http://5.39.85.100:30059)
        # Return None so processing continues down the middleware chain;
        # returning the request itself would reschedule it in an endless loop.
        return None
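For the local-file option, here is a minimal sketch, assuming a file named proxies.txt (a hypothetical name, not from the original article) with one scheme://IP:port entry per line:

# middlewares.py (local-file variant)
import random

class FileProxyMiddleware(object):
    def __init__(self):
        # proxies.txt is a hypothetical local file with one proxy per line,
        # e.g. http://5.39.85.100:30059
        with open('proxies.txt') as f:
            self.proxies = [line.strip() for line in f if line.strip()]

    def process_request(self, request, spider):
        request.meta['proxy'] = random.choice(self.proxies)  # rotate per request
        return None

Activate it in DOWNLOADER_MIDDLEWARES the same way as ProxyMiddleware above.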
Method 2: Setting the meta parameter of the Request class directly
import random
import scrapy

# A pool of proxies prepared in advance
proxy_pool = ['http://proxy_ip1:port', 'http://proxy_ip2:port', ..., 'http://proxy_ipn:port']

class MySpider(scrapy.Spider):  # BaseSpider is long deprecated; use scrapy.Spider
    name = "my_spider"
    allowed_domains = ["example.com"]
    start_urls = [
        'http://www.example.com/articles/',
    ]

    def start_requests(self):
        for url in self.start_urls:
            proxy_addr = random.choice(proxy_pool)  # pick one at random
            # attach the proxy through the meta parameter
            yield scrapy.Request(url, callback=self.parse, meta={'proxy': proxy_addr})

    def parse(self, response):
        pass  # parsing goes here
Further reading
1. Reading the official documentation for the Request class, we can see that besides proxy you can also set method, headers, cookies, encoding, and so on:
class scrapy.http.Request(url[, callback, method='GET', headers, body, cookies, meta, encoding='utf-8', priority=0, dont_filter=False, errback])
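For instance, a minimal sketch (the spider name, URL, and header/cookie values are illustrative assumptions, not from the original article) that sets several of these parameters at once:

import scrapy

class LoginSpider(scrapy.Spider):
    name = "login_spider"  # hypothetical spider, for illustration only

    def start_requests(self):
        yield scrapy.Request(
            'http://www.example.com/login',
            callback=self.parse,
            method='POST',
            headers={'User-Agent': 'my-crawler/1.0'},
            cookies={'lang': 'en'},
            encoding='utf-8',
            dont_filter=True,  # skip the duplicate-request filter
        )

    def parse(self, response):
        pass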
2. The official documentation's full list of keys that can be set through Request.meta:
- dont_redirect
- dont_retry
- handle_httpstatus_list
- handle_httpstatus_all
- dont_merge_cookies (see cookies parameter of Request constructor)
- cookiejar
- dont_cache
- redirect_urls
- bindaddress
- dont_obey_robotstxt
- download_timeout
- download_maxsize
- proxy
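Several of these keys can be combined in a single meta dict; a small sketch (spider name and values are illustrative assumptions):

import scrapy

class MetaDemoSpider(scrapy.Spider):
    name = "meta_demo"  # hypothetical spider, for illustration only
    start_urls = ['http://www.example.com/']

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse, meta={
                'proxy': 'http://proxy_ip:port',  # as in the pools below
                'download_timeout': 10,           # seconds before giving up
                'dont_retry': True,               # fail fast instead of retrying
                'dont_redirect': True,            # do not follow redirects
            })

    def parse(self, response):
        pass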
For example, to pick a random request header and a random proxy for each request:
# my_spider.py
import random
import scrapy

# A pool of proxies collected in advance
proxy_pool = [
    'http://proxy_ip1:port',
    'http://proxy_ip2:port',
    ...,
    'http://proxy_ipn:port'
]
# A pool of headers collected in advance
headers_pool = [
    {'User-Agent': 'Mozilla 1.0'},
    {'User-Agent': 'Mozilla 2.0'},
    {'User-Agent': 'Mozilla 3.0'},
    {'User-Agent': 'Mozilla 4.0'},
    {'User-Agent': 'Chrome 1.0'},
    {'User-Agent': 'Chrome 2.0'},
    {'User-Agent': 'Chrome 3.0'},
    {'User-Agent': 'Chrome 4.0'},
    {'User-Agent': 'IE 1.0'},
    {'User-Agent': 'IE 2.0'},
    {'User-Agent': 'IE 3.0'},
    {'User-Agent': 'IE 4.0'},
]
class MySpider(scrapy.Spider):  # again, scrapy.Spider instead of the deprecated BaseSpider
    name = "my_spider"
    allowed_domains = ["example.com"]
    start_urls = [
        'http://www.example.com/articles/',
    ]

    def start_requests(self):
        for url in self.start_urls:
            headers = random.choice(headers_pool)   # pick a random header
            proxy_addr = random.choice(proxy_pool)  # pick a random proxy
            yield scrapy.Request(url, callback=self.parse, headers=headers, meta={'proxy': proxy_addr})

    def parse(self, response):
        pass  # parsing goes here