Exception handling in callback execution
If the Request itself succeeds but an exception is raised while parsing the response, as shown below:
def parse_details(self, response):
    ...
    item['metres'] = round(float(
        response.xpath('/html/body/section[1]/div/div[3]/ul/li[1]/span[1]/text()').extract_first().rstrip(
            '萬公里')) * 10000000)
    ...
    yield item
response.xpath('/html/body/section[1]/div/div[3]/ul/li[1]/span[1]/text()').extract_first().rstrip(
AttributeError: 'NoneType' object has no attribute 'rstrip'
If the cause is a code bug or a page redesign, simply re-adapt the parser. But if a rate-limiting rule forwarded the request to a throttling page, the exception has to be caught and remedied. The road to a solution went like this:
1. process_exception in DOWNLOADER_MIDDLEWARES
The intent was to switch proxies after a failed request, but it never fired: process_exception handles Request-level failures such as timeouts, refused connections, and missing responses, whereas the error above was raised while parsing a successfully downloaded response. A misunderstanding, and a dead end.
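For contrast, a minimal sketch of the kind of failure process_exception is meant for; the is_new_proxy meta flag anticipates the convention used in the snippets below:

from twisted.internet.error import ConnectionRefusedError, TimeoutError


class RetryOnErrorMiddleware:
    def process_exception(self, request, exception, spider):
        # fires only on download-level failures (timeout, refused connection,
        # no response, ...), never on exceptions raised while parsing a
        # successfully downloaded response
        if isinstance(exception, (ConnectionRefusedError, TimeoutError)):
            request.meta['is_new_proxy'] = True  # ask process_request for a fresh proxy
            return request.replace(dont_filter=True)  # reschedule the download
        return None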
2. Catch the exception yourself and retry with a new proxy
try:
    ...
    item['metres'] = round(float(
        response.xpath('/html/body/section[1]/div/div[3]/ul/li[1]/span[1]/text()').extract_first().rstrip(
            '萬公里')) * 10000000)
    ...
except Exception as reason:
    retry_times = response.meta.get('retry_times', 0)
    if retry_times < 3:
        yield scrapy.Request(url=xxx, meta={'url': xxx, 'is_new_proxy': True, 'retry_times': retry_times + 1},
                             callback=self.parse, dont_filter=True)
The following keys need to be set in meta:
- url: after throttling the request may be redirected, so response.request.url can end up pointing at the redirect target rather than the original address
- is_new_proxy: declares that a fresh proxy is needed; read in process_request of DOWNLOADER_MIDDLEWARES as an input when picking a proxy
- retry_times: prevents infinite retries
Note: dont_filter=True must be set, otherwise the duplicate URL would be filtered out
3. Use process_spider_exception in SPIDER_MIDDLEWARES
process_spider_exception(self, response, exception, spider) catches exceptions raised in the callback. This is the place to add exception-handling strategies such as email alerts or SMS notifications, and it can be combined with catching exceptions yourself.
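A minimal sketch of such a hook, with logging standing in for whatever alert channel gets wired up:

class AlertSpiderMiddleware:
    def process_spider_exception(self, response, exception, spider):
        # alerting (email, SMS, ...) would go here; logging is a stand-in
        spider.logger.error('callback failed for %s: %r', response.url, exception)
        return None  # None lets the exception continue through other handlers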
scrapy.Request not taking effect
- dont_filter=True was not set on the scrapy.Request, so duplicate URLs get filtered automatically (watch out for this especially when returning a request for retry from an exception handler or from SPIDER_MIDDLEWARES)
- the url is not in allowed_domains
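A hypothetical spider to illustrate the second point; the off-domain request is dropped silently by the offsite middleware unless dont_filter=True is set:

import scrapy


class XxxSpider(scrapy.Spider):
    name = 'xxx'
    allowed_domains = ['xxx.com']

    def parse(self, response):
        # dropped: host is not in allowed_domains (and no dont_filter=True)
        yield scrapy.Request('https://www.other-site.com/page', callback=self.parse)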
Anti-crawling countermeasures
- Tune Scrapy settings in two directions: limit concurrency and simulate pauses (see the custom_settings sketch after the middleware code below)
- Rotate proxy IPs and the User-Agent, set up in DOWNLOADER_MIDDLEWARES as follows:
import random
import time

from scrapy import signals

# init_proxy, refresh_and_get_one_proxy, proxy_http_list and ProxyError are
# the author's proxy-pool helpers (not shown here)


class RandomDelayProxyMiddleware:
    # spiders that need a proxy (names elided in the original)
    SPIDERS_USE_PROXY = ['xxx']

    def __init__(self, delay, user_agent_list):
        self.delay = delay
        self.user_agent_list = user_agent_list

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create the middleware.
        # RANDOM_DELAY and USER_AGENT_LIST come from the spider's custom_settings
        delay = crawler.spider.settings.get("RANDOM_DELAY", 0)
        user_agent_list = crawler.spider.settings.get("USER_AGENT_LIST", [])
        if not isinstance(delay, int):
            raise ValueError("RANDOM_DELAY needs an int")
        # spiders that use a proxy must initialize the proxy pool first
        if crawler.spider.name in cls.SPIDERS_USE_PROXY:
            init_proxy("init_proxy")
        s = cls(delay, user_agent_list)
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # random pause
        if self.delay > 0:
            delay = random.randint(0, self.delay)
            time.sleep(delay)
        # random User-Agent
        if len(self.user_agent_list) > 0:
            request.headers['User-Agent'] = random.choice(self.user_agent_list)
        if spider.name not in self.SPIDERS_USE_PROXY:
            return None
        # attach a proxy
        try:
            request.meta['change_proxy_times'] = request.meta.get('change_proxy_times', 0)
            # build the proxy info
            build_one_proxy(request, spider.name)
        except ProxyError:
            pass
        return None


def build_one_proxy(request, app):
    # whether to fetch a brand-new proxy instead of picking from the pool
    is_new_proxy = request.meta.get('is_new_proxy', False)
    # how many times the proxy has already been changed
    change_proxy_times = request.meta.get('change_proxy_times', 999)
    # two chances to re-pick from the proxy pool, one chance to fetch a new proxy
    if is_new_proxy or change_proxy_times == 3:
        # fetch a new proxy and add it to the pool
        new_proxy = refresh_and_get_one_proxy(app)
        proxy_http_list.append(new_proxy)
        proxy_http = new_proxy
    elif change_proxy_times <= 2:
        proxy_http = random.choice(proxy_http_list)
    else:
        return None
    request.meta['proxy'] = proxy_http
    request.meta['change_proxy_times'] += 1
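Both directions from the list above can be configured in the spider's custom_settings; a sketch with illustrative values (RANDOM_DELAY and USER_AGENT_LIST feed the from_crawler above, the CONCURRENT_* keys are Scrapy built-ins):

import scrapy


class XxxSpider(scrapy.Spider):
    name = 'xxx'
    custom_settings = {
        # direction 1: limit concurrency
        'CONCURRENT_REQUESTS': 4,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
        # direction 2: simulate pauses, read by the middleware above
        'RANDOM_DELAY': 3,
        # rotated per request by process_request
        'USER_AGENT_LIST': [
            'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.75 Safari/537.36',
        ],
    }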
Custom Cookie not taking effect
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Connection': 'keep-alive',
    'Cookie': 'antipas=' + str(xxx),
    'Host': 'www.guazi.com',
    'Referer': 'https://www.xxx.com/',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.75 Safari/537.36',
}

yield scrapy.Request(url=xxx, callback=self.parse, headers=self.headers)
The Cookie actually sent was not the value we set; the request came back with a 203 and the wrong content, which derailed everything that followed. The cause is that Scrapy's built-in cookie middleware manages cookies itself and overrides a manually set Cookie header. Solution:
Set COOKIES_ENABLED to False.
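A minimal sketch, assuming the setting is scoped to one spider via custom_settings (it can equally go in settings.py):

import scrapy


class XxxSpider(scrapy.Spider):
    name = 'xxx'
    # disable the built-in cookie middleware so the Cookie header is sent verbatim
    custom_settings = {'COOKIES_ENABLED': False}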
Stuck in a "Gave up retrying" infinite loop
...
2021-04-07 09:37:04 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://bd.xxx.com/xxx> (failed 4 times): Connection was refused by other side: 111: Connection refused
...
The error message shows that the same URL was retried more than 3 times and then abandoned. RETRY_TIMES=3 was configured, which matches that maximum, so why did it retry without end?
The reason: after 3 retries the connection was still refused, the failure was caught by process_spider_exception in SPIDER_MIDDLEWARES, and the handler there switched proxies and returned a new request. That new request again exceeded the maximum retry count, and so on: an infinite loop...
Note:
- With the retry mechanism enabled, Scrapy retries automatically first; only after those retries are exhausted does the failure reach process_spider_exception in SPIDER_MIDDLEWARES
- RETRY_HTTP_CODES can be used to change which failed responses get retried
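One way to break the loop is to cap the reschedules inside the exception handler itself, reusing the retry_times convention from step 2 above (the cap of 3 mirrors RETRY_TIMES and is an assumption):

import scrapy


class RetryGuardSpiderMiddleware:
    MAX_RETRIES = 3  # assumption: keep in line with RETRY_TIMES

    def process_spider_exception(self, response, exception, spider):
        retry_times = response.meta.get('retry_times', 0)
        if retry_times >= self.MAX_RETRIES:
            # give up for real: alert/log instead of rescheduling again
            spider.logger.error('giving up %s after %d retries: %r',
                                response.url, retry_times, exception)
            return []  # emit nothing, loop broken
        url = response.meta.get('url', response.request.url)
        return [scrapy.Request(url=url,
                               meta={'url': url, 'is_new_proxy': True,
                                     'retry_times': retry_times + 1},
                               callback=spider.parse,
                               dont_filter=True)]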
