In earlier posts I used Scrapy to crawl my own blog, saving the results as JSON (Scrapy crawler diary: creating a project, extracting data, and saving it as JSON) and writing them to a database (Scrapy crawler diary: writing the crawled content to a MySQL database). That crawler is still quite fragile, though: as soon as the target site puts anti-crawler limits in place, it stops working. So this post focuses on how to keep a Scrapy crawler from getting banned. Everything here builds on those two previous posts; if you missed them, you can catch up first: Scrapy crawler diary: creating a project, extracting data, and saving it as JSON; Scrapy crawler diary: writing the crawled content to a MySQL database.
According to the Scrapy documentation (http://doc.scrapy.org/en/master/topics/practices.html#avoiding-getting-banned), the main strategies for avoiding getting banned are:
- Rotate the user agent dynamically
- Disable cookies
- Set a download delay
- Use Google cache
- Use a pool of IP addresses (Tor project, VPN, proxy IPs)
- Use Crawlera
Google cache is affected by network conditions in mainland China (you know why), and Crawlera's distributed downloading deserves an article of its own, which we can do next time. So this post covers four approaches: randomly rotating the user agent, disabling cookies, setting a download delay, and using proxy IPs. Let's get started.
1. Create middlewares.py
Both proxy IP switching and user agent switching in Scrapy are controlled through DOWNLOADER_MIDDLEWARES, so let's create a middlewares.py file.
[root@bogon cnblogs]# vi cnblogs/middlewares.py

```python
import random
import base64

from settings import PROXIES


class RandomUserAgent(object):
    """Randomly rotate user agents based on a list of predefined ones."""

    def __init__(self, agents):
        self.agents = agents

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.getlist('USER_AGENTS'))

    def process_request(self, request, spider):
        # Pick a different user agent for each request.
        request.headers.setdefault('User-Agent', random.choice(self.agents))


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        if proxy['user_pass']:
            # Check truthiness rather than `is not None`: unauthenticated
            # proxies carry an empty string in user_pass, not None.
            # encodestring() appends a trailing newline, so strip it.
            encoded_user_pass = base64.encodestring(proxy['user_pass']).strip()
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
            print "**************ProxyMiddleware have pass************" + proxy['ip_port']
        else:
            print "**************ProxyMiddleware no pass************" + proxy['ip_port']
```
The RandomUserAgent class rotates the user agent dynamically; the list of user agents, USER_AGENTS, is configured in settings.py.
The ProxyMiddleware class switches proxies; the proxy list PROXIES is likewise configured in settings.py.
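Stripped of the Scrapy plumbing, the rotation in RandomUserAgent is just one random pick per request; here is a minimal standalone sketch (the two-entry list is only a placeholder for the full USER_AGENTS list below):

```python
import random

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
]

def pick_user_agent(agents):
    # The same call the middleware makes inside process_request().
    return random.choice(agents)

ua = pick_user_agent(USER_AGENTS)
print(ua)
```

Every request ends up with one of the configured strings, so over many requests the site sees a mix of browsers rather than a single repeated signature.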
2. Edit settings.py to configure USER_AGENTS and PROXIES
a): Add USER_AGENTS
```python
USER_AGENTS = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
]
```
b): Add the proxy IP setting PROXIES
```python
PROXIES = [
    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
]
```
You can find proxy IPs with a quick web search; the ones above came from http://www.xici.net.co/.
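The proxies above are all unauthenticated, so user_pass stays empty. For a proxy that does require credentials, user_pass would hold a "user:password" string, and the middleware base64-encodes it into the Proxy-Authorization header. The credential below is made up purely to show the encoding; base64.b64encode is used here because, unlike encodestring, it adds no trailing newline and also works on Python 3:

```python
import base64

def proxy_auth_header(user_pass):
    # Build the Proxy-Authorization value from a "user:password" string.
    encoded = base64.b64encode(user_pass.encode('utf-8')).decode('ascii')
    return 'Basic ' + encoded

print(proxy_auth_header('user:pass'))  # Basic dXNlcjpwYXNz
```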
c): Disable cookies
COOKIES_ENABLED = False
d): Set a download delay
DOWNLOAD_DELAY = 3
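A perfectly regular 3-second interval is itself a detectable pattern, so it helps that Scrapy jitters the delay for you: with RANDOMIZE_DOWNLOAD_DELAY enabled (it is on by default), each wait is a random value between 0.5 and 1.5 times DOWNLOAD_DELAY. In settings.py:

```python
DOWNLOAD_DELAY = 3
# On by default: each request waits between 0.5 * 3 and 1.5 * 3 seconds.
RANDOMIZE_DOWNLOAD_DELAY = True
```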
e): Finally, set DOWNLOADER_MIDDLEWARES
```python
DOWNLOADER_MIDDLEWARES = {
    # 'cnblogs.middlewares.MyCustomDownloaderMiddleware': 543,
    'cnblogs.middlewares.RandomUserAgent': 1,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
    # On newer Scrapy versions use this path instead:
    # 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    'cnblogs.middlewares.ProxyMiddleware': 100,
}
```
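The numbers are priorities: lower values sit closer to the engine, and Scrapy calls process_request() in ascending order of them. So RandomUserAgent (1) sets the header first, then ProxyMiddleware (100) attaches the proxy and credentials, and the built-in HttpProxyMiddleware (110) runs last. A quick way to see the call order:

```python
DOWNLOADER_MIDDLEWARES = {
    'cnblogs.middlewares.RandomUserAgent': 1,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
    'cnblogs.middlewares.ProxyMiddleware': 100,
}

# Sort middleware names by priority, i.e. process_request() call order.
order = sorted(DOWNLOADER_MIDDLEWARES, key=DOWNLOADER_MIDDLEWARES.get)
for name in order:
    print(name)
```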
Save settings.py.
3. Test
[root@bogon cnblogs]# scrapy crawl CnblogsSpider
The source code has been updated here: https://github.com/jackgitgz/CnblogsSpider
Aside: in this post both the user agent and proxy lists are configured through settings.py. In production, though, they tend to change frequently, and editing a config file every time is clumsy and hard to manage, so you can store them in a MySQL database instead as needed.
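As a sketch of that idea, the PROXIES list the middleware expects can be built from a database query instead of a literal in settings.py. The table name and schema below are made up, and sqlite3 stands in for MySQL so the example stays self-contained; in a real setup you would open the connection with MySQLdb or pymysql against the same database the spider uses:

```python
import sqlite3

# Stand-in for a MySQL connection; swap in MySQLdb.connect(...) in production.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE proxies (ip_port TEXT, user_pass TEXT)")
conn.executemany("INSERT INTO proxies VALUES (?, ?)",
                 [('111.11.228.75:80', ''), ('120.198.243.22:80', '')])

def load_proxies(conn):
    # Return rows in the same dict shape ProxyMiddleware expects.
    rows = conn.execute("SELECT ip_port, user_pass FROM proxies").fetchall()
    return [{'ip_port': ip, 'user_pass': up} for ip, up in rows]

PROXIES = load_proxies(conn)
print(len(PROXIES))  # 2
```

With this in place, updating the proxy pool is an INSERT or DELETE against the table, with no code or config change needed.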