Using spoofed user agents and fake_useragent in Scrapy
Spoofing the browser user agent: some servers do not filter incoming requests very strictly, so instead of hiding behind proxy IPs you can often get by simply spoofing your browser information in the request headers.
Method 1:
1. Add the following to settings.py — a list of browser and crawler User-Agent strings:
USER_AGENT_LIST = ['zspider/0.9-dev http://feedback.redkolibri.com/',
'Xaldon_WebSpider/2.0.b1',
'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) Speedy Spider (http://www.entireweb.com/about/search_tech/speedy_spider/)',
'Mozilla/5.0 (compatible; Speedy Spider; http://www.entireweb.com/about/search_tech/speedy_spider/)',
'Speedy Spider (Entireweb; Beta/1.3; http://www.entireweb.com/about/search_tech/speedyspider/)',
'Speedy Spider (Entireweb; Beta/1.2; http://www.entireweb.com/about/search_tech/speedyspider/)',
'Speedy Spider (Entireweb; Beta/1.1; http://www.entireweb.com/about/search_tech/speedyspider/)',
'Speedy Spider (Entireweb; Beta/1.0; http://www.entireweb.com/about/search_tech/speedyspider/)',
'Speedy Spider (Beta/1.0; www.entireweb.com)',
'Speedy Spider (http://www.entireweb.com/about/search_tech/speedy_spider/)',
'Speedy Spider (http://www.entireweb.com/about/search_tech/speedyspider/)',
'Speedy Spider (http://www.entireweb.com)',
'Sosospider+(+http://help.soso.com/webspider.htm)',
'sogou spider',
'Nusearch Spider (www.nusearch.com)',
'nuSearch Spider (compatible; MSIE 4.01; Windows NT)',
'lmspider (lmspider@scansoft.com)',
'lmspider lmspider@scansoft.com',
'ldspider (http://code.google.com/p/ldspider/wiki/Robots)',
'iaskspider/2.0(+http://iask.com/help/help_index.html)',
'iaskspider',
'hl_ftien_spider_v1.1',
'hl_ftien_spider',
'FyberSpider (+http://www.fybersearch.com/fyberspider.php)',
'FyberSpider',
'everyfeed-spider/2.0 (http://www.everyfeed.com)',
'envolk[ITS]spider/1.6 (+http://www.envolk.com/envolkspider.html)',
'envolk[ITS]spider/1.6 ( http://www.envolk.com/envolkspider.html)',
'Baiduspider+(+http://www.baidu.com/search/spider_jp.html)',
'Baiduspider+(+http://www.baidu.com/search/spider.htm)',
'BaiDuSpider',
'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0) AddSugarSpiderBot www.idealobserver.com',
]
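The core of this method is just picking one entry from the list at random. A minimal standalone sketch, using a short hypothetical list in place of the full one above:

```python
import random

# Short stand-in for the USER_AGENT_LIST defined in settings.py
USER_AGENT_LIST = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'sogou spider',
    'BaiDuSpider',
]

def pick_user_agent(ua_list):
    """Return one User-Agent string chosen uniformly at random."""
    return random.choice(ua_list)

print(pick_user_agent(USER_AGENT_LIST))
```

Each request that goes through the middleware below gets a fresh call to `random.choice`, so successive requests present different identities.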
2. Create a MidWare folder at the same level as the spiders directory and, inside it, a HeaderMidWare.py file with the following content (an empty __init__.py in MidWare may be needed so it imports as a package):
# encoding: utf-8
from scrapy.utils.project import get_project_settings
import random

settings = get_project_settings()


class ProcessHeaderMidware():
    """Process each outgoing request and attach a random User-Agent."""

    def process_request(self, request, spider):
        """Pick a random header from the list and use it as the User-Agent."""
        ua = random.choice(settings.get('USER_AGENT_LIST'))
        spider.logger.info(msg='now entering download midware')
        if ua:
            request.headers['User-Agent'] = ua
            # Add desired logging message here.
            spider.logger.info(u'User-Agent is : {} {}'.format(request.headers.get('User-Agent'), request))
3. Enable the middleware in settings.py:
DOWNLOADER_MIDDLEWARES = {
    'projectName.MidWare.HeaderMidWare.ProcessHeaderMidware': 543,
}
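Outside a running crawl, you can check the middleware logic by driving `process_request` with small stand-in objects. The `FakeRequest` and `FakeSpider` classes here are hypothetical stubs, not Scrapy's real `Request` and `Spider`, and the UA list is abbreviated:

```python
import logging
import random

USER_AGENT_LIST = ['sogou spider', 'BaiDuSpider']  # abbreviated stand-in


class FakeRequest:
    """Minimal stand-in for scrapy.Request: just a headers dict."""
    def __init__(self):
        self.headers = {}


class FakeSpider:
    """Minimal stand-in exposing only the logger the middleware uses."""
    logger = logging.getLogger('demo')


class ProcessHeaderMidware:
    """Same logic as the middleware above, with the UA list inlined."""
    def process_request(self, request, spider):
        ua = random.choice(USER_AGENT_LIST)
        if ua:
            request.headers['User-Agent'] = ua
            spider.logger.info('User-Agent is : %s', ua)


req = FakeRequest()
ProcessHeaderMidware().process_request(req, FakeSpider())
print(req.headers['User-Agent'])
```

In a real crawl Scrapy itself calls `process_request` for every request that passes through the downloader middleware chain; the stubs only exist to exercise that path in isolation.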
Method 2: using fake_useragent
fake_useragent is an open-source project on GitHub.
1. Install fake-useragent:
pip install fake-useragent
2. In the MidWare folder (same level as the spiders directory), create a user_agent_middlewares.py file with the following content:
# -*- coding: utf-8 -*-
from fake_useragent import UserAgent


class RandomUserAgentMiddlware(object):
    """Swap in a random User-Agent on every request."""

    def __init__(self, crawler):
        super(RandomUserAgentMiddlware, self).__init__()
        self.ua = UserAgent()
        # Read RANDOM_UA_TYPE from the settings file, defaulting to 'random'
        self.ua_type = crawler.settings.get('RANDOM_UA_TYPE', 'random')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        # Called by Scrapy for every outgoing request
        def get_ua():
            return getattr(self.ua, self.ua_type)

        # Note the header name is 'User-Agent' (hyphen, not underscore)
        request.headers.setdefault('User-Agent', get_ua())
3. Add to settings.py:
RANDOM_UA_TYPE = 'random'  # options include 'random' and 'chrome'
DOWNLOADER_MIDDLEWARES = {
    'projectName.MidWare.user_agent_middlewares.RandomUserAgentMiddlware': 543,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
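The RANDOM_UA_TYPE setting works because fake_useragent exposes each browser family as an attribute on the UserAgent object (e.g. ua.random, ua.chrome), and `getattr(self.ua, self.ua_type)` turns the setting string into that attribute lookup. A sketch of the pattern using a hypothetical stub in place of the real UserAgent class, so it runs without the library or network access:

```python
class StubUserAgent:
    """Hypothetical stand-in for fake_useragent.UserAgent:
    each attribute holds a User-Agent string."""
    random = 'Mozilla/5.0 (random stub)'
    chrome = 'Mozilla/5.0 (Chrome stub)'
    firefox = 'Mozilla/5.0 (Firefox stub)'


def get_ua(ua, ua_type='random'):
    # Same pattern as the middleware: map the setting string to an attribute
    return getattr(ua, ua_type)


ua = StubUserAgent()
print(get_ua(ua, 'chrome'))  # the Chrome-style string
print(get_ua(ua))            # falls back to the 'random' attribute
```

Setting RANDOM_UA_TYPE to 'chrome' therefore pins every request to Chrome-style strings, while 'random' draws from all families.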
With that, fake_useragent spoofing is fully configured. Compared with the first method, there is no need to paste in a long list of browser headers; fake_useragent fetches its strings from https://fake-useragent.herokuapp.com/browsers/0.1.7.
The first time fake_useragent runs you may hit some errors; I believe this happens because the project has to fetch and cache data over the network on first use.