Five Ways to Set an IP Proxy


When we write a crawler to fetch the data we want, the fetching is automated, high-volume, and fast, which usually puts significant load on the target site's servers. If the same IP repeatedly crawls the same pages, it is very likely to get banned. This article introduces techniques for avoiding such bans; even so, when building a crawler you should still add appropriate delay code to reduce the impact on the target site.
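On that last point, a minimal sketch of a randomized delay between requests (the bounds here are arbitrary, not recommendations):

```python
import random
import time

def polite_sleep(min_s=1.0, max_s=3.0):
    """Sleep for a random interval to avoid hammering the target server."""
    delay = random.uniform(min_s, max_s)
    time.sleep(delay)
    return delay

# Call it between successive fetches:
# for url in urls:
#     fetch(url)
#     polite_sleep()
```

A randomized interval is slightly better than a fixed one, since perfectly regular request timing is itself a bot signal.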

1. Setting a proxy with requests:

import requests

proxies = {
    "http": "http://192.10.1.10:8080",
    "https": "http://193.121.1.10:9080",
}

requests.get("http://targetwebsite.com", proxies=proxies)
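Building on the snippet above, a common pattern is to rotate through a pool of proxies so consecutive requests come from different IPs. A sketch, assuming a hypothetical `PROXY_POOL` of working addresses:

```python
import random

import requests  # third-party: pip install requests

PROXY_POOL = [  # hypothetical proxy addresses
    "http://192.10.1.10:8080",
    "http://193.121.1.10:9080",
]

def build_proxies(proxy):
    """Route both http and https traffic through the same proxy."""
    return {"http": proxy, "https": proxy}

def get_with_random_proxy(url, pool=PROXY_POOL, timeout=10):
    """Pick a proxy at random for each request."""
    return requests.get(url, proxies=build_proxies(random.choice(pool)),
                        timeout=timeout)
```

In practice you would also want to catch `requests.exceptions.ProxyError` and retire dead proxies from the pool.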

2. Setting a proxy with Selenium + Chrome:

from selenium import webdriver

PROXY = "192.206.133.227:8080"

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server={0}'.format(PROXY))

browser = webdriver.Chrome(chrome_options=chrome_options)
browser.get('http://www.targetwebsite.com')

print(browser.page_source)

browser.close()

3. Setting a proxy with Selenium + PhantomJS:

 

from selenium import webdriver
from selenium.webdriver.common.proxy import ProxyType

browser = webdriver.PhantomJS()

# Use a DesiredCapabilities proxy setting to start a new session.
proxy = webdriver.Proxy()
proxy.proxy_type = ProxyType.MANUAL
proxy.http_proxy = '192.25.171.51:8080'

# Add the proxy settings to webdriver.DesiredCapabilities.PHANTOMJS
proxy.add_to_capabilities(webdriver.DesiredCapabilities.PHANTOMJS)
browser.start_session(webdriver.DesiredCapabilities.PHANTOMJS)

browser.get('http://www.targetwebsite.com')

print(browser.page_source)

# To revert to the system proxy, just set proxy_type again
proxy.proxy_type = ProxyType.DIRECT
proxy.add_to_capabilities(webdriver.DesiredCapabilities.PHANTOMJS)
browser.start_session(webdriver.DesiredCapabilities.PHANTOMJS)

 

4. Setting a proxy in the Scrapy crawler framework:

Add the proxy IPs in settings.py:

PROXIES = ['http://173.207.95.27:8080',
           'http://111.8.100.99:8080',
           'http://126.75.99.113:8080',
           'http://68.146.165.226:3128']

Then add the following code to middlewares.py.

import random

from scrapy import signals


class ProxyMiddleware(object):
    '''Set the proxy'''

    def __init__(self, ip):
        self.ip = ip

    @classmethod
    def from_crawler(cls, crawler):
        return cls(ip=crawler.settings.get('PROXIES'))

    def process_request(self, request, spider):
        ip = random.choice(self.ip)
        request.meta['proxy'] = ip

Finally, add our custom class to the downloader middleware settings, as follows.

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.ProxyMiddleware': 543,
}
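Echoing the intro's advice about delays, Scrapy can also throttle itself from settings.py; these options pair well with proxy rotation (the values below are illustrative, not recommendations):

```python
# settings.py -- illustrative values
DOWNLOAD_DELAY = 2                # base delay between requests, in seconds
RANDOMIZE_DOWNLOAD_DELAY = True   # jitter the delay between 0.5x and 1.5x
AUTOTHROTTLE_ENABLED = True       # let Scrapy adapt the delay to server load
```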

 

5. Setting a proxy with Python's asynchronous aiohttp:

import aiohttp

proxy = "http://192.121.1.10:9080"

# inside a coroutine:
async with aiohttp.ClientSession() as session:
    async with session.get("http://python.org", proxy=proxy) as resp:
        print(resp.status)

 

# HTTPS method 1: a SOCKS connector (from the aiohttp_socks package)
# connector = SocksConnector.from_url('socks5://localhost:1080', rdns=True)
# async with aiohttp.ClientSession(connector=connector) as session:
# HTTPS method 2: pass the proxy on each request (unlike requests,
# aiohttp has no session-level proxies dict)
async with aiohttp.ClientSession() as session:
    headers = {'content-type': 'image/gif',
               'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'}
    cookies = {'cookies_are': 'working'}
    proxy = "http://127.0.0.1:1080"
    with async_timeout.timeout(10):  # cap the whole request at 10 seconds
        async with session.get(url, headers=headers, cookies=cookies,
                               proxy=proxy, verify_ssl=False) as res:
            text = await res.text()
            print(text)

