Using pyppeteer instead of Selenium (to avoid anti-crawler detection)


Like Selenium, pyppeteer can drive a browser and offers the same functionality. However, many anti-crawler sites can detect Selenium, so Selenium still fails to retrieve the data; in those cases we turn to pyppeteer.

The following is from the official documentation:

  

Installation

Pyppeteer requires Python 3.6+ (Python 3.5 is supported experimentally).

Install via pip from PyPI:

python3 -m pip install pyppeteer

Or install the latest version from GitHub:

python3 -m pip install -U git+https://github.com/miyakogi/pyppeteer.git@dev

Usage

Note: the first time you run pyppeteer, it downloads a recent version of Chromium (~100MB). If you would rather this not happen at run time, run the pyppeteer-install command before running scripts that use pyppeteer.
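If you need a specific Chromium build rather than the default, the download can be pinned through the PYPPETEER_CHROMIUM_REVISION environment variable (the hands-on example later in this article uses it too). A minimal sketch; the revision number 588429 is purely an example, and in the pyppeteer releases I am aware of the variable is read when the package is imported:

import os

# Example revision only; choose one compatible with your pyppeteer version.
# Set the variable before importing pyppeteer, which reads it at import time.
os.environ['PYPPETEER_CHROMIUM_REVISION'] = '588429'

import pyppeteer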

Example: open a web page and take a screenshot.

import asyncio
from pyppeteer import launch async def main(): browser = await launch() page = await browser.newPage() await page.goto('http://example.com') await page.screenshot({'path': 'example.png'}) await browser.close() asyncio.get_event_loop().run_until_complete(main())

Example: evaluate a script on the page.

import asyncio
from pyppeteer import launch async def main(): browser = await launch() page = await browser.newPage() await page.goto('http://example.com') await page.screenshot({'path': 'example.png'}) dimensions = await page.evaluate('''() => {  return {  width: document.documentElement.clientWidth,  height: document.documentElement.clientHeight,  deviceScaleFactor: window.devicePixelRatio,  }  }''') print(dimensions) # >>> {'width': 800, 'height': 600, 'deviceScaleFactor': 1} await browser.close() asyncio.get_event_loop().run_until_complete(main())

Pyppeteer has almost the same API as Puppeteer. More APIs are listed in the documentation.

Puppeteer's documentation and troubleshooting guide are also useful for pyppeteer users.

Differences between puppeteer and pyppeteer

Pyppeteer aims to mirror Puppeteer as closely as possible, but some differences between Python and JavaScript make exact parity difficult.

These are the main differences between Puppeteer and pyppeteer.

Keyword arguments for options

Puppeteer uses an object (a dictionary in Python) to pass options to functions/methods. Pyppeteer accepts both dictionaries and keyword arguments for options.

Dictionary-style options (similar to Puppeteer):

browser = await launch({'headless': True})

Keyword-argument style (more Pythonic, isn't it?):

browser = await launch(headless=True)
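Several options can of course be combined in either style. A quick sketch; the '--no-sandbox' Chromium flag is shown purely as an illustration:

# Keyword style with several options:
browser = await launch(headless=True, args=['--no-sandbox'])

# The equivalent dictionary style:
browser = await launch({'headless': True, 'args': ['--no-sandbox']})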


A hands-on example:
  
import asyncio
import os

# Pin the Chromium revision to download. Note: this variable should be set
# before pyppeteer is imported, since the downloader reads it at import time.
os.environ['PYPPETEER_CHROMIUM_REVISION'] = '588429'

import pyppeteer

pyppeteer.DEBUG = True

async def main():
    print("in main ")
    print(os.environ.get('PYPPETEER_CHROMIUM_REVISION'))
    browser = await pyppeteer.launch()
    page = await browser.newPage()
    await page.goto('http://www.baidu.com')
    content = await page.content()
    cookies = await page.cookies()
    # await page.screenshot({'path': 'example.png'})
    await browser.close()
    return {'content': content, 'cookies': cookies}

loop = asyncio.get_event_loop()
task = asyncio.ensure_future(main())
loop.run_until_complete(task)
print(task.result())
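Since the whole point of moving from Selenium to pyppeteer is to avoid detection, it is also worth masking the navigator.webdriver flag that many anti-crawler checks inspect. A minimal sketch using pyppeteer's page.setUserAgent and page.evaluateOnNewDocument; the function name fetch_stealthy and the user agent string are illustrative, not part of the original example:

import asyncio
import pyppeteer

async def fetch_stealthy(url):
    browser = await pyppeteer.launch(headless=True)
    page = await browser.newPage()
    # Illustrative user agent; pick one appropriate for the target site.
    await page.setUserAgent(
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
        '(KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36')
    # Runs before any page script: hide the webdriver flag that
    # anti-crawler JavaScript commonly looks for.
    await page.evaluateOnNewDocument(
        "() => { Object.defineProperty(navigator, 'webdriver', "
        "{ get: () => undefined }); }")
    await page.goto(url)
    content = await page.content()
    await browser.close()
    return content

content = asyncio.get_event_loop().run_until_complete(
    fetch_stealthy('http://www.baidu.com'))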

Integration with Scrapy

Add a downloader middleware:

from scrapy import signals
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware
import random
import pyppeteer
import asyncio
import os
from scrapy.http import HtmlResponse

pyppeteer.DEBUG = False

class FundscrapyDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    def __init__(self):
        print("Init downloaderMiddleware use pyppeteer.")
        os.environ['PYPPETEER_CHROMIUM_REVISION'] = '588429'
        # pyppeteer.DEBUG = False
        print(os.environ.get('PYPPETEER_CHROMIUM_REVISION'))
        # Launch the browser once and reuse a single page for every request.
        loop = asyncio.get_event_loop()
        task = asyncio.ensure_future(self.getbrowser())
        loop.run_until_complete(task)
        # self.browser = task.result()
        print(self.browser)
        print(self.page)
        # self.page = await browser.newPage()

    async def getbrowser(self):
        self.browser = await pyppeteer.launch()
        self.page = await self.browser.newPage()
        # return await pyppeteer.launch()

    async def getnewpage(self):
        return await self.browser.newPage()

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called

        # Fetch the page with pyppeteer and hand the rendered HTML back
        # to Scrapy as an HtmlResponse, bypassing the default downloader.
        loop = asyncio.get_event_loop()
        task = asyncio.ensure_future(self.usePypuppeteer(request))
        loop.run_until_complete(task)
        # return task.result()
        return HtmlResponse(url=request.url, body=task.result(),
                            encoding="utf-8", request=request)

    async def usePypuppeteer(self, request):
        print(request.url)
        # page = await self.browser.newPage()
        await self.page.goto(request.url)
        content = await self.page.content()
        return content

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
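To activate the middleware, register it under DOWNLOADER_MIDDLEWARES in the project's settings.py. The module path below is hypothetical; adjust it to wherever the class actually lives in your project:

# settings.py (module path is hypothetical; adapt to your project layout)
DOWNLOADER_MIDDLEWARES = {
    'fundscrapy.middlewares.FundscrapyDownloaderMiddleware': 543,
}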

Author: 金刚_30bf
Link: https://www.jianshu.com/p/fd9eb385a70e
Source: 简书 (Jianshu)
Copyright belongs to the author. For any form of reproduction, please contact the author for authorization and credit the source.

