Crawler Basics 15 (Deduplication with Scrapy-redis)


Deduplication with Scrapy-redis

1. Install scrapy-redis

pip3 install scrapy-redis

2. A fully custom Redis dedup filter

The idea: compute a fingerprint for every request, SADD it into a Redis set, and treat the request as a duplicate when the fingerprint is already present.

import redis
from scrapy.dupefilters import BaseDupeFilter
# request_fingerprint computes an MD5-like digest of a request:
# identical URLs produce identical fingerprints
from scrapy.utils.request import request_fingerprint


class DupFilter(BaseDupeFilter):
    def __init__(self):
        self.conn = redis.Redis(host='127.0.0.1', port=6379)

    def request_seen(self, request):
        """
        Check whether this request has already been visited.
        :param request: the Request being scheduled
        :return: True if already visited; False if not visited yet
        """
        fid = request_fingerprint(request)
        # SADD into a Redis set: returns 1 if newly added, 0 if already present
        result = self.conn.sadd('visited_urls', fid)
        if result == 1:
            return False
        return True
translate.py (the dedup filter)
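As a side note, here is a minimal sketch (my own illustration, not from the original post) of why this works: request_fingerprint yields identical digests for identical requests, so the Redis set stores exactly one entry per logical URL. The example URLs are made up.

from scrapy.http import Request
from scrapy.utils.request import request_fingerprint

# Hypothetical URLs, purely for illustration
r1 = Request('https://example.com/page?id=1')
r2 = Request('https://example.com/page?id=1')
r3 = Request('https://example.com/page?id=2')

print(request_fingerprint(r1) == request_fingerprint(r2))  # True: same URL
print(request_fingerprint(r1) == request_fingerprint(r3))  # False: different URL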
# Override the default dupefilter in settings.py
# DUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'

DUPEFILTER_CLASS = 'xxd.translate.DupFilter'
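You can verify the dedup state by talking to Redis directly. A minimal sketch, assuming a local Redis on the default port and the 'visited_urls' key used above ('demo-fingerprint' is a made-up member):

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)
# SADD returns 1 when the member is newly added and 0 when it already
# exists -- exactly the distinction request_seen() relies on
print(conn.sadd('visited_urls', 'demo-fingerprint'))  # 1: first time seen
print(conn.sadd('visited_urls', 'demo-fingerprint'))  # 0: duplicate
print(conn.scard('visited_urls'))                     # count of stored fingerprints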

3. Using the dedup rules that ship with scrapy-redis

scrapy-redis already provides an RFPDupeFilter that stores fingerprints in Redis; the subclass below only overrides from_settings so the Redis key is a fixed name rather than the one-time timestamp default.

from scrapy_redis.dupefilter import RFPDupeFilter
from scrapy_redis.connection import get_redis_from_settings
from scrapy_redis import defaults

class RedisDupeFilter(RFPDupeFilter):
    @classmethod
    def from_settings(cls, settings):
        """Returns an instance from given settings.

        This uses by default the key ``dupefilter:<timestamp>``. When using the
        ``scrapy_redis.scheduler.Scheduler`` class, this method is not used as
        it needs to pass the spider name in the key.

        Parameters
        ----------
        settings : scrapy.settings.Settings

        Returns
        -------
        RFPDupeFilter
            A RFPDupeFilter instance.
        """
        server = get_redis_from_settings(settings)
        # XXX: This creates one-time key. needed to support to use this
        # class as standalone dupefilter with scrapy's default scheduler
        # if scrapy passes spider on open() method this wouldn't be needed
        # TODO: Use SCRAPY_JOB env as default and fallback to timestamp.
        # Pinning the timestamp placeholder to a fixed string keeps the
        # dedup key stable across runs (see the sketch after this file).
        key = defaults.DUPEFILTER_KEY % {'timestamp': 'xiaodongbei'}
        debug = settings.getbool('DUPEFILTER_DEBUG')
        return cls(server, key=key, debug=debug)
duplicate_removal.py
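For reference, a quick sketch of what the key substitution in from_settings produces; defaults.DUPEFILTER_KEY is the template 'dupefilter:%(timestamp)s', so pinning the timestamp field gives a stable key:

from scrapy_redis import defaults

key = defaults.DUPEFILTER_KEY % {'timestamp': 'xiaodongbei'}
print(key)  # dupefilter:xiaodongbei
# Because the key no longer changes per run, the fingerprint set in
# Redis persists across spider restarts.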
# ############### scrapy-redis connection ####################

REDIS_HOST = '140.143.227.206'                      # Redis host
REDIS_PORT = 8888                                   # Redis port
REDIS_PARAMS = {'password': 'beta'}                 # extra connection kwargs; default:
                                                    # {'socket_timeout': 30, 'socket_connect_timeout': 30,
                                                    #  'retry_on_timeout': True, 'encoding': REDIS_ENCODING}
REDIS_ENCODING = "utf-8"                            # Redis encoding; default: 'utf-8'

# REDIS_URL = 'redis://user:pass@hostname:9001'     # connection URL (takes precedence over the settings above)
DUPEFILTER_KEY = 'dupefilter:%(timestamp)s'

# DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'
DUPEFILTER_CLASS = 'dbd.duplicate_removal.RedisDupeFilter'
settings.py (configuration)
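A minimal sketch (my own addition, not from the original post) of how scrapy-redis turns these settings into a connection; get_redis_from_settings is the same helper the dupefilter calls in from_settings:

from scrapy.settings import Settings
from scrapy_redis.connection import get_redis_from_settings

settings = Settings({
    'REDIS_HOST': '140.143.227.206',
    'REDIS_PORT': 8888,
    'REDIS_PARAMS': {'password': 'beta'},
})
server = get_redis_from_settings(settings)
server.ping()  # raises a redis exception if host/port/password are wrong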

 

