A First Look at CrawlSpider in Python


Notes:

"""
        1、用命令創建一個crawlspider的模板:scrapy genspider -t crawl <爬蟲名> <all_domain>,也可以手動創建
        2、CrawlSpider中不能再有以parse為名字的數據提取方法,這個方法被CrawlSpider用來實現基礎url提取等功能
        3、一個Rule對象接受很多參數,首先第一個是包含url規則的LinkExtractor對象,
        常有的還有callback(制定滿足規則的解析函數的字符串)和follow(response中提取的鏈接是否需要跟進)
        4、不指定callback函數的請求下,如果follow為True,滿足rule的url還會繼續被請求
        5、如果多個Rule都滿足某一個url,會從rules中選擇第一個滿足的進行操作
    """

1. Create the project

scrapy startproject zjh
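
This creates the standard Scrapy project skeleton (layout shown for the Scrapy version used here; newer versions are similar):

zjh/
    scrapy.cfg
    zjh/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py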

2. Create the spider

scrapy genspider -t crawl circ bxjg.circ.gov.cn
Unlike the plain scrapy genspider command, this adds the -t crawl parameter to use the CrawlSpider template.
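
The generated circ.py starts from the crawl template, which looks roughly like this (exact stub contents vary by Scrapy version); step 4 below replaces the placeholder rule and parse_item:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CircSpider(CrawlSpider):
    name = 'circ'
    allowed_domains = ['bxjg.circ.gov.cn']
    start_urls = ['http://bxjg.circ.gov.cn/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        return item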

3. Add the log level and USER_AGENT to the settings file

# -*- coding: utf-8 -*-

# Scrapy settings for zjh project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'zjh'

SPIDER_MODULES = ['zjh.spiders']
NEWSPIDER_MODULE = 'zjh.spiders'

LOG_LEVEL = "WARNING"
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'zjh (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'zjh.middlewares.ZjhSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'zjh.middlewares.ZjhDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'zjh.pipelines.ZjhPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

4. Extract the data in circ.py

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

import re
class CircSpider(CrawlSpider):
    name = 'circ'
    allowed_domains = ['bxjg.circ.gov.cn']
    start_urls = ['http://bxjg.circ.gov.cn/web/site0/tab5240/module14430/page1.htm']

    # Define the rules for extracting URLs
    rules = (
        # Each Rule is one rule; the LinkExtractor is the link extractor that pulls URLs out of the response.
        # allow: regex for the URLs to extract; the matched URLs are incomplete (relative),
        #        but CrawlSpider completes them for us before requesting them.
        # callback: the response for each extracted URL is handed to this callback for processing.
        # follow: whether the current URL's response should be run through the rules again to extract more URLs.
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/info\d+\.htm'), callback='parse_item'),  # detail pages; no follow needed
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/module14430/page\d+\.htm'), follow=True),  # next page; no callback needed, but follow=True keeps paging
    )
    # parse has special behavior in CrawlSpider and must not be redefined
    def parse_item(self, response):
        item = {}
        item["title"] = re.findall(r"<!--TitleStart-->(.*?)<!--TitleEnd-->", response.body.decode())[0]
        # the capture group keeps just the date, dropping the "發布時間:" label
        item["publish_date"] = re.findall(r"發布時間:(20\d{2}-\d{2}-\d{2})", response.body.decode())[0]
        print(item)
        # Requests can also be constructed manually with scrapy.Request()
        # yield scrapy.Request(
        #     url,
        #     callback=self.parse_detail,
        #     meta={"item": item}
        # )
    def parse_detail(self,response):
        pass
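
With the spider in place, run it from the project root. Because LOG_LEVEL is set to "WARNING", only the print(item) output (plus any warnings) appears in the console:

scrapy crawl circ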
