Scrapy in Practice: Scraping Data from http://quotes.toscrape.com


What this post covers:

1. The overall Scrapy workflow and what each file in a project is for

2. Storing scraped data in a MongoDB database from within Scrapy

3. Extracting the next-page link and calling back into the same parse function to keep fetching data

 

Key point: extract the next-page link from the current page and hand it back to the parse function itself to issue the next request:

        next = response.css('.pager .next a::attr(href)').extract_first()  # extract the relative link to the next page
        url = response.urljoin(next)  # build the absolute next-page URL
        yield scrapy.Request(url=url, callback=self.parse)  # request the next page, with parse itself as the callback
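
A side note, not part of the original project: on Scrapy 1.4 and later, response.follow() accepts a relative URL and performs the urljoin step itself, so the same idea can be sketched as:

        next = response.css('.pager .next a::attr(href)').extract_first()
        if next is not None:                                  # guard: the last page has no "next" link
            yield response.follow(next, callback=self.parse)  # follow() resolves the relative URL itself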

 

Site: http://quotes.toscrape.com

The site's page structure is quite simple: every quote we need sits in its own div tag.
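
The CSS selectors used later can be tried out interactively in the Scrapy shell first (an illustrative session; outputs are omitted here):

# scrapy shell http://quotes.toscrape.com
>>> quote = response.css('.quote')[0]           # each quote lives in a div with class "quote"
>>> quote.css('.text::text').extract_first()    # the quote text
>>> quote.css('.author::text').extract_first()  # the author name
>>> quote.css('.tags .tag::text').extract()     # the list of tags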

 

Steps:

1. Create the project

# scrapy startproject quotetutorial

The directory structure at this point looks as follows:
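
Roughly, this is the standard layout that scrapy startproject generates (comments added for orientation):

quotetutorial/
    scrapy.cfg              # deploy configuration file
    quotetutorial/          # the project's Python module
        __init__.py
        items.py            # item definitions (step 3)
        middlewares.py      # spider and downloader middlewares
        pipelines.py        # item pipelines (step 5)
        settings.py         # project settings (step 6)
        spiders/            # the spiders live here (step 4)
            __init__.py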

2. Generate the spider file

# cd quotetutorial
# scrapy genspider quotes quotes.toscrape.com # if the project needs several spiders, just run this command once for each
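
This creates quotetutorial/spiders/quotes.py; the generated stub looks roughly like the following (the exact template can differ slightly between Scrapy versions):

# -*- coding: utf-8 -*-
import scrapy


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        pass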

3. Edit items.py to define the fields we want to output

import scrapy


class QuoteItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    text = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()
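
A QuoteItem behaves like a dict with a fixed set of declared keys, which is what the spider and pipeline below rely on (a quick illustration, not part of the project files):

item = QuoteItem()
item['text'] = 'some quote'   # dict-style assignment to a declared field
item['author'] = 'someone'
print(item['author'])         # dict-style access
print(dict(item))             # convert to a plain dict (this is what gets stored in MongoDB later)
# item['foo'] = 1             # would raise KeyError: only declared fields are allowed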

 

4. Edit quotes.py to scrape the site data

# -*- coding: utf-8 -*-
import scrapy

from quotetutorial.items import QuoteItem


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # print(response.status)  # 200
        quotes = response.css('.quote')
        for quote in quotes:
            item = QuoteItem()
            text = quote.css('.text::text').extract_first()
            author = quote.css('.author::text').extract_first()
            tags = quote.css('.tags .tag::text').extract()
            item['text'] = text
            item['author'] = author
            item['tags'] = tags
            yield item
        next = response.css('.pager .next a::attr(href)').extract_first()  # extract the relative link to the next page
        url = response.urljoin(next)  # build the absolute next-page URL
        yield scrapy.Request(url=url, callback=self.parse)  # request the next page, with parse itself as the callback
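
A side note: newer Scrapy versions also provide the selector shortcuts .get() and .getall(), equivalent to .extract_first() and .extract(), so the extraction lines can equally be written as:

            text = quote.css('.text::text').get()
            author = quote.css('.author::text').get()
            tags = quote.css('.tags .tag::text').getall()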

 

5. Write pipelines.py to post-process the item data and save it to a MongoDB database

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

# To use these pipelines they must be enabled in the settings file (step 6)


import pymongo
from scrapy.exceptions import DropItem


class TextPipeline(object):
    """Post-process the scraped items (truncate overly long quote text)."""

    def __init__(self):
        self.limit = 50

    def process_item(self, item, spider):
        if item['text']:
            if len(item['text']) > self.limit:
                item['text'] = item['text'][0:self.limit].rstrip() + '......'
            return item
        else:
            raise DropItem('Missing Text!')  # DropItem is an exception, so it should be raised rather than returned


class MongoPipeline(object):
    """Save the scraped items to a MongoDB database."""

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        """Read the connection parameters from the settings file."""
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        """Open the MongoDB connection when the spider starts."""
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]  # [] (dict-style lookup) is how pymongo selects a database by name

    def process_item(self, item, spider):
        name = item.__class__.__name__  # use the item class name (QuoteItem) as the collection name
        # dict(item) turns the Scrapy Item into a plain dict that pymongo can store
        # (insert() is deprecated in newer pymongo; insert_one() is the modern equivalent)
        self.db[name].insert(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()

 

6. Edit the settings file to add the MongoDB parameters and enable the pipeline

ITEM_PIPELINES = {
    # 'quotetutorial.pipelines.TextPipeline': 300,
    'quotetutorial.pipelines.MongoPipeline': 400,
}

MONGO_URI = 'localhost'
MONGO_DB = 'quotestutorial'
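
The number assigned to each pipeline is its priority, and lower numbers run first. If you also want TextPipeline to truncate long quotes before they are stored, enable both entries, for example:

ITEM_PIPELINES = {
    'quotetutorial.pipelines.TextPipeline': 300,   # runs first: trims the quote text
    'quotetutorial.pipelines.MongoPipeline': 400,  # runs second: stores the (possibly truncated) item
}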

 

7. Run the crawl

# scrapy crawl quotes

8. Save the output to a file

# scrapy crawl quotes -o quotes.json # save as a JSON file
# scrapy crawl quotes -o quotes.csv # save as a CSV file
# scrapy crawl quotes -o quotes.xml # save as an XML file
# scrapy crawl quotes -o quotes.jl # save as a JSON lines (.jl) file
# scrapy crawl quotes -o quotes.pickle # save as a pickle file
# scrapy crawl quotes -o quotes.marshal # save as a marshal file
# scrapy crawl quotes -o ftp://user:password@ftp.example.com/path/quotes.csv # export a CSV file straight to a remote FTP server
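
Scrapy infers the export format from the file extension. If the exported JSON shows escaped non-ASCII characters, the output encoding can be forced in settings.py (assuming a Scrapy version recent enough to support this setting):

FEED_EXPORT_ENCODING = 'utf-8'  # write exported feeds as UTF-8 instead of escaping non-ASCII characters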

 

Result:
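
A minimal sketch for checking the stored data from a Python shell (assuming MongoDB is running locally, the crawl has finished, and pymongo 3.7+ for count_documents(); the collection name QuoteItem comes from the item class name used in the pipeline):

import pymongo

client = pymongo.MongoClient('localhost')
db = client['quotestutorial']
print(db['QuoteItem'].count_documents({}))  # total number of quotes stored
print(db['QuoteItem'].find_one())           # inspect one stored document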

 

Source code download: https://files.cnblogs.com/files/sanduzxcvbnm/quotetutorial.7z

