Writing an mzitu (妹子圖) image crawler with Python and Scrapy


 

Software and framework installation mainly follows: http://www.jianshu.com/p/a03aab073a35

Scrapy official documentation: https://docs.scrapy.org/en/latest/intro/install.html (useful for both installation and spider writing)

Program logic and flow mainly follow http://cuiqingcai.com/4421.html.

 

Other details can be found by searching Baidu.

 

Environment:

macOS 10.12.3

Python 2.7

Scrapy 1.3.3

 

I. Installing the software (Python) and the framework (Scrapy)

macOS ships with a system Python, but following the advice on the Scrapy website, it is best to install the latest Python release yourself.

1. Install the supporting environment for the pip package tool

Every machine is different, and so are the missing support packages. In short: install whatever is missing. I used Homebrew.
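As a concrete sketch (hedged: formula names and what they bundle depend on your Homebrew version; this is not from the original article), installing a standalone Python and pip with Homebrew looks roughly like this:

brew install python     # on the Homebrew of that era this installed Python 2.7 together with pip
python --version        # check which interpreter is actually picked up
pip --version           # confirm pip is available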

2. Change the pip source (mirror address)

Be sure to do this. The default index is extremely slow to reach, even through a proxy; in hindsight, about half of my wasted time was caused by this alone.

First create the configuration file. By default a Mac does not seem to ship with a pip configuration file, so we create it ourselves.
Open a terminal and create a .pip directory under HOME:
cd $HOME
mkdir .pip
Then create the configuration file pip.conf inside that directory:
touch .pip/pip.conf
Now open the newly created pip.conf with whatever editor you like and enter the following two lines:

[global]
index-url = http://pypi.mirrors.ustc.edu.cn/simple
Save and exit, and the pip source has been switched.
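One extra note (not from the original article): because the mirror above is referenced over plain http, some pip versions refuse it as an untrusted host. If you see such a warning, extending pip.conf with a trusted-host line (or switching the URL to https, if the mirror supports it) should help:

[global]
index-url = http://pypi.mirrors.ustc.edu.cn/simple
trusted-host = pypi.mirrors.ustc.edu.cn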

There are plenty of mirrors inside China; see http://it.taocms.org/08/8567.htm and http://www.jianshu.com/p/a03aab073a35 and use whichever one works for you.

3. Install the Command Line Tools. From the HOME directory, run in a terminal:

xcode-select --install

4. Install Scrapy

In a terminal, run pip install Scrapy

If it fails, read the error message. For example, if the failure is related to six, upgrade the six package: sudo pip install --upgrade six

With pip you can install or upgrade most of the supporting packages. In short: install whichever package is missing, and upgrade whichever has the wrong version. Scrapy's dependencies and their versions are listed at https://pypi.python.org/pypi/Scrapy/1.3.3; visit the page for whichever Scrapy version you need.

The other problems I ran into were version mismatches among Scrapy's supporting packages and permission issues with the terminal account. Be prepared to experiment; installing everything under the root account (sudo) is one way to keep things consistent.
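To confirm everything is in place (generic checks, not part of the original write-up), and because the ImagesPipeline used later depends on Pillow, the following commands are worth running now:

sudo pip install Scrapy==1.3.3    # pin to the version used in this article if you want to reproduce it exactly
sudo pip install Pillow           # image library required by scrapy.pipelines.images.ImagesPipeline
scrapy version                    # prints the installed Scrapy version if the install succeeded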

 

II. Writing a spider to crawl all images on mzitu

The official Scrapy documentation and http://cuiqingcai.com/4421.html already explain this very clearly, so rather than repeat them I will simply paste the code:

run.py, the script that launches the crawl:

from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'mzitu'])
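Running python run.py is equivalent to executing scrapy crawl mzitu from the project root; the script just saves typing the command each time.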

items.py, which defines the fields an Item carries:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class MzituScrapyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()
    image_urls = scrapy.Field()
    images = scrapy.Field()
    image_paths = scrapy.Field()

pipelines.py:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


from scrapy import Request
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem
import re


class MzituScrapyPipeline(ImagesPipeline):

    def file_path(self, request, response=None, info=None):
        """
        :param request: the download request for a single image
        :param response: unused here
        :param info: unused here
        :return: path of the image inside its per-gallery folder
        """
        item = request.meta['item']
        # Strip characters that are illegal in folder names (mainly on Windows),
        # otherwise the directory cannot be created.
        folder_name = re.sub(r'[\\/:*?"<>|]', '_', item['name'])
        image_guid = request.url.split('/')[-1]
        filename = u'full/{0}/{1}'.format(folder_name, image_guid)
        return filename

    def get_media_requests(self, item, info):
        """
        :param item: the item returned by spider.py
        :param info: unused here
        :return: one download Request per image URL, carrying the item in meta
        """
        for img_url in item['image_urls']:
            yield Request(img_url, meta={'item': item})

    def item_completed(self, results, item, info):
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item
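For reference, the stock ImagesPipeline stores every file as full/<sha1 of the URL>.jpg; the file_path override above replaces that hash-based name with a folder named after the gallery title plus the image's original file name.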

 

settings.py, the project settings file:

# -*- coding: utf-8 -*-

# Scrapy settings for mzitu_scrapy project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'mzitu_scrapy'

SPIDER_MODULES = ['mzitu_scrapy.spiders']
NEWSPIDER_MODULE = 'mzitu_scrapy.spiders'


ITEM_PIPELINES = {'mzitu_scrapy.pipelines.MzituScrapyPipeline': 300}
IMAGES_STORE = '/tmp/images'
IMAGES_EXPIRES = 1


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'mzitu_scrapy (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'mzitu_scrapy.middlewares.MzituScrapySpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'mzitu_scrapy.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'mzitu_scrapy.pipelines.MzituScrapyPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
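One caveat on these settings: ROBOTSTXT_OBEY = True makes Scrapy honour the target site's robots.txt, so if the crawl appears to fetch nothing because requests are being filtered, that is the first setting to check (whether to disable it is your own call). IMAGES_STORE is where the downloaded images are written, and IMAGES_EXPIRES = 1 means files already downloaded within the last day are not fetched again.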

spider.py, the main spider:

# -*- coding: UTF-8 -*- 

from scrapy import Request
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from mzitu_scrapy.items import MzituScrapyItem


class Spider(CrawlSpider):
    name = 'mzitu'
    allowed_domains = ['mzitu.com']
    start_urls = ['http://www.mzitu.com/']
    my_img_urls = []
    rules = (
        Rule(LinkExtractor(allow=('http://www.mzitu.com/\d{1,6}',), deny=('http://www.mzitu.com/\d{1,6}/\d{1,6}')), callback='parse_item', follow=True),
    )


    def parse_item(self, response):
        """
        :param response: response for one gallery page, as returned by the downloader
        :return: one Request per picture page in the gallery
        """
        # max_num is the number of the last picture page in this gallery
        max_num = response.xpath("descendant::div[@class='main']/div[@class='content']/div[@class='pagenavi']/a[last()-1]/span/text()").extract_first(default="N/A")
        # +1 so that the last page is included as well
        for num in range(1, int(max_num) + 1):
            # page_url is the address of the page holding a single picture
            page_url = response.url + '/' + str(num)
            yield Request(page_url, callback=self.img_url)


    def img_url(self, response):
        """Extract the gallery title and the real image URL, fill the item and return it.
        :param response: response for one picture page
        """
        item = MzituScrapyItem()
        name = response.xpath("descendant::div[@class='main-image']/descendant::img/@alt").extract_first(default="N/A")
        img_urls = response.xpath("descendant::div[@class='main-image']/descendant::img/@src").extract()
        item['image_urls'] = img_urls
        item['name'] = name
        return item
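To summarise the flow: the CrawlSpider rule follows every gallery link of the form http://www.mzitu.com/<id> while denying the per-picture pages <id>/<page>; parse_item reads how many picture pages the gallery has and requests each one; img_url extracts the gallery title and the real image URL into an item; and the ImagesPipeline shown earlier downloads each image into the folder named after that title.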

The project structure is as follows:
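(The original screenshot is not reproduced here. Judging from the module paths used above, the layout is the standard Scrapy one, roughly:)

mzitu_scrapy/
├── scrapy.cfg
├── run.py
└── mzitu_scrapy/
    ├── __init__.py
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── spider.py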

Finally, cd into the project directory in a terminal and run:

python run.py
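With the settings and file_path above, the downloaded images end up under IMAGES_STORE, i.e. in /tmp/images/full/<gallery title>/<original file name>.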

 

If you run into problems, search Baidu for answers; there are quite a few pitfalls.

 

A few points from my side:

1. By default Scrapy names downloaded files after a hash. If you want to keep the original file names, override the file_path method; the override examples published online work perfectly well.

2. What if you want to sort the files into different folders? A folder is just part of the path, so again it is file_path you modify; the only difference is that n files have to be written into the same folder. My approach is that one item['name'] maps to n files, and that name is the folder name. Note that the main program at http://cuiqingcai.com/4421.html is wrong: the folder names do not match the files at all, which is a serious logic problem. I spent a long time hunting for an answer to this, always assuming my file_path override was the wrong idea, before realising that the mismatch between name and files in that main program was the real cause.

3. XPath syntax can be picked up on the spot at http://www.w3school.com.cn/xpath/index.asp; it is not hard. Combine it with a Chrome extension called XPath Helper to write and adjust your expressions (I mostly used the extension to verify whether my expressions were correct).
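Another way to verify expressions, without any browser extension, is Scrapy's interactive shell (a general Scrapy feature, not something from the original article); the gallery URL below is just a placeholder:

scrapy shell "http://www.mzitu.com/12345"
>>> response.xpath("descendant::div[@class='main-image']/descendant::img/@src").extract()
>>> response.xpath("descendant::div[@class='main-image']/descendant::img/@alt").extract_first()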

 

Finally, if the firewall keeps you from reaching the Chrome Web Store to install XPath Helper, you will simply have to find a way around it (a proxy or VPN).

 

If there are mistakes in this article or room for improvement, corrections are welcome; let's discuss.

 

