Configuring a Custom Scrapy Project Template in Python

1. Locate the Scrapy project template files

<Python installation directory>\Python\Lib\site-packages\scrapy\templates\project\module
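If you are not sure where Scrapy is installed, the template directory can also be located from the installed package itself. A minimal sketch (the exact site-packages path differs per environment):

import os
import scrapy

# The project templates ship inside the installed scrapy package.
templates_dir = os.path.join(os.path.dirname(scrapy.__file__),
                             "templates", "project", "module")
print(templates_dir)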

 

2. Write the custom template files

settings.py.tmpl: the project's settings template

# Scrapy settings for $project_name project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = '$project_name'

SPIDER_MODULES = ['$project_name.spiders']
NEWSPIDER_MODULE = '$project_name.spiders'

'''
Scrapy provides 5 log levels:
CRITICAL - critical errors
ERROR - regular errors
WARNING - warning messages
INFO - informational messages
DEBUG - debugging messages
'''
LOG_LEVEL = 'WARNING'

'''
Some websites do not like being visited by crawlers, so they inspect the client making the request;
if it looks like a crawler, i.e. not a person clicking through pages, the site may refuse further access.
To keep the program running normally, the crawler therefore needs to hide its identity.
This can be done through the User Agent (UA for short), the header that identifies the client software.
'''
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = '$project_name (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0'
'''
A pool of User-Agent strings to rotate through. Note that the USER_AGENT setting
itself must be a plain string, so rotating means picking one with random.choice
(which requires "import random" at the top of this file); the dict form below is
shaped like a request-headers entry and is kept only for reference.
USER_AGENT = {"User-Agent": random.choice(
    ['Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6',
     'Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5',
     'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER',
     'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
     'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
     'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
     'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)',
     'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
     'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)',
     'Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)',
     'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20',
     'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6',
     'Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10',
     'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER',
     'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1',
     'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)',
     'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12',
     'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)',
     'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1',
     'Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.3 Mobile/14E277 Safari/603.1.30',
     'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'])}
'''


'''
Obey robots.txt rules
robots.txt is a file, following the Robots protocol, stored on the website's server.
Its purpose is to tell search-engine crawlers which directories of the site should not be crawled and indexed. When Scrapy starts, it fetches the site's robots.txt first and uses it to decide which parts of the site it may crawl.
We are not building a search engine, though, and in some cases the content we want is exactly what robots.txt disallows. So in those situations this option can be set to False, i.e. the Robots protocol is not obeyed.
'''
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1  # delay between downloads, helps avoid getting banned
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    '$project_name.middlewares.${ProjectName}SpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    '$project_name.middlewares.${ProjectName}DownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# Disabling an extension (avoids twisted.internet.error.CannotListenError)
EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    '$project_name.pipelines.${ProjectName}Pipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
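The $project_name and ${ProjectName} placeholders in the .tmpl files are filled in when scrapy startproject renders the templates; the substitution behaves like Python's string.Template. A minimal sketch of the idea (the template string here is illustrative, not the real file contents):

from string import Template

tmpl = "BOT_NAME = '$project_name'\nclass ${ProjectName}Pipeline:\n    pass"
print(Template(tmpl).substitute(project_name="pyProject", ProjectName="PyProject"))
# BOT_NAME = 'pyProject'
# class PyProjectPipeline:
#     pass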

pipelines.py.tmpl: the project's item pipeline template

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
import pymysql

'''
@Author: System
@Date: newTimes
@Description: TODO pipeline configuration
'''
class ${ProjectName}Pipeline:
    '''
    @Author: System
    @Date: newTimes
    @Description: TODO set up the database connection >>> open the connection
    '''

    def open_spider(self, spider):
        # Database connection
        self.conn = pymysql.connect(
            host='127.0.0.1',  # server IP
            port=3306,  # server port: an integer, so no quotes needed
            user='root',
            password='123456',
            db='database',
            charset='utf8')
        # Get a cursor object that can execute SQL statements
        self.cursor = self.conn.cursor()  # result sets are returned as tuples by default
        print(spider.name, 'database connection opened, spider starting...')

    '''
    @Author: System
    @Date: newTimes
    @Description: TODO set up the database connection >>> close the connection
    '''

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()
        # self.file.close()
        print(spider.name, 'database connection closed, spider finished...')

    '''
    @Author: System
    @Date: newTimes
    @Description: TODO every item enters through this method by default
    '''

    def process_item(self, item, spider):
        print("into Pipeline's process_item")
        # branch by spider name ('first_py' is only an example; the test spider below is named 'test')
        if spider.name == 'first_py':
            print("into pipeline if")
            self.save_test(item)
        else:
            print("into pipeline else")
        return item

    '''
    @Author: System
    @Date: newTimes
    @Description: TODO save a row into the test table
    '''

    def save_test(self, item):
        print("into save_test")
        # Check whether the row already exists in the database; save it only if it does not
        # Define the SQL statement to execute
        sql_count = 'select count(id) from test where name = %s'
        # Bind the parameters and execute the SQL statement
        self.cursor.execute(sql_count, [item['name']])
        # Fetch the query result >>> a single row
        results = self.cursor.fetchone()
        if 0 == results[0]:
            try:
                '''
                print(item['name'])
                print(item['type'])
                print(item['content'])
                '''
                sql = "insert into test(name, type, content) values(%s, %s, %s)"
                self.cursor.execute(sql, [item['name'], item['type'], item['content']])
                thisId = self.cursor.lastrowid
                print('saved into the test table, id: ' + repr(thisId))
                self.conn.commit()
            except Exception as ex:
                print("an exception occurred: %s" % ex)
                print('rolling back')
                self.conn.rollback()
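The pipeline above assumes a MySQL table named test with name, type and content columns plus an auto-increment id. A possible schema, created here through pymysql so the example stays in Python; the column types are assumptions and should be adjusted to your data:

import pymysql

conn = pymysql.connect(host='127.0.0.1', port=3306, user='root',
                       password='123456', db='database', charset='utf8')
with conn.cursor() as cursor:
    # id, name, type and content match what the pipeline queries and inserts.
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS test (
            id      INT AUTO_INCREMENT PRIMARY KEY,
            name    VARCHAR(255),
            type    VARCHAR(64),
            content TEXT
        )
    """)
conn.commit()
conn.close()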

middlewares.py.tmpl: the middleware template (keep the default; no changes needed)

items.py.tmpl: the item definition template

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

'''
@Author: System
@Date: newTimes
@Description: TODO custom item fields
'''
class ${ProjectName}Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass  # placeholder


'''
@Author: System
@Date: newTimes
@Description: TODO maps to the test table in the database
'''


class TestItem(scrapy.Item):
    # name (maps to the name column of the test table)
    name = scrapy.Field()
    # type
    type = scrapy.Field()
    # detail content
    content = scrapy.Field()
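Scrapy items behave like dictionaries with a fixed set of allowed keys, so the fields declared above are filled and read as in this small sketch (sample values only):

item = TestItem()
item['name'] = 'sample name'      # only fields declared on the item are allowed
item['type'] = 'sample type'
item['content'] = 'sample content'
print(dict(item))                 # {'name': 'sample name', 'type': 'sample type', ...}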

spiders/test.py: an optional test spider (include it or not, as you like)

from urllib.request import urlopen

import scrapy
from bs4 import BeautifulSoup

import requests

'''
@Author: System
@Date: newTimes
@Description: TODO test spider
'''


class TestSpider(scrapy.Spider):
    # must match the spider name used in the run.py launcher
    name = 'test'
    # domains the spider is allowed to visit
    allowed_domains = ['baidu.com']
    # URLs to start crawling from
    start_urls = ['https://image.baidu.com/']
    # custom variable
    link = 'https://image.baidu.com'

    '''
    @Author: System
    @Date: newTimes
    @Description: TODO Scrapy enters this method by default for each start URL
    '''
    def parse(self, response, link=link):  # link=link binds the class attribute as a default argument
        '''ways to fetch page data in Python >>> start (each assignment below overwrites data; they only demonstrate the different approaches)'''
        # Method 1: take it from the Scrapy response
        data = response.text
        # Method 2: fetch it with requests
        data = requests.get(link)
        data = data.text
        # Method 3: fetch it with urlopen
        data = urlopen(link).read()
        # Beautiful Soup converts the input document to Unicode and its output to UTF-8
        data = BeautifulSoup(data, "html.parser")
        # Method 4: extract it with an XPath selector
        data = response.xpath('//div[@id="endText"]').get()
        # Beautiful Soup converts the input document to Unicode and its output to UTF-8
        data = BeautifulSoup(data, 'html.parser')
        print(data)
        '''ways to fetch page data in Python >>> end'''
        # hand off to the getLinkContent callback
        request = scrapy.Request(link, callback=self.getLinkContent)
        # pass values along via the request meta
        request.meta['link'] = link
        request.meta['data'] = data
        yield request

    '''
    @Author: System
    @Date: newTimes
    @Description: TODO build and save the data fetched from the link
    '''
    def getLinkContent(self, response):
        print('starting to fetch the XXX link...')
        print(response.meta['link'])
        content = response.xpath('//div[@id="content"]')
        content = "".join(content.extract())
        # Instantiate the TestItem class; the field names are the ones defined in items.py (note: import TestItem yourself)
        items = TestItem(name='name',
                         type=1,
                         content=content)
        # yield the item so it is handed to the pipeline
        yield items

run.py: a unified launcher for the project

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
'''
@Author: System
@Date: newTimes
@Description: TODO run one or more spiders in the same process
'''
process = CrawlerProcess(get_project_settings())
'''Start the project and begin crawling'''
process.crawl('test')

process.start()
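The docstring in run.py mentions running several spiders in one process; with CrawlerProcess that simply means queueing each spider before calling start(). A sketch, where the second spider name is hypothetical:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('test')            # the spider defined in spiders/test.py
process.crawl('other_spider')    # hypothetical second spider registered in this project
process.start()                  # blocks until all queued crawls have finished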

3. Test the custom template

Create a new pyProject project with Scrapy: scrapy startproject pyProject

Open the newly created pyProject project in PyCharm.

Things that still need to be adjusted:

1. The database configuration in pipelines.py

2. The imports in the test spider must be added manually (note: the test spider is for reference only)

3. Configure run.py as the launcher

Click Add Configurations in the top-right corner of PyCharm and create a Python run configuration whose script points at run.py.

 

