1. What is XPath?
XPath, the XML Path Language, is a language for addressing parts of an XML document. XPath itself is a W3C standard.
An XML document (HTML can be treated as XML here) is a tree made up of nodes. For example, a fragment of HTML scraped from the web:
<div class="post-114638 post type-post status-publish format-standard hentry category-it-tech tag-linux odd" id="post-114638">
<!-- BEGIN .entry-header -->
<div class="entry-header">
<h1>能從遠程獲得樂趣的 Linux 命令</h1>
</div>
<div class="entry-meta">
<p class="entry-meta-hide-on-mobile">
2019/01/13 · <a href="http://blog.jobbole.com/category/it-tech/" rel="category tag">IT技術</a>
· <a href="http://blog.jobbole.com/tag/linux/">Linux</a>
</p>
</div>
<!-- END .entry-meta -->
<div class="entry"></div>
<div class="textwidget"></div>
</div>
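To make the node-tree idea concrete, here is a minimal sketch (the variable name html and the shortened snippet are just for illustration) that selects a couple of nodes from this fragment with Scrapy's Selector:

from scrapy.selector import Selector

# A shortened version of the fragment above, held in a plain string.
html = '<div class="entry-header"><h1>能從遠程獲得樂趣的 Linux 命令</h1></div>'

sel = Selector(text=html)
# Element node: the <h1> under the entry-header div.
print(sel.xpath('//div[@class="entry-header"]/h1').extract())
# Text node: just the title text.
print(sel.xpath('//div[@class="entry-header"]/h1/text()').extract())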
2. Debugging XPath with the shell Scrapy provides
Building a Selector object yourself
There are several ways to construct a Selector object; here we cover only one that is simple to use and convenient for debugging and learning XPath.
- Creating a Selector object:
In [1]: from scrapy.selector import Selector
In [2]: body = "<book><author>Tom John</author></book>"
In [3]: selector = Selector(text=body)
In [4]: selector
Out[4]: <Selector xpath=None data='<html><body><book><author>Tom John</auth'>
- Selecting & extracting data:
In [5]: selector.xpath('//book/author/text()')
Out[5]: [<Selector xpath='//book/author/text()' data='Tom John'>]
In [6]: selector.xpath('string(//author)')
Out[6]: [<Selector xpath='string(//author)' data='Tom John'>]
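As a side note (not part of the session above), a SelectorList also offers extract_first(), which returns the first matching string, or None when nothing matches; a minimal sketch with the same body:

from scrapy.selector import Selector

selector = Selector(text="<book><author>Tom John</author></book>")
# extract() returns a list of strings.
selector.xpath('//book/author/text()').extract()        # ['Tom John']
# extract_first() returns the first string, or None if nothing matched.
selector.xpath('//book/author/text()').extract_first()  # 'Tom John'
selector.xpath('//book/title/text()').extract_first()   # None: no <title> node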
Using regular expressions:
>>> response.xpath('//*[@id="post-114638"]/div[3]/div[5]/span[2]/text()').re('\d*')
['', '1', '', '', '', '']
>>> response.xpath('//*[@id="post-114638"]/div[3]/div[5]/span[2]/text()')
[<Selector xpath='//*[@id="post-114638"]/div[3]/div[5]/span[2]/text()' data=' 1 收藏'>]
>>> response.xpath('//*[@id="post-114638"]/div[3]/div[5]/span[2]/text()').re('\d+')
['1']
>>> response.xpath('//*[@id="post-114638"]/div[3]/div[5]/span[2]/text()').re('\d+')[0]
'1'
>>> response.xpath('//*[@id="post-114638"]/div[3]/div[5]/span[2]/text()').re('.*(\d+).*')[0]
'1'
>>> response.xpath('//*[@id="post-114638"]/div[3]/div[5]/span[2]/text()').re('.*(\d+).*').group(1)
Traceback (most recent call last):
File "<console>", line 1, in <module>
AttributeError: 'list' object has no attribute 'group'
>>>
Note that .re() returns a list of strings, not a match object, which is why calling .group() fails; index into the list instead.
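Parsel's SelectorList also provides re_first(), which returns the first regex match directly and skips the indexing step; a sketch assuming the same shell session:

# re() returns a list of strings; re_first() returns the first match or None.
nums = response.xpath('//*[@id="post-114638"]/div[3]/div[5]/span[2]/text()')
nums.re('\d+')        # ['1']
nums.re_first('\d+')  # '1'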
Using the shell Scrapy provides
Debug using the shell Scrapy provides by running scrapy shell http://blog.jobbole.com/114638/:
(Py3_spider) D:\SpiderProject\spider_pjt1>scrapy shell http://blog.jobbole.com/114638/
2019-01-31 10:37:25 [scrapy.utils.log] INFO: Scrapy 1.5.2 started (bot: spider_pjt1)
2019-01-31 10:37:25 [scrapy.utils.log] INFO: Versions: lxml 4.3.0.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 18.9.0, Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.5, Platform Windows-10-10.0.17763-SP0
2019-01-31 10:37:25 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'spider_pjt1', 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'LOGSTATS_INTERVAL': 0, 'NEWSPIDER_MODULE': 'spider_pjt1.spiders', 'SPIDER_MODULES': ['spider_pjt1.spiders']}
2019-01-31 10:37:25 [scrapy.extensions.telnet] INFO: Telnet Password: 4f8f06a70c3e7ec1
2019-01-31 10:37:25 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole']
2019-01-31 10:37:26 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-01-31 10:37:26 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-01-31 10:37:26 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-01-31 10:37:26 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2019-01-31 10:37:26 [scrapy.core.engine] INFO: Spider opened
2019-01-31 10:37:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://blog.jobbole.com/114638/> (referer: None)
[s] Available Scrapy objects:
[s] scrapy scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s] crawler <scrapy.crawler.Crawler object at 0x00000228574EFB70>
[s] item {}
[s] request <GET http://blog.jobbole.com/114638/>
[s] response <200 http://blog.jobbole.com/114638/>
[s] settings <scrapy.settings.Settings object at 0x00000228574EFA90>
[s] spider <JobboleSpider 'jobbole' at 0x22857795d68>
[s] Useful shortcuts:
[s] fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s] fetch(req) Fetch a scrapy.Request and update local objects
[s] shelp() Shell help (print this help)
[s] view(response) View response in a browser
>>>
Now we can debug right here. For example:
>>> title = response.xpath('//*[@id="post-114638"]/div[1]/h1')
>>> title
[<Selector xpath='//*[@id="post-114638"]/div[1]/h1' data='<h1>能從遠程獲得樂趣的 Linux 命令</h1>'>]
>>>
>>> title.extract()
['<h1>能從遠程獲得樂趣的 Linux 命令</h1>']
>>> title.extract()[0]
'<h1>能從遠程獲得樂趣的 Linux 命令</h1>'
>>>
Since xpath() returns a selector object (in fact a SelectorList), we can keep operating on the result:
>>> title.xpath('//div[@class="entry-header"]/h1/text()')
[<Selector xpath='//div[@class="entry-header"]/h1/text()' data='能從遠程獲得樂趣的 Linux 命令'>]
>>> title.xpath('//div[@class="entry-header"]/h1/text()').extract()
['能從遠程獲得樂趣的 Linux 命令']
>>> title.xpath('//div[@class="entry-header"]/h1/text()').extract()[0]
'能從遠程獲得樂趣的 Linux 命令'
>>>
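One caveat when chaining: an XPath expression starting with // always searches from the document root, even when called on a sub-selector, so title.xpath('//div...') above actually re-scans the whole page. Prefix the expression with a dot to search relative to the current node; a quick sketch:

header = response.xpath('//div[@class="entry-header"]')
# '//h1' searches the whole document, not just inside header.
header.xpath('//h1/text()').extract()
# './/h1' is evaluated relative to the selected div.
header.xpath('.//h1/text()').extract()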
Note that text() only captures the text directly inside a tag: the first text node ends where the next nested tag begins, so the text after that tag does not appear in the first extracted item. For example, using the same HTML snippet shown in section 1:
<div class="post-114638 post type-post status-publish format-standard hentry category-it-tech tag-linux odd" id="post-114638">
<!-- BEGIN .entry-header -->
<div class="entry-header">
<h1>能從遠程獲得樂趣的 Linux 命令</h1>
</div>
<div class="entry-meta">
<p class="entry-meta-hide-on-mobile">
2019/01/13 · <a href="http://blog.jobbole.com/category/it-tech/" rel="category tag">IT技術</a>
· <a href="http://blog.jobbole.com/tag/linux/">Linux</a>
</p>
</div>
<!-- END .entry-meta -->
<div class="entry"></div>
<div class="textwidget"></div>
</div>
Extracting text with text():
>>> response.xpath('//*[@id="post-114638"]/div[2]/p').extract()[0]
'<p class="entry-meta-hide-on-mobile">\r\n\r\n 2019/01/13 · <a href="http://blog.jobbole.com/category/it-tech/" rel="category tag">IT技術</a>\r\n \r\n \r\n\r\n \r\n · <a href="http://blog.jobbole.com/tag/linux/">Linux</a>\r\n \r\n</p>'
>>>
>>> response.xpath('//*[@id="post-114638"]/div[2]/p/text()').extract()[0]
'\r\n\r\n 2019/01/13 · '
>>>
>>> response.xpath('//*[@id="post-114638"]/div[2]/p/text()').extract()[0].replace('·','').strip()
'2019/01/13'
>>>
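If you need all of the text inside the <p>, including the pieces after the nested <a> tags, you can select every descendant text node, or let the XPath string() function concatenate them; a sketch assuming the same session:

# Every descendant text node of the <p>, joined into one string.
parts = response.xpath('//*[@id="post-114638"]/div[2]/p//text()').extract()
''.join(parts).strip()
# Or let XPath do the concatenation, as with string(//author) earlier.
response.xpath('string(//*[@id="post-114638"]/div[2]/p)').extract_first().strip()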
Extension
Let's analyze the code of the following two files.
This is the file generated automatically by scrapy genspider jobbole blog.jobbole.com (spider_pjt1\spider_pjt1\spiders\jobbole.py):
# -*- coding: utf-8 -*-
import scrapy


class JobboleSpider(scrapy.Spider):
    name = 'jobbole'
    allowed_domains = ['blog.jobbole.com']
    start_urls = ['http://blog.jobbole.com/']

    def parse(self, response):
        pass
This class inherits from scrapy.Spider. Let's look at a piece of scrapy.Spider's code (Envs\Py3_spider\Lib\site-packages\scrapy\spiders\__init__.py):
    def start_requests(self):
        cls = self.__class__
        if method_is_overridden(cls, Spider, 'make_requests_from_url'):
            warnings.warn(
                "Spider.make_requests_from_url method is deprecated; it "
                "won't be called in future Scrapy releases. Please "
                "override Spider.start_requests method instead (see %s.%s)." % (
                    cls.__module__, cls.__name__
                ),
            )
            for url in self.start_urls:
                yield self.make_requests_from_url(url)
        else:
            for url in self.start_urls:
                yield Request(url, dont_filter=True)

    def make_requests_from_url(self, url):
        """ This method is deprecated. """
        return Request(url, dont_filter=True)
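As that deprecation warning says, new code should override start_requests() rather than make_requests_from_url(). A minimal sketch of what such an override could look like in our spider (the URL list here is just an example):

import scrapy

class JobboleSpider(scrapy.Spider):
    name = 'jobbole'
    allowed_domains = ['blog.jobbole.com']

    def start_requests(self):
        # Yield Requests directly instead of relying on start_urls.
        for url in ['http://blog.jobbole.com/114638/']:
            yield scrapy.Request(url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        pass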
After the Scrapy downloader (DOWNLOADER) finishes downloading, execution comes back and parse() runs next. The response in parse(self, response) is similar to the response object in Django.
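A few attributes of that response object come up constantly; a sketch, not an exhaustive list:

def parse(self, response):
    print(response.url)      # the URL that was downloaded
    print(response.status)   # HTTP status code, e.g. 200
    print(response.headers)  # response headers
    print(response.text)     # decoded page text (response.body holds raw bytes)
    print(response.xpath('//h1/text()').extract_first())  # Selector shortcut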
PyCharm has no template for Scrapy projects, so we can define a main.py file of our own that invokes the command line to do the debugging. It relies on a built-in function Scrapy provides for executing Scrapy commands. The main.py file (spider_pjt1\main.py) is shown in section 3 below.
3. Debugging in PyCharm (or another IDE)
We take PyCharm as the example; other IDEs work similarly.
Preamble:
The command that starts a spider is scrapy crawl spider_name, where spider_name matches the name attribute in JobboleSpider. Make sure to run the command from the directory containing scrapy.cfg.
(Py3_spider) D:\SpiderProject>cd spider_pjt1
(Py3_spider) D:\SpiderProject\spider_pjt1>scrapy crawl jobbole
...
ModuleNotFoundError: No module named 'win32api'
It complains that the win32api module is missing. Install pypiwin32; this problem generally appears only on Windows.
You can also install it from the Douban mirror: pip install -i https://pypi.douban.com/simple pypiwin32
(Py3_spider) D:\SpiderProject\spider_pjt1>pip install pypiwin32
Collecting pypiwin32
Downloading https://files.pythonhosted.org/packages/d0/1b/2f292bbd742e369a100c91faa0483172cd91a1a422a6692055ac920946c5/pypiwin32-223-py3-none-any.whl
Collecting pywin32>=223 (from pypiwin32)
Downloading https://files.pythonhosted.org/packages/a3/8a/eada1e7990202cd27e58eca2a278c344fef190759bbdc8f8f0eb6abeca9c/pywin32-224-cp37-cp37m-win_amd64.whl (9.0MB)
100% |████████████████████████████████| 9.1MB 32kB/s
Installing collected packages: pywin32, pypiwin32
Successfully installed pypiwin32-223 pywin32-224
Now the spider starts normally:
(Py3_spider) D:\SpiderProject\spider_pjt1>scrapy crawl jobbole
...
2019-01-31 08:13:48 [scrapy.core.engine] INFO: Spider closed (finished)
(Py3_spider) D:\SpiderProject\spider_pjt1>
Getting to the point:
The code of our main.py is as follows:
# -*- coding: utf-8 -*-
# @Author : One Fine
# @File   : main.py

from scrapy.cmdline import execute
import sys
import os

# Add the project root to sys.path so the scrapy command runs inside the project.
# os.path.abspath(__file__) is the absolute path of this file;
# os.path.dirname(...) is the directory containing it, i.e. the project root.
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

# execute() runs a Scrapy command; it takes the command as a list of arguments.
execute(["scrapy", "crawl", "jobbole"])
Next, set the ROBOTSTXT_OBEY option in settings.py to False, so that Scrapy does not read the site's robots.txt during the crawl and does not filter out URLs according to the robots protocol:
ROBOTSTXT_OBEY = False
Then put a breakpoint inside the parse method of spider_pjt1\spider_pjt1\spiders\jobbole.py, and you can debug the Scrapy project from main.py.
Note: what F12 shows is the structure after the page has fully loaded, which may differ from "view page source"; the latter shows the code as produced at the time of the HTTP request.
Below we use XPath to extract data from the page:
import re  # needed for the regex matching below

def parse_detail(self, response):
    # Get the title.
    # //*[@id="post-112614"]/div[1]/h1/text() would also fetch the value inside the tag.
    title = response.xpath('//*[@class="entry-header"]/h1/text()').extract()[0]
    # print('title', title)
    # re1_selector = response.xpath('//div[@class="entry_header"]/h1/text()')

    # Get the date; to get the plain string:
    # time.extract()[0].strip().replace("·", "").strip()
    create_date = response.xpath('//*[@class="entry-meta-hide-on-mobile"]/text()').extract()[0].strip().replace("·", "").strip()

    # Get the number of upvotes.
    praise_nums = response.xpath("//span[contains(@class,'vote-post-up')]/h10/text()").extract()[0]

    # Get the bookmark text; it contains both the count and the word '收藏' (bookmark).
    fav_nums = response.xpath("//span[contains(@class,'bookmark-btn')]/text()").extract()[0].strip()
    match_re = re.match('.*?(\d+).*', fav_nums)
    if match_re:
        # Keep only the bookmark count.
        fav_nums = int(match_re.group(1))
    else:
        fav_nums = 0

    # Get the number of comments.
    comment_nums = response.xpath('//*[@class="entry-meta-hide-on-mobile"]/a[2]/text()').extract()[0].strip()
    match_re = re.match('.*?(\d+).*', comment_nums)
    if match_re:
        # Keep only the comment count.
        comment_nums = int(match_re.group(1))
    else:
        comment_nums = 0

    # Get the article's category tags, dropping the '評論' (comments) entry.
    tag_list = response.xpath("//p[@class='entry-meta-hide-on-mobile']/a/text()").extract()
    tag_list = [element for element in tag_list if not element.strip().endswith('評論')]
    tag = ','.join(tag_list)

    content = response.xpath('//*[@class="entry"]').extract()[0]
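One robustness note: extract()[0] raises IndexError whenever an expression matches nothing. A defensive sketch of the same pattern using extract_first() with a default (the names here mirror the method above):

import re

def parse_detail(self, response):
    # extract_first(default=...) avoids IndexError when a field is missing.
    title = response.xpath('//*[@class="entry-header"]/h1/text()').extract_first(default='')
    fav_text = response.xpath("//span[contains(@class,'bookmark-btn')]/text()").extract_first(default='')
    match_re = re.match('.*?(\d+).*', fav_text)
    fav_nums = int(match_re.group(1)) if match_re else 0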