【python】【scrapy】 Usage Overview (Part 4)


【Meant as a reference for beginners; experienced users are advised not to waste their time here】

 

In the previous article we collected a large batch of proxy IPs. This article shows how to implement a downloader middleware so that the target site is crawled through a randomly chosen proxy IP.

 

The target is www.qunar.com, a travel site that is very popular at the moment. The goal is to fetch all of qunar's SEO pages along with the SEO-related information on each page.

qunar does not have the robots.txt file that most sites provide, so there is no ready-made listing to crawl from. However, its SEO pages are mostly deployed under http://www.qunar.com/routes/. Using that page as the entry point, we recursively crawl every link whose URL contains the routes/ segment, starting with the links on the entry page and continuing through the pages they lead to.

 

Let's get started.

The target information is the site's SEO data, namely the keywords and description meta fields in the page head. The item is defined accordingly:

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/topics/items.html

from scrapy.item import Item, Field

class SitemapItem(Item):
    # define the fields for your item here like:
    # name = Field()
    url = Field()
    keywords = Field()
    description = Field()

Because we want to crawl through proxy IPs, we need to implement our own downloader middleware. Its main job is to pick a random ip:port entry from the proxy-IP file and set it as the proxy for the current request. The code is as follows:

import random

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # Read the proxy list and pick one entry at random for this request.
        fd = open('/home/xxx/services_runenv/crawlers/sitemap/sitemap/data/proxy_list.txt', 'r')
        data = fd.readlines()
        fd.close()
        length = len(data)
        index  = random.randint(0, length - 1)
        item   = data[index]
        arr    = item.split(',')
        # Each line is expected to start with "ip,port", so arr[0]/arr[1] give the proxy address.
        request.meta['proxy'] = 'http://%s:%s' % (arr[0], arr[1])
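For reference, the split(',') above assumes each line of proxy_list.txt starts with the IP and port as its first two comma-separated fields (presumably the format saved by the crawler in the previous article). A hypothetical file would look like this (addresses invented for illustration):

1.2.3.4,8080
5.6.7.8,3128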

 

The most important piece is still the spider itself. It extracts all links on a page, wraps every URL that meets the condition in a Request object and yields it, and at the same time extracts the page's keywords and description and yields them as an item. The code is as follows:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from sitemap.items import SitemapItem

class SitemapSpider(CrawlSpider):
    name = 'sitemap_spider'
    allowed_domains = ['qunar.com']
    start_urls = ['http://www.qunar.com/routes/']

    rules = (
        #Rule(SgmlLinkExtractor(allow=(r'http://www.qunar.com/routes/.*')), callback='parse'),
        #Rule(SgmlLinkExtractor(allow=('http:.*/routes/.*')), callback='parse'),
    )

    def parse(self, response):
        item = SitemapItem()
        x = HtmlXPathSelector(response)

        # Collect every link on the page that contains 'routes',
        # turning relative links into absolute qunar.com URLs.
        raw_urls = x.select("//a/@href").extract()
        urls = []
        for url in raw_urls:
            if 'routes' in url:
                if 'http' not in url:
                    url = 'http://www.qunar.com' + url
                urls.append(url)

        # Follow every matching link recursively.
        for url in urls:
            yield Request(url)

        # Extract the SEO fields of the current page and yield them as an item.
        item['url']         = response.url.encode('UTF-8')
        arr_keywords        = x.select("//meta[@name='keywords']/@content").extract()
        item['keywords']    = arr_keywords[0].encode('UTF-8')
        arr_description     = x.select("//meta[@name='description']/@content").extract()
        item['description'] = arr_description[0].encode('UTF-8')

        yield item
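With the item, middleware and spider in place, the crawl can be started from the project directory with Scrapy's standard command, using the name attribute defined above:

scrapy crawl sitemap_spider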

 

The pipeline file is straightforward: it just writes the scraped data to a file. The code is as follows:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/topics/item-pipeline.html

class SitemapPipeline(object):
    def process_item(self, item, spider):
        data_path = '/home/xxx/services_runenv/crawlers/sitemap/sitemap/data/output/sitemap_data.txt'
        # Append one '#$#'-delimited line (url, keywords, description) per item.
        fd = open(data_path, 'a')
        line = str(item['url']) + '#$#' + str(item['keywords']) + '#$#' + str(item['description']) + '\n'
        fd.write(line)
        fd.close()
        return item
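Since the three fields are joined with the literal '#$#' separator, reading the output back is just a split on that token; a minimal sketch, using the same output path as above:

# Minimal sketch: read the pipeline output back into (url, keywords, description) records.
data_path = '/home/xxx/services_runenv/crawlers/sitemap/sitemap/data/output/sitemap_data.txt'
fd = open(data_path, 'r')
for line in fd:
    url, keywords, description = line.rstrip('\n').split('#$#')
    # ... process the record, e.g. print url
fd.close()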

Finally, here is the settings.py file:

# Scrapy settings for sitemap project
#
# For simplicity, this file contains only the most important settings by
# default. All the other settings are documented here:
#
#     http://doc.scrapy.org/topics/settings.html
#

BOT_NAME = 'sitemap hello,world~!'
BOT_VERSION = '1.0'

SPIDER_MODULES = ['sitemap.spiders']
NEWSPIDER_MODULE = 'sitemap.spiders'
USER_AGENT = '%s/%s' % (BOT_NAME, BOT_VERSION)

DOWNLOAD_DELAY = 0

ITEM_PIPELINES = [
    'sitemap.pipelines.SitemapPipeline'
]

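# The custom ProxyMiddleware (priority 100) runs before the built-in
# HttpProxyMiddleware (110): lower numbers are processed first, so every
# request already carries a random proxy in request.meta['proxy'].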
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
    'sitemap.middlewares.ProxyMiddleware': 100,
    }

CONCURRENT_ITEMS = 128
CONCURRENT_REQUESTS = 64
CONCURRENT_REQUESTS_PER_DOMAIN = 64


LOG_ENABLED = True
LOG_ENCODING = 'utf-8'
LOG_FILE = '/home/xxx/services_runenv/crawlers/sitemap/sitemap/log/sitemap.log'
LOG_LEVEL = 'DEBUG'
LOG_STDOUT = False

 

That wraps up this introduction to scrapy. I haven't touched the more complex use cases yet; once I finish reading the redis source code, I plan to dig into the scrapy source as well. I hope this series helps anyone who is just getting started with scrapy.

 

 

 

