This crawler is designed to scrape JD.com book pages (京東圖書).
The Scrapy framework should already be familiar to most readers; it contains many intricate mechanisms that are beyond the scope of this article.
1. Spider
Tips:
1. XPath syntax can be tricky, but installing the XPath Helper extension in Chrome makes building XPath expressions much easier.
2. Dynamically loaded content, such as prices, cannot be scraped this way, because it is filled in by JavaScript after the static HTML is served.
3. The comment-scraping part of the code below chains XPath calls on selector objects; see the short sketch right after these tips.
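For reference, here is a minimal, self-contained sketch of that chained-call pattern. The HTML snippet is made up purely for illustration; the query pattern mirrors the comment-extraction loop in the spider below.

# A minimal sketch of chained XPath calls on Scrapy selectors; the HTML below
# is illustrative only, but the queries mirror the comment loop in the spider.
from scrapy.selector import Selector

html = '''
<div id="hidcomment">
  <div>
    <div class="i-item">
      <div><strong><a href="//club.jd.com/1.html">a useful review</a></strong></div>
      <div><span>reader</span><span>2016-11-15</span></div>
    </div>
  </div>
</div>
'''

selector = Selector(text=html)
for node_comment in selector.xpath('//*[@id="hidcomment"]/div'):
    # each result is itself a selector, so .xpath() can be chained on it;
    # the leading "." keeps the sub-query relative to the current node
    for attr in node_comment.xpath('.//div[contains(@class, "i-item")]'):
        url = attr.xpath('.//div/strong/a/@href').extract()[0]
        content = attr.xpath('.//div/strong/a/text()').extract()[0]
        print(content + ' -> http:' + url)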
# -*- coding: utf-8 -*-
# import scrapy  # this single import could replace the three below, but is not recommended here
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapy import Request
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from jdbook.items import JDBookItem  # if PyCharm flags this import it is only misreading the project layout; it runs fine


class JDBookSpider(Spider):
    name = "jdbook"
    allowed_domains = ["jd.com"]  # only pages under this domain will be crawled
    start_urls = [
        # starting URL: crawling begins at this item id and iterates downwards towards 0
        "http://item.jd.com/11678007.html"
    ]
    # used to stay logged in: convert the cookie string copied from Chrome into a
    # dict and paste it here (see the helper sketch after this block)
    cookies = {}
    # HTTP headers sent to the server; some sites require a browser-like
    # User-Agent to allow scraping, others do not
    headers = {
        # 'Connection': 'keep-alive',
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36'
    }
    # options controlling how the responses to our requests are handled
    meta = {
        'dont_redirect': True,                # do not follow redirects
        'handle_httpstatus_list': [301, 302]  # status codes we handle ourselves
    }

    def get_next_url(self, old_url):
        """
        Return the URL to crawl next.
        :param old_url: the URL that was just crawled
        :return: the next URL, or None when the id range is exhausted
        """
        # incoming URL format: http://item.jd.com/11678007.html
        parts = old_url.split('/')  # split the URL on '/'
        old_item_id = int(parts[3].split('.')[0])
        new_item_id = old_item_id - 1
        if new_item_id == 0:
            # once the id reaches 0 the whole range has been walked and the spider can stop
            return
        new_url = '/'.join([parts[0], parts[1], parts[2], str(new_item_id) + '.html'])  # build the new URL
        return str(new_url)

    def start_requests(self):
        """
        Overridden to issue the first Request.
        """
        # request self.start_urls[0] with our headers and cookies; the response
        # is delivered to the parse callback
        yield Request(self.start_urls[0],
                      callback=self.parse,
                      headers=self.headers,
                      cookies=self.cookies,
                      meta=self.meta)

    def parse(self, response):
        """
        Parse a single book detail page.
        """
        selector = Selector(response)
        item = JDBookItem()
        extractor = LxmlLinkExtractor(allow=r'http://item.jd.com/\d.*html')
        link = extractor.extract_links(response)  # links to other item pages (not used further here)
        try:
            item['_id'] = response.url.split('/')[3].split('.')[0]
            item['url'] = response.url
            item['title'] = selector.xpath('/html/head/title/text()').extract()[0]
            item['keywords'] = selector.xpath('/html/head/meta[2]/@content').extract()[0]
            item['description'] = selector.xpath('/html/head/meta[3]/@content').extract()[0]
            item['img'] = 'http:' + selector.xpath('//*[@id="spec-n1"]/img/@src').extract()[0]
            item['channel'] = selector.xpath('//*[@id="root-nav"]/div/div/strong/a/text()').extract()[0]
            item['tag'] = selector.xpath('//*[@id="root-nav"]/div/div/span[1]/a[1]/text()').extract()[0]
            item['sub_tag'] = selector.xpath('//*[@id="root-nav"]/div/div/span[1]/a[2]/text()').extract()[0]
            item['value'] = selector.xpath('//*[@id="root-nav"]/div/div/span[1]/a[2]/text()').extract()[0]
            comments = list()
            node_comments = selector.xpath('//*[@id="hidcomment"]/div')
            for node_comment in node_comments:
                comment = dict()
                node_comment_attrs = node_comment.xpath('.//div[contains(@class, "i-item")]')
                for attr in node_comment_attrs:
                    url = attr.xpath('.//div/strong/a/@href').extract()[0]
                    comment['url'] = 'http:' + url
                    content = attr.xpath('.//div/strong/a/text()').extract()[0]
                    comment['content'] = content
                    time = attr.xpath('.//div/span[2]/text()').extract()[0]
                    comment['time'] = time
                comments.append(comment)
            item['comments'] = comments
        except Exception as ex:
            print('something wrong: %s' % ex)
        print('success, go for next')
        yield item
        next_url = self.get_next_url(response.url)  # response.url is the URL of the original request
        if next_url is not None:  # a new URL was returned, keep crawling
            yield Request(next_url,
                          callback=self.parse,
                          headers=self.headers,
                          cookies=self.cookies,
                          meta=self.meta)
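The cookies attribute above expects a dict. A small helper like the following can convert the raw cookie string copied from Chrome's request headers; the function name and the sample string are only illustrative, not part of the spider.

# Illustrative helper: turn a "k1=v1; k2=v2" cookie string copied from Chrome
# into the dict format that the spider's cookies attribute expects.
def cookie_str_to_dict(cookie_str):
    cookies = {}
    for pair in cookie_str.split(';'):
        if '=' in pair:
            key, _, value = pair.strip().partition('=')
            cookies[key] = value
    return cookies

# cookies = cookie_str_to_dict('__jda=...; __jdb=...')  # paste your own string here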
2. Storage pipeline: pipelines
Tips:
1. This pipeline stores the scraped data in MongoDB, which is more reliable than writing local files, especially when running multiple instances or a distributed crawl.
# -*- coding: utf-8 -*-
import pymongo
from datetime import datetime
from scrapy.exceptions import DropItem


class JDBookPipeline(object):
    def __init__(self, mongo_uri, mongo_db, mongo_coll):
        self.ids = set()
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db
        self.mongo_coll = mongo_coll

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB'),
            mongo_coll=crawler.settings.get('MONGO_COLL')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        # if the database requires a username/password:
        # self.client.admin.authenticate(settings['MONGO_USER'], settings['MONGO_PSW'])
        self.db = self.client[self.mongo_db]
        self.coll = self.db[self.mongo_coll]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        if item['_id'] in self.ids:
            raise DropItem("Duplicate item found: %s" % item)
        if item['channel'] != u'圖書':
            # discard anything that is not in the 圖書 (books) channel
            raise DropItem('not a book')
        else:
            # self.coll.insert(dict(item))  # use this if a fixed collection name is fine
            # otherwise, build the collection name from the item class and the date:
            self.ids.add(item['_id'])
            collection_name = item.__class__.__name__ + '_' + str(datetime.now().date()).replace('-', '')
            self.db[collection_name].insert(dict(item))
        return item
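The pipeline reads its connection parameters from the project settings via from_crawler, and it must also be enabled there so Scrapy actually runs it. A minimal settings.py fragment might look like this; the URI and names below are placeholders for your own setup.

# jdbook/settings.py (fragment) -- the values below are placeholders
MONGO_URI = 'mongodb://localhost:27017'
MONGO_DB = 'jdbook'
MONGO_COLL = 'items'

# enable the pipeline so Scrapy actually runs it
ITEM_PIPELINES = {
    'jdbook.pipelines.JDBookPipeline': 300,
}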
3. Data structure: items
Tips:
1. Scrapy's Item definitions will make you smile; they look just like Django models.
# -*- coding: utf-8 -*-
import scrapy


class JDBookItem(scrapy.Item):
    _id = scrapy.Field()
    title = scrapy.Field()
    url = scrapy.Field()
    keywords = scrapy.Field()
    description = scrapy.Field()
    img = scrapy.Field()
    channel = scrapy.Field()
    tag = scrapy.Field()
    sub_tag = scrapy.Field()
    value = scrapy.Field()
    comments = scrapy.Field()
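Like Django models, these item classes declare their fields, and instances support dict-style access, which is exactly what the spider and the pipeline above rely on. A quick sketch:

# items behave like dicts: fields are set by key and the whole item can be
# converted with dict(), which is how the pipeline hands it to pymongo
from jdbook.items import JDBookItem

item = JDBookItem()
item['_id'] = '11678007'
item['title'] = 'some book'
print(dict(item))  # -> {'_id': '11678007', 'title': 'some book'} (key order may vary)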
4. Deploying with scrapyd
Many people want to run distributed crawls, for example by triggering Scrapy crawl jobs from Celery tasks.
Unfortunately, Scrapy does not make that straightforward on its own. A better approach is to let scrapyd manage the crawl jobs.
Make sure the following three packages are installed in your Python environment:
source kangaroo.env/bin/activate
pip install scrapy scrapyd scrapyd-client
Start the scrapyd daemon from your spider project's directory:
scrapyd
Next, register your spider. First, write the configuration file scrapy.cfg:
# Automatically created by: scrapy startproject
#
# For more information about the [deploy] section see:
# https://scrapyd.readthedocs.org/en/latest/deploy.html

[settings]
default = jdbook.settings

[deploy:jdbook]
url = http://localhost:6800/
project = jdbook
Then register it:
# deploy (register) the spider project
scrapyd-deploy -p jdbook -d jdbook
# list the configured deploy targets
scrapyd-deploy -l
Output: jdbook http://localhost:6800/
The spider is now registered.
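If you want to double-check the registration, scrapyd's HTTP API can list the deployed projects and their spiders (host and port as configured above):

curl http://localhost:6800/listprojects.json
curl http://localhost:6800/listspiders.json?project=jdbook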
Starting/stopping the crawler:
curl -XPOST http://10.94.99.55:6800/schedule.json -d project=jdbook -d spider=jdbook
Output: {"status": "ok", "jobid": "9d50b3dcabfc11e69aa3525400128d39", "node_name": "kvm33093.sg"}
curl -XPOST http://10.94.99.55:6800/cancel.json -d project=jdbook -d job=9d50b3dcabfc11e69aa3525400128d39
Output: {"status": "ok", "prevstate": "running", "node_name": "kvm33093.sg"}
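The jobid returned by schedule.json can also be used to poll a job's state (pending, running, or finished) through scrapyd's listjobs.json endpoint:

curl http://10.94.99.55:6800/listjobs.json?project=jdbook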
At this point you can trigger the crawler from a Celery task simply by sending the requests shown above; a minimal sketch follows below.
And since the individual spiders can live on different machines, this gives you distributed crawling.
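As a concrete illustration, here is a minimal sketch of such Celery tasks. The broker URL, the scrapyd host and the task names are placeholders, and it assumes the requests library is available.

# Minimal sketch: trigger and cancel the jdbook spider on scrapyd from Celery tasks.
# The broker URL and scrapyd host below are placeholders for your own setup.
import requests
from celery import Celery

app = Celery('crawl_tasks', broker='redis://localhost:6379/0')

SCRAPYD = 'http://10.94.99.55:6800'

@app.task
def run_jdbook_spider():
    resp = requests.post(SCRAPYD + '/schedule.json',
                         data={'project': 'jdbook', 'spider': 'jdbook'})
    return resp.json().get('jobid')  # keep the jobid so the job can be cancelled later

@app.task
def cancel_jdbook_job(jobid):
    resp = requests.post(SCRAPYD + '/cancel.json',
                         data={'project': 'jdbook', 'job': jobid})
    return resp.json().get('prevstate')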