[Intended as a reference for beginners; experienced readers are advised not to waste their time on it.]
The previous two posts covered Scrapy's installation and its workflow in broad strokes. This post walks through Scrapy's development process with a concrete example. I originally planned to use the dirbot project that ships with the tutorial, but most readers have probably already tried that example and know it well, so I won't repeat it here; instead I'll explain my own first reasonably complete crawler.
First, to crawl a site's content at any real scale, the essential resource is proxy IPs. If you crawl quickly without proxies, the target site is very likely to spot you and block your crawling machine, so collecting a large pool of usable proxy IPs is our first task.
Roughly speaking, this crawler needs to do three things:
1. Scrape proxy IP addresses and port information
2. Verify each proxy and determine whether it is transparent
3. Persist the usable proxies to a file for later crawling programs to use
The proxy listing site http://www.cnproxy.com/ is a good source of proxy IPs. A quick look shows twelve pages, all with the same format:
http://www.cnproxy.com/proxy1.html
…
http://www.cnproxy.com/proxy10.html
http://www.cnproxy.com/proxyedu1.html
http://www.cnproxy.com/proxyedu2.html
With the preparation done, let's get started.
1. Define the item. Based on the requirements, a scraped item should end up containing the following information to be useful:
# The first four fields can be read directly from the page
IP address
Port
Protocol type
Location
# The last three fields are filled in later by the pipeline and are very useful
Proxy type
Delay
Timestamp
So the item is defined as follows:
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/topics/items.html

from scrapy.item import Item, Field

class ProxyItem(Item):
    # The first four fields come straight from the page
    address = Field()
    port = Field()
    protocol = Field()
    location = Field()

    # These three are filled in later by the pipeline
    type = Field()       # 'anonymity' or 'nonanonymity'
    delay = Field()      # in seconds
    timestamp = Field()
2. Define the spider
The main work in the spider is setting up the initial URLs, i.e. the "seeds" (not that kind of seed, mind you).
Then, inside the default parse function, XPath makes it easy to pull out the fields we need. For example,
addresses = hxs.select('//tr[position()>1]/td[position()=1]').re('\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}')
gives an array of IP addresses, and
locations = hxs.select('//tr[position()>1]/td[position()=4]').re('<td>(.*)<\/td>')
gives an array of location strings.
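To make the row structure concrete, here is a small standalone sketch using plain re on a hypothetical table row (the markup and the IP 1.2.3.4 are placeholders of mine, not taken from cnproxy); the spider's .re() calls apply the same kind of patterns to the cells selected by XPath:

import re

# Hypothetical row: column 1 = IP, column 2 = protocol, column 4 = location.
sample_row = '<tr><td>1.2.3.4</td><td>HTTP</td><td>...</td><td>Somewhere</td></tr>'

ips = re.findall('\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', sample_row)  # ['1.2.3.4']
cells = re.findall('<td>(.*?)</td>', sample_row)                    # ['1.2.3.4', 'HTTP', '...', 'Somewhere']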
The only slightly tricky part is the port. The site owner evidently anticipated scraping, so the port is written out with JavaScript, which does raise the bar a little: Scrapy, like most crawlers, has no browser JavaScript engine and will not execute that JS, so the port number simply never appears in the HTML the crawler receives.
In the page source, each port shows up as a JavaScript write call containing something like "+r+d+r+d" instead of a plain number, and the definitions of these single-letter variables sit near the top of the HTML, where they are easy to find. After a few refreshes it turns out those definitions are not dynamic, which keeps things simple: take the "+r+d+r+d" string straight from the code, strip the '+' signs, and replace 'r' with 8 and 'd' with 0 (and so on for the other letters). So we can declare a map like this:
port_map = {'z':'3','m':'4','k':'2','l':'9','d':'0','b':'5','i':'7','w':'6','r':'8','c':'1','+':''}
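As a quick sanity check, here is a tiny sketch of the replacement step applied to the "+r+d+r+d" example above (decode_port is my own helper name, not part of the project):

port_map = {'z':'3','m':'4','k':'2','l':'9','d':'0','b':'5','i':'7','w':'6','r':'8','c':'1','+':''}

def decode_port(raw):
    # Replace every obfuscation letter with its digit and drop the '+' signs.
    for key in port_map:
        raw = raw.replace(key, port_map[key])
    return raw

decoded = decode_port('+r+d+r+d')  # -> '8080'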
The full spider code is as follows:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from proxy.items import ProxyItem
import re

class ProxycrawlerSpider(CrawlSpider):
    name = 'cnproxy'
    allowed_domains = ['www.cnproxy.com']
    indexes = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    start_urls = []
    for i in indexes:
        url = 'http://www.cnproxy.com/proxy%s.html' % i
        start_urls.append(url)
    start_urls.append('http://www.cnproxy.com/proxyedu1.html')
    start_urls.append('http://www.cnproxy.com/proxyedu2.html')

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        addresses = hxs.select('//tr[position()>1]/td[position()=1]').re('\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}')
        protocols = hxs.select('//tr[position()>1]/td[position()=2]').re('<td>(.*)<\/td>')
        locations = hxs.select('//tr[position()>1]/td[position()=4]').re('<td>(.*)<\/td>')
        # The port is written out by JavaScript, so pull the obfuscated string
        # from the raw body and translate it back into digits.
        ports_re = re.compile('write\(":"(.*)\)')
        raw_ports = ports_re.findall(response.body)
        port_map = {'z':'3','m':'4','k':'2','l':'9','d':'0','b':'5','i':'7','w':'6','r':'8','c':'1','+':''}
        ports = []
        for port in raw_ports:
            tmp = port
            for key in port_map:
                tmp = tmp.replace(key, port_map[key])
            ports.append(tmp)
        items = []
        for i in range(len(addresses)):
            item = ProxyItem()
            item['address'] = addresses[i]
            item['protocol'] = protocols[i]
            item['location'] = locations[i]
            item['port'] = ports[i]
            items.append(item)
        return items
3. The pipeline: filter and check the scraped proxies, then persist them to a file
I'll skip the routine validation and go straight to how to verify a proxy's availability and transparency.
This takes the help of a small CGI script. Simply put, the question is whether the proxy, while relaying the request, passes the source IP along so that the target site can read it. If it does, the proxy is transparent rather than anonymous, and you need to be careful when using it.
A transparent proxy typically puts the source IP in the HTTP_X_FORWARDED_FOR header, but to be thorough the CGI script echoes every IP-related value the server can see. The PHP code is as follows:
<?php

echo "PROXYDETECTATION</br>";

echo "REMOTE_ADDR</br>";
var_dump($_SERVER['REMOTE_ADDR']);
echo "</br>";

echo "env_REMOTE_ADDR</br>";
var_dump(getenv('REMOTE_ADDR'));
echo "</br>";

echo "env_HTTP_CLIENT_IP</br>";
var_dump(getenv('HTTP_CLIENT_IP'));
echo "</br>";

echo "HTTP_CLIENT_IP</br>";
var_dump($_SERVER['HTTP_CLIENT_IP']);
echo "</br>";

echo "HTTP_X_FORWARDED_FOR</br>";
var_dump($_SERVER['HTTP_X_FORWARDED_FOR']);
echo "</br>";

echo "HTTP_X_FORWARDED</br>";
var_dump($_SERVER['HTTP_X_FORWARDED']);
echo "</br>";

echo "HTTP_X_CLUSTER_CLIENT_IP</br>";
var_dump($_SERVER['HTTP_X_CLUSTER_CLIENT_IP']);
echo "</br>";

echo "HTTP_FORWARDED_FOR</br>";
var_dump($_SERVER['HTTP_FORWARDED_FOR']);
echo "</br>";

echo "HTTP_FORWARDED</br>";
var_dump($_SERVER['HTTP_FORWARDED']);
echo "</br>";

?>
Suppose this service lives at http://xxx.xxx.xxx.xxx/apps/proxydetect.php.
Then the pipeline code looks like this:
# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/topics/item-pipeline.html
from scrapy.exceptions import DropItem
import re
import urllib
import urllib2
import time
import exceptions
import socket

class ProxyPipeline(object):
    def process_item(self, item, spider):
        # Extract a plain numeric port; drop the item if none is found.
        port = item['port']
        port_re = re.compile('\d{1,5}')
        ports = port_re.findall(port)
        if len(ports) == 0:
            raise DropItem("can not find port in %s" % item['port'])
        else:
            item['port'] = ports[0]

        # Profile the proxy by fetching the detect script through it.
        #detect_service_url = 'http://xxx.xxx.xxx.xxx:pppp/apps/proxydetect.php'
        detect_service_url = 'http://xxx.xxx.xxx.xxx/apps/proxydetect.php'
        local_ip = 'xxx.xxx.xxx.xxx'
        proxy_ = str('http://%s:%s' % (str(item['address']), str(item['port'])))
        proxies = {'http': proxy_}
        begin_time = time.time()
        timeout = 1
        socket.setdefaulttimeout(timeout)
        try:
            data = urllib.urlopen(detect_service_url, proxies=proxies).read()
        except exceptions.IOError:
            raise DropItem("curl download the proxy %s:%s is bad" % (item['address'], str(item['port'])))

        end_time = time.time()
        if '' == data.strip():
            raise DropItem("data is null the proxy %s:%s is bad" % (item['address'], str(item['port'])))
        if data.find('PROXYDETECTATION') == -1:
            raise DropItem("wrong response the proxy %s:%s is bad" % (item['address'], str(item['port'])))
        if data.find('PROXYDETECTATION') != -1:
            # If our real IP does not appear anywhere in the response,
            # the proxy hides it and counts as anonymous.
            if data.find(local_ip) == -1:
                item['type'] = 'anonymity'
            else:
                item['type'] = 'nonanonymity'
        item['delay'] = str(end_time - begin_time)
        item['timestamp'] = time.strftime('%Y-%m-%d', time.localtime(time.time()))

        # Record the item info
        fp = open('/home/xxx/services_runenv/crawlers/proxy/proxy/data/proxies.txt', 'a')
        line = str(item['timestamp']) + '\t' + str(item['address']) + '\t' + str(item['port']) + '\t' + item['type'] + '\t' + str(item['delay']) + '\n'
        fp.write(line)
        fp.close()
        return item
Here local_ip is the IP address of the machine running the crawler.
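For completeness, here is a minimal sketch of how a later crawling program might read proxies.txt back and keep only the anonymous entries; the function name is my own, and it simply assumes the tab-separated line format written by the pipeline above:

def load_anonymous_proxies(path):
    # Each line: timestamp \t address \t port \t type \t delay
    proxies = []
    for line in open(path):
        fields = line.strip().split('\t')
        if len(fields) != 5:
            continue
        timestamp, address, port, proxy_type, delay = fields
        if proxy_type == 'anonymity':
            proxies.append('http://%s:%s' % (address, port))
    return proxies

usable = load_anonymous_proxies('/home/xxx/services_runenv/crawlers/proxy/proxy/data/proxies.txt')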
4. The last thing to cover is the settings.py configuration file; the code speaks for itself:
# Scrapy settings for proxy project
#
# For simplicity, this file contains only the most important settings by
# default. All the other settings are documented here:
#
#     http://doc.scrapy.org/topics/settings.html
#

BOT_NAME = 'proxy'
BOT_VERSION = '1.0'

SPIDER_MODULES = ['proxy.spiders']
NEWSPIDER_MODULE = 'proxy.spiders'
USER_AGENT = '%s/%s' % (BOT_NAME, BOT_VERSION)

DOWNLOAD_DELAY = 0
DOWNLOAD_TIMEOUT = 30

ITEM_PIPELINES = [
    'proxy.pipelines.ProxyPipeline'
]

CONCURRENT_ITEMS = 100
CONCURRENT_REQUESTS_PER_SPIDER = 64
CONCURRENT_SPIDERS = 128

LOG_ENABLED = True
LOG_ENCODING = 'utf-8'
LOG_FILE = '/home/xxx/services_runenv/crawlers/proxy/proxy/log/proxy.log'
LOG_LEVEL = 'DEBUG'
LOG_STDOUT = False
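With the item, spider, pipeline, and settings in place, the crawl can be started from the project directory with scrapy crawl cnproxy, using the name defined on the spider class; the results accumulate in the proxies.txt file written by the pipeline.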
Finally, a quick look at some of the proxy IP data that was scraped.
That's it for this post. The next one will cover how to use these proxy IPs as the go-between so you can crawl website data at scale without worry. Good night.