Proxy IPs come in handy all the time, and a simple way to get them is to write a crawler that scrapes them off the web. This post uses 西刺代理 (http://www.xicidaili.com/) as the example.
0. First, a quick look in the browser at how the page is loaded:
Take Chrome as an example: press F12 to open the developer tools and click Network to start recording requests. Then type http://www.xicidaili.com/nn in the address bar and press Enter. You should see something like the figure below:

In the Name column on the right you can see, in order, which requests were made when the page was opened (opening a page loads more than the page itself: it also pulls in the JS, CSS, image files and so on that the page references). Click nn, and the pane next to it shows the overall information for that request (General), the response headers (Response Headers) and the request headers (Request Headers). The browser renders these responses into the page we see. In the simpler cases the page content does not live in extra JS requests and the like; in other words, everything we need is already in the response for nn, and we can get all the text we want from it. All we have to do is request that content the way the browser does, then "pick out" the parts we need with a program and store them.
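As a teaser, here is a minimal sketch of that last idea, using the BeautifulSoup (bs4) parser that the full source in section 3 also uses, and assuming the proxy list on /nn sits in a plain HTML table whose second and third columns hold the IP and port (as the full source assumes). Without the request headers discussed in the next section the site may well refuse this request:

import requests
from bs4 import BeautifulSoup as bs

r = requests.get('http://www.xicidaili.com/nn')        # fetch the same response the browser got
b = bs(r.content, 'lxml')                              # parse the HTML
for row in b.table.findAll('tr'):                      # walk the rows of the proxy table
    cells = row.findAll('td')
    if len(cells) >= 3:
        print cells[1].get_text(), cells[2].get_text() # IP and port columns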
1. A quick start with the requests library:
Python has a very powerful library for writing crawlers: requests (Chinese tutorial: http://docs.python-requests.org/zh_CN/latest/index.html). It can be installed with pip.
Let's write a simple example:
import requests
r = requests.get('http://119.29.27.158/ip123456789')
print r.content
That URL is a page for testing proxies; for now we do not need to worry about how to use it. The program's output is: 1.2.3.4:None:None (assuming your IP is 1.2.3.4). If we open the same URL in a browser, the page content is exactly the same as the program's output. (For convenience this page has no markup at all, which is why the program's output matches what the browser renders.)
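For reference (the check() function in section 3 relies on this), the same page can be used to test a proxy by passing it to requests through the proxies argument. A minimal sketch, with a hypothetical proxy address:

import requests

proxies = {'http': '123.57.190.51:7777'}   # hypothetical proxy, replace with a real one
r = requests.get('http://119.29.27.158/ip123456789', proxies=proxies, timeout=5)
print r.content   # if the proxy works, the first field should be the proxy's IP, not yours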
Downloading a page with the requests library really is that simple. However, many sites are not exactly welcoming to crawlers, and then we have to disguise ourselves as a browser. The simplest disguise is a custom request header. The headers are a dict that is passed to requests.get as the headers parameter. There are usually three fields worth setting: user-agent, host and referer. Here is an example:
d = {}
d['user-agent'] = 'Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 Mobile/13B143 Safari/601.1'
d['Host'] = 'www.xicidaili.com'
d['Referer'] = 'http://www.xicidaili.com/nn/'
r = requests.get('http://www.xicidaili.com/nn/1', headers=d)
print r.status_code  # 200
One thing to watch out for: Host has to match the site you are requesting, otherwise the server responds with a 5XX server error. If you are not sure what to fill in, copy the values from the Request Headers shown in the screenshot above. Checking r.status_code tells you whether the request went through normally or you have been blacklisted by the server. A status code of 503 means you have been identified as a crawler, and it is time to take a break.
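A minimal sketch of how that check might look; the fetch helper and the 1800-second wait are made up for illustration, and the real handling lives inline in the parse() function of section 3:

import time
import requests

def fetch(url, headers, wait=1800):
    r = requests.get(url, headers=headers, timeout=5)
    if r.status_code == 200:
        return r              # normal response, safe to parse
    if r.status_code == 503:
        time.sleep(wait)      # identified as a crawler: back off before trying again
    return None               # anything else (e.g. 5XX): let the caller retry or skip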
2. Multithreading and the Queue module
Crawling is IO-bound, which makes it a good fit for multithreading. Threads share data through Queue queues, as in the sketch below. I will cover this properly in a separate post.
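Here is a minimal producer/consumer sketch of the pattern the full source below is built on: one thread puts work (URLs) on a shared Queue, another thread takes it off. The page range and the None sentinel are only for illustration:

from Queue import Queue
import threading

q = Queue(32)                                            # bounded queue shared by the threads

def producer():
    for page in range(1, 6):
        q.put('http://www.xicidaili.com/nn/%d' % page)   # hand URLs to the worker
    q.put(None)                                          # sentinel: tell the worker to stop

def consumer():
    while True:
        url = q.get()
        if url is None:
            break
        print 'would fetch', url                         # a real worker would call requests.get here

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()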
3. Full source code
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup as bs
import time
import requests
import bs4
import pymysql
import random
import json
import os
from Queue import Queue
import threading
# Initialize the user-agent list file; return a closure that keeps producing random UAs
def getUa():
    ua = []
    if not os.path.isfile('ua.txt'):
        with open('ua.txt','w') as f:
            while True:
                line = raw_input('init ua,press Enter to finish:')
                if line == '':
                    break
                f.write(line + '\n')
    with open('ua.txt','r') as f:
        for i in f:
            ua.append(i[:-1])
    lens = len(ua)
    def getUa1(ua=ua, lens=lens):
        index = random.randrange(lens)
        return ua[index]
    return getUa1
# Initialize the database config file and return the config dict
def getIni():
    if os.path.isfile('shujuku.ini'):
        f = open('shujuku.ini','r')
        d = json.loads(f.readline())
        f.close()
    else:
        f = open('shujuku.ini','w')
        d = {}
        while True:
            d['host'] = raw_input('host:')
            d['user'] = raw_input('user name:')
            d['passwd'] = raw_input('password:')
            d['type'] = raw_input('mysql?:')
            d['db'] = raw_input('database:')
            d['table'] = raw_input('table:')
            confirm = raw_input('press ENTER to confirm:')
            if confirm == '':
                break
        f.write(json.dumps(d))
        f.close()
        os.system('chmod 660 shujuku.ini')
    return d
# Connect to the database; return the connection, a cursor and the table name
def getTable(d):
    conn = pymysql.connect(host=d[u'host'], user=d[u'user'], passwd=d[u'passwd'], db=d[u'type'], charset='utf8')
    cur = conn.cursor()
    cur.execute('USE ' + d[u'db'])
    table = d[u'table']
    return conn, cur, table
# Close the cursor and the connection
def closeTable(conn, cur):
    cur.close()
    conn.close()
# Read records from the dbQ queue, write them to the database and log
def dbWrite(cur, table, dbQ, logQ):
    while True:
        logQ.put('new db write %s' % time.ctime(), 1)
        d, key = dbQ.get(1)
        try:
            num = cur.execute('SELECT %s FROM %s WHERE %s = "%s"' % (key, table, key, d[key]))
        except:
            continue
        if num != 0:
            continue  # already exists
        keys = [i for i in d.keys()]
        values = [d[i].encode('utf-8') for i in keys]
        keys = unicode(keys)[1:-1].replace("'", '').encode('utf-8')
        values = str(values)[1:-1].replace("'", '"')
        s = 'INSERT INTO %s (%s) VALUES (%s);' % (table, keys, values)
        try:
            cur.execute(s)
            cur.connection.commit()
        except:
            logQ.put("error:insert:%s %s" % (s, time.ctime()), 1)
# support = 0 marks IPs that have not been verified yet
def dbRead(cur, table, num):
    num = cur.execute('SELECT ip FROM %s WHERE support = 0 LIMIT %d' % (table, num))
    return cur.fetchall()
# Mimic scrapy: return a closure that yields the next URL to crawl
def getUrl(todo):
    def iters(todo=todo):
        if todo != []:
            if todo[0][1] == 0:
                todo.pop(0)
                if todo == []:
                    return None
            url = todo[0][0] + str(todo[0][1])
            todo[0][1] -= 1
            return unicode(url)
    return iters
# Thread that produces URLs
def writeUrlQ(urlQ, todo, logQ):
    urlF = getUrl(todo)
    while True:
        logQ.put('new url %s' % time.ctime(), 1)
        urls = urlF()
        if urls == None:
            break
        urlQ.put(urls, 1)
# Thread that produces user-agents
def writeUaQ(uaQ, logQ):
    uas = getUa()
    while True:
        logQ.put('new ua %s' % time.ctime(), 1)
        uaQ.put(uas(), 1)
# Logging thread
def writeLogQ(logQ):
    with open('daili.log', 'w') as f:
        while True:
            logs = logQ.get(1)
            logs = logs + '\n'
            f.write(unicode(logs).encode('utf-8'))
            f.flush()
# At the end of the crawl, queue the pages that failed once more
def solveWrong(urlQ, wrong):
    while wrong != []:
        urlQ.put(wrong.pop(), 1)
# Crawling thread
def parse(urlQ, uaQ, logQ, cur, table, wrong, dbQ):
    d1 = {}
    d1['host'] = 'www.xicidaili.com'
    d1['user-agent'] = uaQ.get(1)
    d1['Connection'] = 'Keep-alive'
    d1['Cache-Control'] = 'max-age=0'
    d1['Upgrade-Insecure-Requests'] = '1'
    d1['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
    d1['Accept-Encoding'] = 'gzip,deflate,sdch'
    d1['Accept-Language'] = 'zh-CN,zh;q=0.8'
    r = requests.Session()
    sleepT = 3600  # how long to sleep once we find we are banned
    while True:
        logQ.put('new parse %s' % time.ctime(), 1)
        urls = urlQ.get(1)
        # use the previous page as the referer
        ref = urls.split('/')
        if int(ref[-1]) > 1:
            ref[-1] = unicode(int(ref[-1]) - 1)
        ref = '/'.join(ref)
        d1['referer'] = ref
        try:
            res = r.get(urls, headers=d1, timeout=5)
        except:
            logQ.put("Error:timeout:%s" % urls)
            d1['user-agent'] = uaQ.get(1)
            continue
        # a very short page or a non-200 status probably means trouble
        if len(res.content) < 1000 or res.status_code != 200:
            logQ.put('Wrong: url is: %s,status is %s,ua is %s,time:%s ' % (urls, str(res.status_code), d1['user-agent'], time.ctime()), 1)
            wrong.append(urls)
            r = requests.Session()
            d1['user-agent'] = uaQ.get(1)
            if res.status_code == 503:
                sleepT += 1800
                time.sleep(sleepT)  # the ban seems to be by IP, so changing the UA does not help; just rest
            continue
        # parse with bs4
        text = ''.join(res.content.split('\n'))
        b = bs(text, 'lxml')
        for i in b.table.children:
            if type(i) is bs4.element.Tag:
                l = i.findAll('td')
                if len(l) < 5:
                    continue
                ip = l[1].get_text() + ':' + l[2].get_text()
                location = ''.join(l[3].get_text().split(' '))
                d = {'ip': ip, 'location': location, 'support': '0'}
                dbQ.put((d, 'ip'))
        time.sleep(3)
# Verify the IPs
def check(cur, table, logQ):
    while True:
        ret = dbRead(cur, table, 20)
        for i in ret:
            ip = i[0]
            proxies = {'http': ip}
            try:
                r = requests.get('http://119.29.27.158/ip123456789', proxies=proxies, timeout=5)
                if (r.content.split(':')[0] == ip.split(':')[0]) and (r.content.split(':')[1] == 'None') and (r.content.split(':')[2] == 'None'):
                    cur.execute('UPDATE %s SET support = "1" WHERE ip = "%s"' % (table, ip))
                    logQ.put("get %s %s" % (ip, time.ctime()))
                else:
                    cur.execute('UPDATE %s SET support = "2" WHERE ip = "%s"' % (table, ip))
                    logQ.put("miss1 %s %s" % (ip, time.ctime()))
            except:
                print 'timeout'
                cur.execute('UPDATE %s SET support = "2" WHERE ip = "%s"' % (table, ip))
                logQ.put("miss2 %s %s" % (ip, time.ctime()))
            finally:
                print cur.fetchone()
                cur.connection.commit()
        if len(ret) < 20:
            print 'check done'
            break
# List of pages to crawl
todo = [['http://www.xicidaili.com/nn/', 145]]
urlQ = Queue(32)
logQ = Queue(32)
uaQ = Queue(4)
dbQ = Queue(32)
checkQ = Queue(32)
threads = []
wrong = []
d = getIni()
conn, cur, table = getTable(d)
threads.append(threading.Thread(target=writeUrlQ, args=(urlQ, todo, logQ)))
threads.append(threading.Thread(target=writeUaQ, args=(uaQ, logQ)))
threads.append(threading.Thread(target=writeLogQ, args=(logQ,)))
threads.append(threading.Thread(target=dbWrite, args=(cur, table, dbQ, logQ)))
for i in range(3):
    threads.append(threading.Thread(target=parse, args=(urlQ, uaQ, logQ, cur, table, wrong, dbQ)))
for i in threads:
    i.start()
threads[0].join()
threads.append(threading.Thread(target=solveWrong, args=(urlQ, wrong)))
threads[-1].start()
threads.append(threading.Thread(target=check, args=(cur, table, logQ)))
threads[-1].start()
threads[-1].join()
closeTable(conn, cur)
Finally, run this query against the database:
SELECT count(ip) FROM table WHERE support = 1;
to see how many usable IPs were collected (support = 1 marks the proxies that passed the check).
Note: the site's default ordering is by availability, so crawling roughly the first 200 pages is enough. Also, after about 200 pages the site starts banning your IP, and switching the user-agent does not help. Requesting again while banned resets the ban timer...
I will refactor this when I find the time...
