Scraping app photos with Python


First, download the Douyu app (though you don't really need to, since the URL is right here anyway).

By capturing the app's traffic, we intercept a JSON response and obtain the address shown in the code below.

 

A bit of testing shows that changing the offset value is effectively the same as paging through the app.
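For instance, since each request returns limit=20 rooms, page n should correspond to offset = n * 20. A minimal sketch of what the paged URLs would look like (the 5-page bound is just an arbitrary example):

# Build the URLs of the first few "pages" by stepping offset in units of the page size (limit=20)
base='http://capi.douyucdn.cn/api/v1/getVerticalRoom?aid=ios&client_sys=ios&limit=20&offset='
for page in range(5):
    print(base+str(page*20))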

Requesting this URL returns a big dictionary with two keys: error and data. data in turn is an array of length 20, each element of which is itself a dictionary, and each of those dictionaries contains a key called vertical_src.

That key is our target!
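So the response has roughly the shape sketched below (the sample values are invented; only the key names error, data, nickname and vertical_src come from the captured response), and vertical_src is the field we want from every entry:

import json

# Hypothetical miniature of the response described above (values are made up)
sample='''{"error": 0,
           "data": [{"nickname": "some_streamer",
                     "vertical_src": "http://example.com/cover.jpg"}]}'''
resp=json.loads(sample)
for room in resp["data"]:          # one dict per live room
    print(room["vertical_src"])    # the cover-photo URL we are after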

import urllib.parse
import urllib.request

# Form fields sent along with the request (as captured from the app)
data_info={}
data_info['type']='AUTO'
data_info['doctype']='json'
data_info['xmlVersion']='1.6'
data_info['ue']='UTF-8'
data_info['typoResult']='true'

url='http://capi.douyucdn.cn/api/v1/getVerticalRoom?aid=ios&client_sys=ios&limit=20&offset=20'

# Encode the form data and build a request that imitates the iOS app
data_info=urllib.parse.urlencode(data_info).encode('utf-8')
print(data_info)
requ=urllib.request.Request(url,data_info)
requ.add_header('Referer','http://capi.douyucdn.cn')
requ.add_header('User-Agent','DYZB/2.271 (iphone; iOS 9.3.2; Scale/3.00)')
response=urllib.request.urlopen(requ)
print(response)
html=response.read().decode('utf-8')   # the raw JSON text

These twenty-odd lines are already enough to get the JSON back. Then, by indexing into that JSON, we can separate out the URL of each streamer's photo.

Then we grab the photos on this one page:

import json
import urllib.parse
import urllib.request

# Form fields sent with the request (as captured from the app)
data_info={}
data_info['type']='AUTO'
data_info['doctype']='json'
data_info['xmlVersion']='1.6'
data_info['ue']='UTF-8'
data_info['typoResult']='true'

# A single fixed page: limit=20 rooms starting at offset=20
url='http://capi.douyucdn.cn/api/v1/getVerticalRoom?aid=ios&client_sys=ios&limit=20&offset=20'
data_info=urllib.parse.urlencode(data_info).encode('utf-8')
print(data_info)
requ=urllib.request.Request(url,data_info)
requ.add_header('Referer','http://capi.douyucdn.cn')
requ.add_header('User-Agent','DYZB/2.271 (iphone; iOS 9.3.2; Scale/3.00)')
response=urllib.request.urlopen(requ)
print(response)
html=response.read().decode('utf-8')

# Parse the JSON text; "data" is a list of 20 room dictionaries
dictionary=json.loads(html)
# print(type(dictionary))
# print(type(dictionary["data"]))
data_arr=dictionary["data"]
for i in range(20):
    name=data_arr[i]["nickname"]
    img_url=data_arr[i]["vertical_src"]
    print(type(img_url))
    # Download the cover photo and save it under the streamer's nickname
    respon_tem=urllib.request.urlopen(img_url)
    anchor_img=respon_tem.read()
    with open('../photos/'+name+'.jpg','wb') as f:
        f.write(anchor_img)

Then tweak it a little so it can page through the listings:

import json
import urllib.parse
import urllib.request

# Form fields sent with the request (as captured from the app)
data_info={}
data_info['type']='AUTO'
data_info['doctype']='json'
data_info['xmlVersion']='1.6'
data_info['ue']='UTF-8'
data_info['typoResult']='true'
data_info=urllib.parse.urlencode(data_info).encode('utf-8')

for x in range(0,195):
    # Each page holds limit=20 rooms, so the offset advances in steps of 20
    url='http://capi.douyucdn.cn/api/v1/getVerticalRoom?aid=ios&client_sys=ios&limit=20&offset='+str(x*20)
    print(data_info)
    requ=urllib.request.Request(url,data_info)
    requ.add_header('Referer','http://capi.douyucdn.cn')
    requ.add_header('User-Agent','DYZB/2.271 (iphone; iOS 9.3.2; Scale/3.00)')
    response=urllib.request.urlopen(requ)
    print(response)
    html=response.read().decode('utf-8')
    dictionary=json.loads(html)
    data_arr=dictionary["data"]
    for i in range(20):
        name=data_arr[i]["nickname"]
        img_url=data_arr[i]["vertical_src"]
        print(type(img_url))
        # Save each streamer's cover photo under their nickname
        respon_tem=urllib.request.urlopen(img_url)
        anchor_img=respon_tem.read()
        with open('../photos/'+name+'.jpg','wb') as f:
            f.write(anchor_img)

Then just sit back and wait~~

It's best to set up some timing so the crawler only fires every so often, or to switch IPs at intervals. That's all it takes.
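For instance, a minimal sketch of both ideas using only the standard library: pause between pages with time.sleep, and route requests through an HTTP proxy via ProxyHandler to change the visible IP (the proxy address below is a placeholder, not a real endpoint):

import time
import urllib.request

# Wait a few seconds between pages so the crawler doesn't hammer the API
time.sleep(3)

# Route subsequent urlopen calls through a (placeholder) HTTP proxy
proxy=urllib.request.ProxyHandler({'http':'http://127.0.0.1:8888'})
opener=urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)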

 

