Which course this assignment belongs to | https://edu.cnblogs.com/campus/fzzcxy/ZhichengSoftengineeringPracticeFclass |
---|---|
Where the assignment requirements are posted | https://edu.cnblogs.com/campus/fzzcxy/ZhichengSoftengineeringPracticeFclass/homework/12532 |
Goal of this assignment | Learn to use the Fiddler capture tool, Git, and the requests package in Python |
Gitee repository | https://gitee.com/leiwjie/lwj212106766/tree/master/demo1 |
Part 1: Using the Fiddler capture tool + code to monitor the detailed price of a product on Pupu (朴朴) in real time
(1) Approach
- 1. Install Fiddler.
- 2. After several attempts, decide to open the Pupu mini-program in WeChat for Mac.
- 3. Start Fiddler and capture the traffic for a Pupu product.
- 4. Parse the captured packets to locate the JSON content of interest.
- 5. Find the target API address and use a Python crawler to fetch and clean the data.
(2) Design and implementation
- 1. A first attempt to request the target address failed to connect.
- 2. After searching CSDN for the cause, adding a User-Agent request header made the connection succeed.
- 3. Use the requests package to fetch the data and convert the returned JSON into a dict, which makes extraction easier.
- 4. Use time.sleep so the price is fetched roughly once a minute.
- 5. Format and display the data.
- 6. Put the JSON-request code and the delayed price-polling loop into separate functions, shown below.
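The two functions below reference a few module-level names that are not shown in this snippet: url, headers, and int_to_float. A minimal sketch of the imports and globals they assume; the endpoint is only a placeholder, since the real address comes from the Fiddler capture:
import json
import random
import time

import requests

# Placeholder endpoint -- the real product API address is taken from the Fiddler capture.
url = 'https://<pupu-api-host>/product/<product-id>'
# The User-Agent header that made the connection succeed (value is illustrative).
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36'
}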
# Request the product data once and print the details
def t1():
    # Send the request
    response_1 = requests.get(url, headers=headers)
    # Set the encoding
    response_1.encoding = 'utf-8'
    # Get the response body
    c = response_1.text
    # Convert the JSON into a dict
    d = json.loads(c)
    data = d.get('data')
    # Product name
    name = data.get('name')
    # Product price
    price = int_to_float(data.get('price'))
    # Specification
    spec = data.get('spec')
    # Original price
    market_price = int_to_float(data.get('market_price'))
    # Detail text
    share_content = data.get('share_content')
    # Subtitle
    sub_title = data.get('sub_title')
    print('---------------------------------------------商品: ' + name + '------------------------')
    print('規格:' + spec)
    print('價格:' + str(price))
    print('原價/折扣:' + str(market_price) + '/' + str(price))
    print('詳情內容:' + share_content)
    print()
    print('-----------------------------------------------商品: "' + name + '"的價格波動------------------------')

# Poll the price repeatedly with a random delay
def t2():
    # Delayed execution loop
    while True:
        t = random.randint(60, 300)
        print('距離下一次抓取' + str(t) + '秒')
        time.sleep(t)
        # Send the request
        response_1 = requests.get(url, headers=headers)
        # Set the encoding
        response_1.encoding = 'utf-8'
        # Get the response body
        c = response_1.text
        # Convert the JSON into a dict
        d = json.loads(c)
        data = d.get('data')
        # Product price
        price = int_to_float(data.get('price'))
        # Print the current price with a timestamp
        print('當前時間為' + time.strftime('%Y-%m-%d %H:%M:%S', time.localtime()) + ', 價格為' + str(price))
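int_to_float is not defined in the snippet above. Assuming the API returns prices as integers in fen (1/100 yuan), a minimal sketch of the helper could look like this; the implementation is a guess based on how it is used:
# Assumed helper: convert an integer price in fen (1/100 yuan) to yuan.
def int_to_float(value):
    if value is None:
        return 0.0
    return value / 100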
- 7. To avoid being flagged as a bot, the interval was changed to a random 1-5 minutes between fetches.
- 8. Push to Gitee.
(3) Code improvements
- 1. Extracting fields with re regular expressions at first proved tedious and missed some data; switching to JSON-to-dict conversion is both simpler and more accurate, as the comparison below illustrates.
- 2. A custom random-delay function is used to avoid being flagged as a bot.
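A minimal illustration of the difference, using a made-up response body (the field names mirror those in t1(), but the payload here is only an example):
import json
import re

body = '{"data": {"name": "示例商品", "price": 590, "spec": "500g"}}'  # example payload only

# Regex approach: fragile, and every field needs its own pattern.
price_match = re.search(r'"price":\s*(\d+)', body)
price_by_regex = int(price_match.group(1)) if price_match else None

# Dict approach: one json.loads, then plain key access.
data = json.loads(body)['data']
price_by_dict = data.get('price')

print(price_by_regex, price_by_dict)  # both print 590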
Part 2: Crawling Zhihu favorites (collections)
(1) Approach
- 1. The Pupu crawler made the capture workflow familiar: first grab the favorites homepage, then clean it to obtain the address of each collection.
- 2. Next, fetch the JSON data of each collection in turn and parse out its entries and their URLs.
- 3. Finally, process the results into a cleaner display.
(2) Design and implementation
- 1. Capture the target address with Fiddler.
- 2. As before, take the request headers, this time also adding cookies.
- 3. Design a request function (the headers and cookies it relies on are sketched after the code):
# Request module: fetch a URL and return it as a BeautifulSoup object
def req(url_):
    res = requests.get(url=url_, cookies=data, headers=headers)
    res.encoding = 'utf-8'
    c = res.content
    soup = BeautifulSoup(c, 'lxml')
    return soup
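req() relies on globals that are not shown here: the favorites homepage url, the request headers, and the cookie dict data. A minimal sketch of what they might look like; every value below is a placeholder, and the cookie entries only stand in for whatever the captured logged-in session contained:
import json
import re

import requests
from bs4 import BeautifulSoup

# Placeholder favorites homepage -- the real address comes from the Fiddler capture.
url = 'https://www.zhihu.com/people/<user-id>/collections'
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36'
}
# Cookies copied from the captured logged-in request; keys and values here are placeholders.
data = {
    '<cookie-name>': '<cookie-value>',
}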
- 4. Collection-scraping module:
# Scrape the list of collections from the favorites homepage
def collect():
    items = req(url).find_all(attrs={'class': 'SelfCollectionItem-title'})
    dict_collect = {}
    for c in items:
        # Collection id, used to build its API address
        pattern1 = re.compile(r'\d+')
        result = pattern1.findall(str(c))
        # collectionUrl = 'https://www.zhihu.com/collection/' + result[0]
        collectionUrl = 'https://www.zhihu.com/api/v4/collections/' + result[0] + '/items?'
        # Collection name: the text between '>' and '<'
        pattern2 = re.compile(r'>.*<')
        result2 = pattern2.findall(str(c))
        collectName = result2[0][1:-1]
        dict_collect[collectName] = [collectionUrl]
    return dict_collect
- 5. Collection-entry scraping module:
# Scrape the entries inside each collection
def spider_collect(collect_dict):
    dict_collect_list = {}
    for name in collect_dict:
        url1 = collect_dict[name][0]
        print('-------------------------------name--》' + name + '正在抓取' + '---收藏夾地址:' + url1)
        reqst = req(url1).text
        js = json.loads(reqst)
        result = js['data']
        for r in result:
            try:
                # Answers keep the title inside a nested 'question' dict
                print('=》標題' + r['content']['question']['title'])
                print(r['content']['url'])
                dict_collect_list[r['content']['question']['title']] = [r['content']['url']]
            except KeyError:
                # Articles keep the title at the top level of 'content'
                print('=》標題' + r['content']['title'])
                print(r['content']['url'])
                dict_collect_list[r['content']['title']] = [r['content']['url']]
    return dict_collect_list
- 6. Push to Gitee.
- 7. Display of the run results.
(3) Code optimization
- 1. Most of the time in this experiment went into parsing the JSON data: looking up references and learning Python regular expressions (reference: https://blog.csdn.net/weixin_46737755/article/details/113426735?utm_source=app&app_version=5.1.1). In the end, converting the JSON to a dict turned out to be the easier route, but a new problem appeared: the title key sometimes sits at the outer level of an item and sometimes inside a nested dict, which meant going back to study Python dict usage in detail. The solution was to unpack the dict layer by layer to reach the value of title. The final approach uses try/except: the try block attempts the nested lookup, and if that raises an error the except branch falls back to the top-level key, as the small example below shows.
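A minimal illustration of the two item shapes and the layered lookup (the payloads below are made-up stand-ins for what the Zhihu collections API returns):
import json

# Made-up examples: an answer keeps its title under content.question,
# while an article keeps it directly under content.
items = json.loads('''[
  {"content": {"question": {"title": "答案標題"}, "url": "https://example.com/answer/1"}},
  {"content": {"title": "文章標題", "url": "https://example.com/article/2"}}
]''')

for r in items:
    try:
        title = r['content']['question']['title']   # nested case (answers)
    except KeyError:
        title = r['content']['title']                # flat case (articles)
    print(title, r['content']['url'])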
Part 3: Using the Fiddler capture tool + code to scrape job listings from Lagou (拉勾網)
(1) Approach
- 1. Capturing with Fiddler did not turn up the JSON data I was after.
- 2. Change of plan: scrape the HTML page directly.
- 3. Comparing URLs shows that they carry information such as city, job title, and page number, so the request link can be constructed from that pattern (see the encoding sketch after this list):
url='https://www.lagou.com/wn/jobs?&gm='+peple_num+'%E4%BA%BA&kd='+job_name+'&city='+city+'&pn='+page
- 4. Split the fetched HTML with regular expressions so that each block holds one complete job posting.
- 5. Run a second cleaning pass and store the data in a dict.
- 6. Iterate over the dict and write the data into an xls spreadsheet.
This step introduced a new Python package, xlwt (a spreadsheet package); the following link was used to learn its basics:
https://blog.csdn.net/Tulaimes/article/details/71172778?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522164744988216780255285169%2522%252C%2522scm%2522%253A%252220140713.130102334..%2522%257D&request_id=164744988216780255285169&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2
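The %E4%BA%BA fragment in the constructed link is simply the URL-encoded form of the character 人. A small sketch of how the same link could be assembled with urllib so the Chinese parameters are encoded automatically; the parameter names mirror those used above, and the example arguments are only illustrative:
from urllib.parse import quote

def build_url(city, job_name, peple_num, page):
    # gm is the company size ('<N>人'), kd the job keyword, pn the page number.
    return ('https://www.lagou.com/wn/jobs?&gm=' + quote(peple_num + '人')
            + '&kd=' + quote(job_name)
            + '&city=' + quote(city)
            + '&pn=' + page)

print(build_url('福州', 'java', '50-150', '1'))  # 人 becomes %E4%BA%BA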
(2) Design and implementation
- 1. Build the request headers and cookies.
head = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.74 Safari/537.36'
}
data={'user_trace_token':'20220316184137-7e2191a5-bdcd-4d99-9d9e-593a312df9f9',
' Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6':'1647427299',
' _ga':'GA1.2.314814173.1647427299',
' LGSID':'20220316184139-482d4935-1bdb-457f-913d-204ebf655b27',
' PRE_UTM':'m_cf_cpt_baidu_pcbt',
' PRE_HOST':'www.baidu.com',
' PRE_SITE':'https%3A%2F%2Fwww.baidu.com%2Fother.php%3Fsc.K00000jKkk4BUEVK7Jnma2u%5F4LTL8IEzeQEQtJbLWN1r-x4hD9n1bNIVC-vkTG-rptNe2a4dmBnbGfnMG22Hmn94tQWJelSAuO83NNpORsZADblTEwwoh77V-kRTHbY0pbEVmNMasfzbHhyJYGnnV26R7mrqqSmu8zSmfuvqz2uWcUTp-GZI8j2OBR3mtiIn9pcTQPxVpFEO98vsAz3KQ%5FWZu8xV.7Y%5FNR2Ar5Od663rj6tJQrGvKD77h24SU5WudF6ksswGuh9J4qt7jHzk8sHfGmYt%5FrE-9kYryqM764TTPqKi%5FnYQZHuukL0.TLFWgv-b5HDkrfK1ThPGujYknHb0THY0IAYqs2v4VnL30ZN1ugFxIZ-suHYs0A7bgLw4TARqnsKLULFb5TaV8UHPS0KzmLmqnfKdThkxpyfqnHR1n1mYPjc3r0KVINqGujYkPjmzPWnknfKVgv-b5HDkn1c1nj6d0AdYTAkxpyfqnHczP1n0TZuxpyfqn0KGuAnqiD4a0ZKGujYd0APGujY3nfKWThnqPHm3%26ck%3D2743.1.116.297.155.276.150.402%26dt%3D1647427292%26wd%3D%25E6%258B%2589%25E5%258B%25BE%25E7%25BD%2591%26tpl%3Dtpl%5F12273%5F25897%5F22126%26l%3D1533644288%26us%3DlinkName%253D%2525E6%2525A0%252587%2525E9%2525A2%252598-%2525E4%2525B8%2525BB%2525E6%2525A0%252587%2525E9%2525A2%252598%2526linkText%253D%2525E3%252580%252590%2525E6%25258B%252589%2525E5%25258B%2525BE%2525E6%25258B%25259B%2525E8%252581%252598%2525E3%252580%252591%2525E5%2525AE%252598%2525E6%252596%2525B9%2525E7%2525BD%252591%2525E7%2525AB%252599%252520-%252520%2525E4%2525BA%252592%2525E8%252581%252594%2525E7%2525BD%252591%2525E9%2525AB%252598%2525E8%252596%2525AA%2525E5%2525A5%2525BD%2525E5%2525B7%2525A5%2525E4%2525BD%25259C%2525EF%2525BC%25258C%2525E4%2525B8%25258A%2525E6%25258B%252589%2525E5%25258B%2525BE%21%2526linkType%253D; PRE_LAND=https%3A%2F%2Fwww.lagou.com%2Flanding-page%2Fpc%2Fsearch.html%3Futm%5Fsource%3Dm%5Fcf%5Fcpt%5Fbaidu%5Fpcbt; LGUID=20220316184139-e6e5080c-e420-4df8-92f4-942d3b9b946c; gate_login_token=58774caede3acc16b832c73b85d4e05d24365577f5526d4008c2bb9412f5bada; LG_HAS_LOGIN=1; _putrc=56B90E54AF51838F123F89F2B170EADC; JSESSIONID=ABAAAECABIEACCAA67549DE5F161F2B8C75812BF3C79350; login=true; hasDeliver=0; privacyPolicyPopup=false; WEBTJ-ID=20220316184221-17f92524d2037c-054e030a41bb48-133a645d-1024000-17f92524d21636; sajssdk_2015_cross_new_user=1; sensorsdata2015session=%7B%7D; unick=%E9%9B%B7%E6%96%87%E5%80%9F; RECOMMEND_TIP=true; X_HTTP_TOKEN=6b4be9b874e5d4336057247461903bbf6b0a117cf0; _gid=GA1.2.485582894.1647427507; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1647427507; __SAFETY_CLOSE_TIME__24085628=1; TG-TRACK-CODE=index_navigation; LGRID=20220316184511-d4561436-ee26-4eec-95c0-83119d3af5ef; __lg_stoken__=ccc62399cc29a776c7d7a874bb4350736bdc3d6c8e3409692978f5ae85315148b7725b924ec6c44bdac2b6aa84fcdab13be9ab320ec4eff7b498313e6cc527f6fb43a7a73fdb; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2224085628%22%2C%22first_id%22%3A%2217f92524e0f6ea-000aca07f2b8ba-133a645d-1024000-17f92524e10c6d%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24os%22%3A%22MacOS%22%2C%22%24browser%22%3A%22Chrome%22%2C%22%24browser_version%22%3A%2299.0.4844.74%22%7D%2C%22%24device_id%22%3A%2217f92524e0f6ea-000aca07f2b8ba-133a645d-1024000-17f92524e10c6d%22%7D'
}
- 2. Data-request module:
# Send the request for one results page
def post_value(city, job_name, peple_num, page):
    url = 'https://www.lagou.com/wn/jobs?&gm=' + peple_num + '%E4%BA%BA&kd=' + job_name + '&city=' + city + '&pn=' + page
    req = requests.post(url, cookies=data, headers=head)
    req.encoding = 'utf-8'
    c = req.content
    soup = BeautifulSoup(c, 'lxml')
    return soup
- 3. Data-cleaning module:
# Clean the scraped pages and collect one record per job card
def data_clear(city, job, num, page):
    job_dict = {}
    for i in range(1, page + 1):
        data = post_value(city, job, num, str(i))
        print('=========》第' + str(i) + '頁正在抓取')
        # Cut out the block between the sort bar and the recommended companies,
        # then split it into one chunk per job card (class p-top__1F7CL)
        regex = '排序方式.*推薦公司'
        regex2 = 'p-top__1F7CL.*?p-top__1F7CL'
        pattern = re.compile(regex)
        result = pattern.findall(str(data))
        pattern = re.compile(regex2)
        result = pattern.findall(str(result))
        for l in result:
            li = []
            regex_name_city_company = '<a>.*?</a>'
            regex_money = 'money__3Lkgq.*?</div>'
            regex = '>.*?<'
            pattern = re.compile(regex_name_city_company)
            resul_name_city_company = pattern.findall(str(l))
            # print(resul_name_city_company)
            pattern2 = re.compile(regex_money)
            resul_money = pattern2.findall(str(l))
            # Salary, e.g. '10k-20k'
            money = str(resul_money)[16:23]
            regex_money2 = r'\d.*-\d.*k'
            pattern3 = re.compile(regex_money2)
            money = pattern3.findall(money)[0]
            # print(money)
            # Job title / district / company name
            pattern = re.compile(regex)
            resul = pattern.findall(str(resul_name_city_company))
            for k in range(5):
                if k == 0:
                    li.append(resul[0][1:len(str(resul[0])) - 1])
                elif k == 1:
                    li.append(resul[1][1:len(str(resul[1])) - 1])
                elif k == 3:
                    li.append(resul[3][1:len(str(resul[3])) - 1])
            li.append(city)
            li.append(job)
            li.append(num)
            job_dict[money] = li
        print(str(i) + '頁抓取完成')
        time.sleep(random.randint(5, 10))
    return job_dict
- 4. xls-writing module:
# Write the cleaned data into an xls spreadsheet
def io_xls(job_dict):
    # Create a workbook object
    book = xlwt.Workbook(encoding='utf-8', style_compression=0)
    # Create a sheet object, i.e. one sheet page
    sheet = book.add_sheet('test_sheet', cell_overwrite_ok=True)
    d = 0
    # Header row
    sheet.write(0, 0, '城市')
    sheet.write(0, 1, '崗位類型')
    sheet.write(0, 2, '公司人數')
    sheet.write(0, 3, '公司名稱')
    sheet.write(0, 4, '公司招收崗位')
    sheet.write(0, 5, '薪資范疇')
    sheet.write(0, 6, '城區')
    # One row per record; the dict key is the salary string, the value holds the other fields
    for a in job_dict:
        sheet.write(d + 1, 1, job_dict[a][4])
        sheet.write(d + 1, 0, job_dict[a][3])
        sheet.write(d + 1, 2, job_dict[a][5])
        sheet.write(d + 1, 3, job_dict[a][2])
        sheet.write(d + 1, 4, job_dict[a][0])
        sheet.write(d + 1, 5, str(a))
        sheet.write(d + 1, 6, job_dict[a][1])
        d = d + 1
    book.save('data.xls')
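For completeness, a small usage sketch showing how the modules above fit together; the imports belong at the top of the full script, and the arguments are only example values:
import random
import re
import time

import requests
import xlwt
from bs4 import BeautifulSoup

if __name__ == '__main__':
    # Example run: 3 pages of java postings in 福州 at companies of 50-150 people.
    records = data_clear('福州', 'java', '50-150', 3)
    # Write the result to data.xls in the current folder.
    io_xls(records)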
- 5. Push to Git.
- 6. Run demonstration: the scraped data is saved as an xls spreadsheet in the current folder.
(3) Code optimization
- 1. The request URL is now generated automatically instead of being copied by hand.
- 2. The regex extraction kept matching extra content, so non-greedy quantifiers were added, as in the comparison below.
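A minimal comparison on a made-up fragment of the job-card HTML (the class name mirrors the one used above):
import re

html = ('<div class="p-top__1F7CL">job A</div>'
        '<div class="p-top__1F7CL">job B</div>'
        '<div class="p-top__1F7CL">job C</div>')

# Greedy: .* runs to the last occurrence of the class, so one match swallows several cards.
print(re.findall('p-top__1F7CL.*p-top__1F7CL', html))

# Non-greedy: .*? stops at the next occurrence, so each match covers a single card.
print(re.findall('p-top__1F7CL.*?p-top__1F7CL', html))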