Eating Watermelon -- Crawler Series: How to Use Requests


Table of Contents

Web crawlers

Common anti-crawler mechanisms:

Common response status codes:

Two libraries are introduced below. If you just want a shortcut, jump straight to the requests library; there is no need to read the urllib part first!

The urllib library

The urlopen function:

The urlretrieve function:

The urlencode function:

The parse_qs function:

urlparse and urlsplit:

The request.Request class:

The ProxyHandler handler (proxy settings)

Crawling sites that require login

Using cookies:

Simulating login with the cookielib library and HTTPCookieProcessor:

The http.cookiejar module:

Logging in to Renren:

The requests library

Sending GET requests:

Sending POST requests:

Using a proxy:

Cookies:

Sessions:

Handling untrusted SSL certificates:


 

Web crawlers

A web crawler simulates the way a user requests the network: it can automatically send requests, crawl data, and then extract valuable information according to certain rules. Since I already covered HTTP, HTTPS, URLs, GET, POST and so on as an undergraduate, only the Python-specific crawler knowledge needs to be learned here!

Common anti-crawler mechanisms:

The request method may be deliberately swapped (an endpoint that looks like GET actually requires POST, or vice versa), so always confirm the request method first. The server checks whether the User-Agent looks like a browser; by default Python identifies itself as a script, so a crawler has to send a forged browser User-Agent. The Referer header states which URL the request came from; a request that arrives out of nowhere is assumed to be a crawler, so Referer is usually set to the previous page's URL as part of the disguise. Cookies: for sites that require login, the Cookie must be sent, otherwise the server refuses the request. These are the most common anti-crawler mechanisms; more will be added here as I run into them. A minimal header-disguise sketch follows.
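
As a quick illustration, here is a minimal sketch of disguising a request with these three headers using urllib. The URL and all header values are placeholders invented for this example, not taken from a real session:

from urllib import request

# placeholder target URL and header values, purely for illustration
url = 'https://example.com/page2'
headers = {
    # pretend to be a real browser instead of the default Python user agent
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0 Safari/537.36',
    # claim the request came from a previous page of the same site
    'Referer': 'https://example.com/page1',
    # login cookie copied from a browser session (made-up value)
    'Cookie': 'sessionid=xxxx',
}
req = request.Request(url, headers=headers)
resp = request.urlopen(req)
print(resp.getcode())  # check the status of the disguised request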

Common response status codes:

200: the request is normal, but if the server detects a crawler, some nasty mechanisms return fake data anyway, so a 200 status does not necessarily mean the response is right; 400: bad request, usually malformed parameters (a URL that simply does not exist returns 404); 403: forbidden, i.e. insufficient permissions; 500: internal server error; 301: permanent redirect; 302: temporary redirect, for example being sent back to the login page when login is required. Anything else, just look it up when you run into it!
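
Since urllib raises 4xx/5xx responses as exceptions, here is a minimal sketch of checking the status code; the URL is a placeholder:

from urllib import request, error

try:
    resp = request.urlopen('https://example.com/maybe-missing')  # placeholder URL
    print(resp.getcode())  # 200 on success -- but still check the body for fake data
except error.HTTPError as e:
    # 4xx/5xx responses are raised as HTTPError; e.code holds the status code
    print('request failed with status', e.code)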

Two libraries are introduced below. If you just want a shortcut, jump straight to the requests library; there is no need to read the urllib part first!



The urllib library

urllib is the most basic network-request library in Python. It can simulate browser behaviour, send a request to a specified server, and save the data the server returns.

The urlopen function:

In Python 3's urllib library, all the methods related to network requests have been collected under the urllib.request module. Let's first look at the basic usage of the urlopen function:

from urllib import request
res=request.urlopen("http://www.baidu.com")
print(res.read())
print(res.getcode())  # return the status code

A closer look at the urlopen function:

  1. url: the URL to request.
  2. data: the request body; if this value is set, the request becomes a POST request, otherwise it defaults to GET (a short sketch follows this list).
  3. Return value: an http.client.HTTPResponse object, which is a file-like handle with methods such as read(size), readline, readlines and getcode.
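
A minimal sketch of the data parameter turning urlopen into a POST request; httpbin.org is used here only because it echoes back whatever it receives:

from urllib import parse, request

# data must be URL-encoded bytes; passing it makes urlopen send a POST request
data = parse.urlencode({'name': 'spider', 'age': 100}).encode('utf-8')
resp = request.urlopen('http://httpbin.org/post', data=data)
print(resp.read().decode('utf-8'))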

The urlretrieve function:

This function makes it easy to save a file from the web (an image, a web page, a video, audio, etc.) to the local disk.

The following code downloads Baidu's home page to a local file very conveniently:

from urllib import request
request.urlretrieve('http://www.baidu.com/','baidu.html')

The urlencode function:

When a browser sends a request, if the URL contains Chinese or other special characters, the browser encodes them for us automatically. When sending requests from code, the encoding must be done by hand, and that is what the urlencode function is for: it converts dictionary data into URL-encoded data. Example code:

from urllib import parse
data = {'name':'陳奕迅','greet':'hello world','age':100}
qs = parse.urlencode(data)
print(qs)

The parse_qs function:

It decodes URL-encoded query-string parameters. Example code:

from urllib import parse, request

qs = "name=%E7%88%AC%E8%99%AB%E5%9F%BA%E7%A1%80&greet=hello+world&age=100"
print(parse.parse_qs(qs))  # decode the query string back into a dict

# encode a dict, append it to a search URL and request it
url = "http://www.baidu.com/s"
params = {'wd': "陳奕迅"}
qs = parse.urlencode(params)
url = url + '?' + qs
res = request.urlopen(url)
print(res.read())

urlparse and urlsplit:

Sometimes you get a URL and want to split it into its component parts; urlparse or urlsplit can do that. urlparse and urlsplit are almost identical; the only difference is that urlparse additionally exposes the rarely used path parameters through a params attribute, which urlsplit does not have. Example code:

from urllib import request,parse

url = 'http://www.baidu.com/s?zhiliao'

result = parse.urlsplit(url)
# result = parse.urlparse(url)

print('scheme:',result.scheme)
print('netloc:',result.netloc)
print('path:',result.path)
print('query:',result.query)

The request.Request class:

If you want to add request headers to a request, you must use the request.Request class. For example, to add a User-Agent, the example code is as follows:

from urllib import request

headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'
}
req = request.Request("http://www.baidu.com/",headers=headers)
resp = request.urlopen(req)
print(resp.read())

Hands-on exercise: crawling a job-listings site (Lagou) that has anti-crawler mechanisms:

Important: for this site, User-Agent, Cookie and Referer must all be present; if any one is missing you get the bogus message "operation too frequent"!

from urllib import parse
from urllib import request


url='https://www.lagou.com/jobs/positionAjax.json?jd=%E6%9C%AA%E8%9E%8D%E8%B5%84&px=default&city=%E4%B8%8A%E6%B5%B7&needAddtionalResult=false'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
    'referer': 'https://www.lagou.com/jobs/list_PHP/p-city_3-jd_1?px=default',
     'cookie':'user_trace_token=20210406172445-1d25ab74-c910-4783-9b84-c928fc99bf4a; _ga=GA1.2.591359575.1617700950; LGUID=20210406172446-8ec58e7a-4dff-4697-b536-871deb0bf297; SL_GWPT_Show_Hide_tmp=1; SL_wptGlobTipTmp=1; JSESSIONID=ABAAABAABEIABCID008C43966AF564341B1AF34B99A82F1; WEBTJ-ID=2021046%E4%B8%8B%E5%8D%885:22:47172247-178a67dd5ae3e6-0c8a4c973f49b7-c3f3568-1327104-178a67dd5af888; RECOMMEND_TIP=true; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1617700950,1617700968; sajssdk_2015_cross_new_user=1; sensorsdata2015session=%7B%7D; _gid=GA1.2.390514112.1617700968; TG-TRACK-CODE=index_navigation; __lg_stoken__=196f3d3fd4ec6e4898a033a32cbdd87d119b155da22cd119dfcd31e3ce99a49d5ee2e6dd9b5ff60f09e4e5d93c5d2be3f4097f33dd63e66da6eb65227fac58dcba8fff84ad77; PRE_UTM=; PRE_HOST=; PRE_LAND=https%3A%2F%2Fwww.lagou.com%2Fjobs%2Flist%5FPython%2Fp-city%5F3%3Fpx%3Ddefault%23filterBox; PRE_SITE=; LGSID=20210406193127-8e623399-a6ce-427b-820f-0ab1836625e4; index_location_city=%E6%9D%AD%E5%B7%9E; X_HTTP_TOKEN=472b1de01181b8697019077161c96a0362eecefe96; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%22178a67dd761516-022460db3e080a-c3f3568-1327104-178a67dd7624da%22%2C%22first_id%22%3A%22%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24os%22%3A%22Windows%22%2C%22%24browser%22%3A%22Chrome%22%2C%22%24browser_version%22%3A%2289.0.4389.114%22%7D%2C%22%24device_id%22%3A%22178a67dd761516-022460db3e080a-c3f3568-1327104-178a67dd7624da%22%7D; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1617708972; LGRID=20210406193827-4f487d75-b774-429c-8d05-10dbfd107e71; SEARCH_ID=ec90d17554414f888cf7d5483f660646'
}
data={
    'first': 'true',
    'pn':1,
    'kd':'PHP'
}
req=request.Request(url,headers=headers,data=parse.urlencode(data).encode('utf-8'),method='POST')
result=request.urlopen(req)
print(result.read().decode('utf-8'))

The following crawler, which fetches the listing page itself, somehow still gets detected; I will come back to it once I know why:

from urllib import parse
from urllib import request
url='https://www.lagou.com/jobs/list_php%E5%90%8E%E7%AB%AF?oquery=PHP&fromSearch=true&labelWords=relative&city=%E4%B8%8A%E6%B5%B7'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
     'referer': 'https://www.lagou.com/jobs/list_PHP/p-city_3-jd_1?px=default',
    'cookie':'user_trace_token=20210406172445-1d25ab74-c910-4783-9b84-c928fc99bf4a; _ga=GA1.2.591359575.1617700950; LGUID=20210406172446-8ec58e7a-4dff-4697-b536-871deb0bf297; SL_wptGlobTipTmp=1; SL_GWPT_Show_Hide_tmp=1; JSESSIONID=ABAAABAABEIABCID008C43966AF564341B1AF34B99A82F1; WEBTJ-ID=2021046%E4%B8%8B%E5%8D%885:22:47172247-178a67dd5ae3e6-0c8a4c973f49b7-c3f3568-1327104-178a67dd5af888; RECOMMEND_TIP=true; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1617700950,1617700968; sajssdk_2015_cross_new_user=1; sensorsdata2015session=%7B%7D; _gid=GA1.2.390514112.1617700968; __lg_stoken__=196f3d3fd4ec6e4898a033a32cbdd87d119b155da22cd119dfcd31e3ce99a49d5ee2e6dd9b5ff60f09e4e5d93c5d2be3f4097f33dd63e66da6eb65227fac58dcba8fff84ad77; index_location_city=%E6%9D%AD%E5%B7%9E; TG-TRACK-CODE=search_code; LGSID=20210406203850-9ca9bf81-661c-4697-a6bf-a9924a5dcc50; PRE_UTM=; PRE_HOST=; PRE_SITE=https%3A%2F%2Fwww.lagou.com%2Fjobs%2Flist%5FPHP%2Fp-city%5F3%3Fpx%3Ddefault; PRE_LAND=https%3A%2F%2Fwww.lagou.com%2Fjobs%2Flist%5FPHP%2Fp-city%5F3-jd%5F1%3Fpx%3Ddefault%23filterBox; X_MIDDLE_TOKEN=9e3be039cb63614037dbc5868765704c; _gat=1; SEARCH_ID=e5b81ab0943343e9b7341b7fd3cbc424; X_HTTP_TOKEN=472b1de01181b8695334177161c96a0362eecefe96; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%22178a67dd761516-022460db3e080a-c3f3568-1327104-178a67dd7624da%22%2C%22first_id%22%3A%22%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24os%22%3A%22Windows%22%2C%22%24browser%22%3A%22Chrome%22%2C%22%24browser_version%22%3A%2289.0.4389.114%22%7D%2C%22%24device_id%22%3A%22178a67dd761516-022460db3e080a-c3f3568-1327104-178a67dd7624da%22%7D; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1617714200; LGRID=20210406210535-b85d51bf-1f5a-46b4-a58a-e0ca9543f6a1'
    }
req=request.Request(url,headers=headers)
result=request.urlopen(req)
print(result.read().decode('utf-8'))

The ProxyHandler handler (proxy settings)

Many websites check how many times a given IP visits within a certain period (through traffic statistics, system logs, etc.). If the number of visits does not look human, the site blocks that IP.
So we can configure some proxy servers and switch to a different proxy every so often; even if one IP gets blocked, we can switch to another IP and keep crawling.
In urllib, a proxy server is set up through ProxyHandler. The code below shows how to use a custom opener with a proxy:

from urllib import request

# without a proxy
# resp = request.urlopen('http://httpbin.org/get')
# print(resp.read().decode("utf-8"))

# with a proxy
handler = request.ProxyHandler({"http":"218.66.161.88:31769"})

opener = request.build_opener(handler)
req = request.Request("http://httpbin.org/ip")
resp = opener.open(req)
print(resp.read())

Another example of using a proxy, this time verified by visiting an IP-lookup site (ip138):

from urllib import parse
from urllib import request

url='https://2021.ip138.com/'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
     'referer': 'https://www.ip138.com/',
    'cookie':'Hm_lvt_ecdd6f3afaa488ece3938bcdbb89e8da=1610613275'
   }
# create the proxy handler
handler=request.ProxyHandler({'http':'115.29.230.38:80'})
opener=request.build_opener(handler)
# req=request.Request(url,headers=headers)
#resp=opener.open(req)
resp=opener.open(url)
print(resp.read().decode('utf-8'))

# req=request.Request(url,headers=headers)
# result=request.urlopen(req)
# print(result.read().decode('utf-8'))

Crawling sites that require login

Using cookies:

On a website, HTTP requests are stateless. That is, even after connecting to the server and logging in successfully the first time, the server still cannot tell which user is making the second request. Cookies were introduced to solve this: after the first login the server returns some data (the cookie) to the browser, which saves it locally; when the same user sends a second request, the cookie stored from the previous request is automatically sent along, and the server can tell from it which user is making the request. The amount of data a cookie can hold is limited and varies by browser, but is generally no more than 4KB, so cookies can only store small amounts of data.

Cookie format:

Set-Cookie: NAME=VALUE;Expires/Max-age=DATE;Path=PATH;Domain=DOMAIN_NAME;SECURE

Field meanings (a small parsing sketch follows the list):

  • NAME: the cookie's name.
  • VALUE: the cookie's value.
  • Expires: when the cookie expires.
  • Path: the path the cookie applies to.
  • Domain: the domain the cookie applies to.
  • SECURE: whether the cookie is only sent over HTTPS.
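
Python's standard http.cookies module can parse a string in this format, which is a handy way to check your understanding of the fields. A minimal sketch with made-up values:

from http.cookies import SimpleCookie

# parse a made-up cookie string into its NAME=VALUE pair and attributes
cookie = SimpleCookie()
cookie.load('sessionid=abc123; Path=/; Domain=example.com; Secure')
for name, morsel in cookie.items():
    print(name, morsel.value, morsel['path'], morsel['domain'])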

Simulating login with the cookielib library and HTTPCookieProcessor:

A cookie is a text file that the website server stores in the user's browser in order to identify the user and track the session; it can keep the login information until the user's next session with the server.
Renren is used as the example here. On Renren, you must log in before you can visit someone's profile page, and logging in essentially means having the right cookie information. So if we want to access the page from code, we must carry valid cookie information. There are two solutions. The first is to log in with a browser, copy the cookie information, and put it into the headers. Example code follows.

Below, a cookie is used to crawl my own résumé page on Zhaopin. First log in and open the résumé page in a browser to grab the Cookie; without the cookie the server simply returns a login page!

from urllib import request
from urllib import parse

url='https://i.zhaopin.com/resume'
headers={
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36'
    ,'Referer': 'https://www.zhaopin.com/',
    'Cookie':'adfbid2=0; x-zp-client-id=2f4bfe08-df19-497e-8593-c152bd3f68ee; sts_deviceid=177c263d08d908-05a862904cc5cd-73e356b-1327104-177c263d08e6c4; _uab_collina=161387408561030908389537; ssxmod_itna2=iqmx0D9Qiti=T4BcDeTm+AGODBDUEhG+qOhd4xA6nh3D/BfDFrkGTUpRPApKOqOCBEr9t4RAiGyrGWlQjBr4zbx7QqUDjKD2QYD=; urlfrom2=121113803; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%221049870415%22%2C%22first_id%22%3A%22177b9dd977f6f-020a8969613f07-53e3566-1327104-177b9dd9780b3b%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E4%BB%98%E8%B4%B9%E5%B9%BF%E5%91%8A%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%99%BA%E8%81%94%E6%8B%9B%E8%81%98%22%2C%22%24latest_referrer%22%3A%22https%3A%2F%2Fwww.baidu.com%2Fother.php%22%2C%22%24latest_utm_source%22%3A%22baidupcpz%22%2C%22%24latest_utm_medium%22%3A%22cpt%22%2C%22%24latest_utm_campaign%22%3A%22jl%22%2C%22%24latest_utm_content%22%3A%22tj%22%2C%22%24latest_utm_term%22%3A%2228757838%22%7D%2C%22%24device_id%22%3A%22177b9dd977f6f-020a8969613f07-53e3566-1327104-177b9dd9780b3b%22%7D; urlfrom=121113803; adfbid=0; sts_sg=1; sts_sid=178aaa2a42e6e6-07fc249754b7b5-c3f3568-1327104-178aaa2a42f537; sts_chnlsid=121113803; zp_src_url=https%3A%2F%2Fwww.baidu.com%2Fother.php%3Fsc.K60000K_AV0UxOsHA1WeDFNeMVr-UPsiBJtLez6Uq9_55OWQGkRJ0F1QOkT_glOmywiCeCCv6iCFeEWfZsrVyhIgnQ070ALwkiwPMlvtWM2fvBvMQ-Z9iVPKpntzTR8Jp2CPVJUhXGrvtprUTDxl4iUX2nPr-tNIiYxvgBYiNl8NDb_igy9xD7ovu1GscKVaUkYuJU9aCXpngPEXRZxDUGpQO1Vg.7Y_NR2Ar5Od669BCXgjRzeASFDZtwhUVHf632MRRt_Q_DNKnLeMX5Dkgbooo3eQr5gKPwmJCRnTxOoKKsTZK4TPHQ_U3bIt7jHzk8sHfGmEukmnTr59l32AM-YG8x6Y_f3lZgKfYt_QCJamJjArZZsqT7jHzs_lTUQqRHArZ5Xq-dKl-muCyrMWYv0.TLFWgv-b5HDkrfK1ThPGujYknHb0THY0IAYqd_xKJVgfko60IgP-T-qYXgK-5H00mywxIZ-suHY10ZIEThfqd_xKJVgfko60ThPv5HD0IgF_gv-b5HDdnWf4Pj6Ln1n0UgNxpyfqnHfzPHfYnWf0UNqGujYknjbsnjnLnsKVIZK_gv-b5HDznWT10ZKvgv-b5H00pywW5R9rf6KWThnqn10snWf%26ck%3D5127.1.105.460.154.445.166.270%26dt%3D1617770621%26wd%3D%25E6%2599%25BA%25E8%2581%2594%25E6%258B%259B%25E8%2581%2598%26tpl%3Dtpl_12273_24677_20875%26l%3D1524948733%26us%3DlinkName%253D%2525E6%2525A0%252587%2525E9%2525A2%252598-%2525E4%2525B8%2525BB%2525E6%2525A0%252587%2525E9%2525A2%252598%2526linkText%253D%2525E3%252580%252590%2525E6%252599%2525BA%2525E8%252581%252594%2525E6%25258B%25259B%2525E8%252581%252598%2525E3%252580%252591%2525E5%2525AE%252598%2525E6%252596%2525B9%2525E7%2525BD%252591%2525E7%2525AB%252599%252520%2525E2%252580%252593%252520%2525E5%2525A5%2525BD%2525E5%2525B7%2525A5%2525E4%2525BD%25259C%2525EF%2525BC%25258C%2525E4%2525B8%25258A%2525E6%252599%2525BA%2525E8%252581%252594%2525E6%25258B%25259B%2525E8%252581%252598%2525EF%2525BC%252581%2526linkType%253D; acw_tc=2760829b16177706809588871e6e5f19aefce52760874a088a02a911916ac0; ZP-ENV-FLAG=gray; ZP_OLD_FLAG=false; Hm_lvt_38ba284938d5eddca645bb5e02a02006=1617770545; SL_GWPT_Show_Hide_tmp=1; SL_wptGlobTipTmp=1; LastCity=%E6%9D%AD%E5%B7%9E; LastCity%5Fid=653; at=4e39069e2fd04bd1886d920b2eb88eed; rt=87d20a4d89ef410da5028fc43eeeb67e; ZPCITIESCLICKED=|653; ssxmod_itna=YqRx2DB7eQqrD7DzxA2Y=DkQGklKKK33G8Y00DBuGW4iNDnD8x7YDvmmEiIKGKYxxxvWxOGhewQEimhRfmdgmhr3YFrxB3DEx0=KqmGi4GGjxBYDQxAYDGDDPDocPD1D3qDkXxYPGW8qDbDiWkvxGCDeKD0xuFDQKDucKainIGkOA1tb5q48Gx6xG1i40Hmi8p3FSEinH3vx0k040OBOHkugYDUY2DNgLeZQFaqn2NeGDNqsiKiB0xWYhbK7x4/Ax3TCxe4D; 
ZL_REPORT_GLOBAL={%22//www%22:{%22seid%22:%224e39069e2fd04bd1886d920b2eb88eed%22%2C%22actionid%22:%225251163e-5979-4574-9670-222ac9a8a386-cityPage%22%2C%22funczone%22:%22city_hotjd%22}%2C%22jobs%22:{%22funczoneShare%22:%22dtl_best_for_you%22%2C%22recommandActionidShare%22:%221cae3ff7-762e-4708-b814-fc2d18caa64b-job%22}}; sts_evtseq=14; Hm_lpvt_38ba284938d5eddca645bb5e02a02006=1617771364'
}

req = request.Request(headers=headers, url=url)
rsp = request.urlopen(req)
html = rsp.read().decode('utf-8')  # read the response only once; the stream cannot be re-read
print(html)

with open('index.html', 'w', encoding='utf-8') as fp:
    fp.write(html)

But copying the cookie from the browser every time you need to visit a cookie-protected page is tedious. In Python, cookies are generally handled by using the http.cookiejar module together with urllib's HTTPCookieProcessor handler class. The http.cookiejar module mainly provides objects for storing cookies, while the HTTPCookieProcessor handler processes those cookie objects and builds handler objects from them.

The http.cookiejar module:

The main classes in this module are CookieJar, FileCookieJar, MozillaCookieJar and LWPCookieJar. They work as follows (a small inspection sketch follows the list):

  1. CookieJar: manages HTTP cookie values, stores cookies generated by HTTP requests, and adds cookies to outgoing HTTP requests. The whole cookie store lives in memory, and the cookies are lost once the CookieJar instance is garbage-collected.
  2. FileCookieJar(filename, delayload=None, policy=None): derived from CookieJar; creates a FileCookieJar instance that retrieves cookie information and stores cookies in a file. filename is the name of the file used to store cookies. When delayload is True, file access is deferred: the file is only read, or data stored in it, when needed.
  3. MozillaCookieJar(filename, delayload=None, policy=None): derived from FileCookieJar; creates a FileCookieJar instance compatible with the Mozilla browser's cookies.txt format.
  4. LWPCookieJar(filename, delayload=None, policy=None): derived from FileCookieJar; creates a FileCookieJar instance compatible with the libwww-perl standard Set-Cookie3 file format.
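
A minimal sketch of what ends up inside a plain CookieJar after a request; httpbin.org is used only because its /cookies/set endpoint sets a cookie and redirects:

from urllib import request
from http.cookiejar import CookieJar

cookiejar = CookieJar()
opener = request.build_opener(request.HTTPCookieProcessor(cookiejar))

# the endpoint sets the given cookie via Set-Cookie; the jar captures it automatically
opener.open('http://httpbin.org/cookies/set?demo=hello')
for cookie in cookiejar:
    print(cookie.name, '=', cookie.value)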

Logging in to Renren:

Use http.cookiejar and request.HTTPCookieProcessor to log in to Renren. The example code is as follows:

from urllib import request,parse
from http.cookiejar import CookieJar

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'
}

def get_opener():
    cookiejar = CookieJar()
    handler = request.HTTPCookieProcessor(cookiejar)
    opener = request.build_opener(handler)
    return opener

def login_renren(opener):
    data = {"email": "970138074@qq.com", "password": "pythonspider"}
    data = parse.urlencode(data).encode('utf-8')
    login_url = "http://www.renren.com/PLogin.do"
    req = request.Request(login_url, headers=headers, data=data)
    opener.open(req)

def visit_profile(opener):
    url = 'http://www.renren.com/880151247/profile'
    req = request.Request(url,headers=headers)
    resp = opener.open(req)
    with open('renren.html','w',encoding='utf-8') as fp:
        fp.write(resp.read().decode("utf-8"))

if __name__ == '__main__':
    opener = get_opener()
    login_renren(opener)
    visit_profile(opener)

Trying to log in to Zhaopin, I found its anti-crawler setup really obnoxious: the login actually uses the GET method! And besides the account and password, the login carries other data as well!

from urllib import request
from urllib import parse
from http.cookiejar import CookieJar
# 1. Log in
# 1.1 Create a CookieJar object
cookiejar=CookieJar()
# 1.2 Use the CookieJar to create an HTTPCookieProcessor object
handle=request.HTTPCookieProcessor(cookiejar)
# 1.3 Use the handler created in the previous step to create an opener
opener=request.build_opener(handle)
# 1.4 Use the opener to send the login request
headers={
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36'
    ,'Referer':'https://passport.zhaopin.com/login?bkUrl=%2F%2Fi.zhaopin.com%2Fblank%3Fhttps%3A%2F%2Fwww.zhaopin.com%2F'
    #,'Cookie':'adfbid2=0; x-zp-client-id=2f4bfe08-df19-497e-8593-c152bd3f68ee; sts_deviceid=177c263d08d908-05a862904cc5cd-73e356b-1327104-177c263d08e6c4; ssxmod_itna2=iqmx0D9Qiti=T4BcDeTm+AGODBDUEhG+qOhd4xA6nh3D/BfDFrkGTUpRPApKOqOCBEr9t4RAiGyrGWlQjBr4zbx7QqUDjKD2QYD=; urlfrom2=121113803; urlfrom=121113803; adfbid=0; sts_sg=1; sts_chnlsid=121113803; zp_src_url=https%3A%2F%2Fwww.baidu.com%2Fother.php%3Fsc.K60000K_AV0UxOsHA1WeDFNeMVr-UPsiBJtLez6Uq9_55OWQGkRJ0F1QOkT_glOmywiCeCCv6iCFeEWfZsrVyhIgnQ070ALwkiwPMlvtWM2fvBvMQ-Z9iVPKpntzTR8Jp2CPVJUhXGrvtprUTDxl4iUX2nPr-tNIiYxvgBYiNl8NDb_igy9xD7ovu1GscKVaUkYuJU9aCXpngPEXRZxDUGpQO1Vg.7Y_NR2Ar5Od669BCXgjRzeASFDZtwhUVHf632MRRt_Q_DNKnLeMX5Dkgbooo3eQr5gKPwmJCRnTxOoKKsTZK4TPHQ_U3bIt7jHzk8sHfGmEukmnTr59l32AM-YG8x6Y_f3lZgKfYt_QCJamJjArZZsqT7jHzs_lTUQqRHArZ5Xq-dKl-muCyrMWYv0.TLFWgv-b5HDkrfK1ThPGujYknHb0THY0IAYqd_xKJVgfko60IgP-T-qYXgK-5H00mywxIZ-suHY10ZIEThfqd_xKJVgfko60ThPv5HD0IgF_gv-b5HDdnWf4Pj6Ln1n0UgNxpyfqnHfzPHfYnWf0UNqGujYknjbsnjnLnsKVIZK_gv-b5HDznWT10ZKvgv-b5H00pywW5R9rf6KWThnqn10snWf%26ck%3D5127.1.105.460.154.445.166.270%26dt%3D1617770621%26wd%3D%25E6%2599%25BA%25E8%2581%2594%25E6%258B%259B%25E8%2581%2598%26tpl%3Dtpl_12273_24677_20875%26l%3D1524948733%26us%3DlinkName%253D%2525E6%2525A0%252587%2525E9%2525A2%252598-%2525E4%2525B8%2525BB%2525E6%2525A0%252587%2525E9%2525A2%252598%2526linkText%253D%2525E3%252580%252590%2525E6%252599%2525BA%2525E8%252581%252594%2525E6%25258B%25259B%2525E8%252581%252598%2525E3%252580%252591%2525E5%2525AE%252598%2525E6%252596%2525B9%2525E7%2525BD%252591%2525E7%2525AB%252599%252520%2525E2%252580%252593%252520%2525E5%2525A5%2525BD%2525E5%2525B7%2525A5%2525E4%2525BD%25259C%2525EF%2525BC%25258C%2525E4%2525B8%25258A%2525E6%252599%2525BA%2525E8%252581%252594%2525E6%25258B%25259B%2525E8%252581%252598%2525EF%2525BC%252581%2526linkType%253D; x_passport_sid=541d92b7se2e314fa8b1a827015f4a060547; ZP_OLD_FLAG=false; Hm_lvt_38ba284938d5eddca645bb5e02a02006=1617770545; LastCity=%E6%9D%AD%E5%B7%9E; LastCity%5Fid=653; ZPCITIESCLICKED=|653; ssxmod_itna=YqRx2DB7eQqrD7DzxA2Y=DkQGklKKK33G8Y00DBuGW4iNDnD8x7YDvmmEiIKGKYxxxvWxOGhewQEimhRfmdgmhr3YFrxB3DEx0=KqmGi4GGjxBYDQxAYDGDDPDocPD1D3qDkXxYPGW8qDbDiWkvxGCDeKD0xuFDQKDucKainIGkOA1tb5q48Gx6xG1i40Hmi8p3FSEinH3vx0k040OBOHkugYDUY2DNgLeZQFaqn2NeGDNqsiKiB0xWYhbK7x4/Ax3TCxe4D; ZL_REPORT_GLOBAL={%22//www%22:{%22seid%22:%224e39069e2fd04bd1886d920b2eb88eed%22%2C%22actionid%22:%225251163e-5979-4574-9670-222ac9a8a386-cityPage%22%2C%22funczone%22:%22city_hotjd%22}%2C%22jobs%22:{%22funczoneShare%22:%22dtl_best_for_you%22%2C%22recommandActionidShare%22:%221cae3ff7-762e-4708-b814-fc2d18caa64b-job%22}}; acw_tc=2760826e16177743233927099ef7353cfc99220ee9b0e1e5f11eb6671049f4; sts_sid=178aadb12ca86f-0f2166208f6b9a-c3f3568-1327104-178aadb12cb80f; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%22177b9dd977f6f-020a8969613f07-53e3566-1327104-177b9dd9780b3b%22%2C%22first_id%22%3A%22%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E4%BB%98%E8%B4%B9%E5%B9%BF%E5%91%8A%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_utm_source%22%3A%22baidupcpz%22%2C%22%24latest_utm_medium%22%3A%22cpt%22%2C%22%24latest_utm_campaign%22%3A%22jl%22%2C%22%24latest_utm_content%22%3A%22tj%22%2C%22%24latest_utm_term%22%3A%2228757838%22%7D%2C%22%24device_id%22%3A%22177b9dd977f6f-020a8969613f07-53e3566-1327104-177b9dd9780b3b%22%7D; _uab_collina=161777469306311313996911; SL_GWPT_Show_Hide_tmp=1; 
SL_wptGlobTipTmp=1; 1420ba6bb40c9512e9642a1f8c243891=2720dea3-a6c1-4af6-ac23-cc817805a4d2; sts_evtseq=7; Hm_lpvt_38ba284938d5eddca645bb5e02a02006=1617774702; zp_passport_deepknow_sessionId=541d92b7se2e314fa8b1a827015f4a060547'
    }

url='https://passport.zhaopin.com/v4/account/login'
data={
    'passport':'1223',
    'password':'11122'
}
req=request.Request(url=url,headers=headers,data=parse.urlencode(data).encode('utf-8'),method='GET')
reqs=opener.open(req)
print(reqs.read().decode('utf-8'))

Saving cookies to a local file:

To save cookies to a local file, use the cookiejar's save method, and a file name needs to be specified:

from urllib import request
from http.cookiejar import MozillaCookieJar

cookiejar = MozillaCookieJar("cookie.txt")
handler = request.HTTPCookieProcessor(cookiejar)
opener = request.build_opener(handler)

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'
}
req = request.Request('http://httpbin.org/cookies',headers=headers)

resp = opener.open(req)
print(resp.read())
cookiejar.save(ignore_discard=True,ignore_expires=True)

Loading cookies from a local file:

To load cookies from a local file, use the cookiejar's load method, and the file name likewise needs to be specified:

from urllib import request
from http.cookiejar import MozillaCookieJar

cookiejar = MozillaCookieJar("cookie.txt")
cookiejar.load(ignore_expires=True,ignore_discard=True)
handler = request.HTTPCookieProcessor(cookiejar)
opener = request.build_opener(handler)

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'
}
req = request.Request('http://httpbin.org/cookies',headers=headers)

resp = opener.open(req)
print(resp.read())


The requests library

Although Python's standard urllib module already covers most of the functionality we usually need, its API is not very pleasant to use. Requests advertises itself as "HTTP for Humans", meaning it is simpler and more convenient to use.

Sending GET requests:

  1. The simplest way to send a GET request is to call requests.get:

    response = requests.get("http://www.baidu.com/")
    
  2. Adding headers and query parameters:
    If you want to add headers, pass the headers argument to add header information to the request. To pass parameters in the URL, use the params argument. Example code:

     import requests
    
     kw = {'wd':'中國'}
    
     headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"}
    
     # params accepts a dict or a string of query parameters; a dict is converted to URL encoding automatically, no urlencode() needed
     response = requests.get("http://www.baidu.com/s", params = kw, headers = headers)
    
     # view the response body; response.text returns it decoded as Unicode text
     print(response.text)
    
     # view the response body; response.content returns the raw byte stream
     print(response.content)
    
     # the full URL that was actually requested
     print(response.url)
    
     # the character encoding used for the response
     print(response.encoding)
    
     # if the response body is JSON, response.json() parses it straight into a dict
     # response.json()   # not called here, since this response is HTML
    
     # the response status code
     print(response.status_code)

Sending POST requests:

  1. The most basic POST request can be sent with the post method:

    response = requests.post("http://www.baidu.com/",data=data)
    
  2. Passing data:
    In this case there is no need to use urlencode for encoding; just pass in a dict directly. For example, the code that requests data from Lagou:

     import requests
    
     url = "https://www.lagou.com/jobs/positionAjax.json?city=%E6%B7%B1%E5%9C%B3&needAddtionalResult=false&isSchoolJob=0"
    
     headers = {
         'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36',
         'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput='
     }
    
     data = {
         'first': 'true',
         'pn': 1,
         'kd': 'python'
     }
    
     resp = requests.post(url,headers=headers,data=data)
     # if the response is JSON data, you can call the json method directly
     print(resp.json())
    

Using a proxy:

Adding a proxy with requests is also very simple: just pass the proxies argument to the request method (get, post, etc.). Example code:

import requests

url = "http://httpbin.org/get"

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36',
}

proxy = {
    'http': '171.14.209.180:27829'
}

resp = requests.get(url,headers=headers,proxies=proxy)
with open('xx.html','w',encoding='utf-8') as fp:
    fp.write(resp.text)

Cookies:

If a response contains cookies, you can use the cookies attribute to get the returned cookie values:

import requests

url = "http://www.renren.com/PLogin.do"
data = {"email":"970138074@qq.com",'password':"pythonspider"}
resp = requests.get('http://www.baidu.com/')
print(resp.cookies)
# 以字典形式返回
print(resp.cookies.get_dict())

Sessions:

Earlier, with the urllib library, we could use an opener to send multiple requests that share cookies. To achieve the same cookie sharing with requests, we can use the session object that the requests library provides. Note that this session is not the session from web development; here it is just a conversation object. Again taking the Renren login as the example, implemented with requests:

import requests

url = "http://www.renren.com/PLogin.do"
data = {"email":"970138074@qq.com",'password':"pythonspider"}
headers = {
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36"
}

# log in
session = requests.session()
session.post(url,data=data,headers=headers)

# visit Dapeng's profile page
resp = session.get('http://www.renren.com/880151247/profile')

print(resp.text)

Handling untrusted SSL certificates:

For websites whose SSL certificates are already trusted, such as https://www.baidu.com/, requests returns the response normally without any extra work. For a site whose certificate is not trusted (the old 12306 site used to be the classic example), pass verify=False to skip certificate verification:

resp = requests.get('http://www.12306.cn/mormhweb/',verify=False)
print(resp.content.decode('utf-8'))
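
Two related options worth knowing, sketched here with placeholder URLs and paths: verify=False makes urllib3 (the library requests is built on) emit an InsecureRequestWarning that can be silenced, and verify can alternatively point at a local CA bundle file:

import requests
import urllib3

# verify=False triggers an InsecureRequestWarning on every request; it can be silenced
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
resp = requests.get('https://self-signed.badssl.com/', verify=False)  # placeholder test site
print(resp.status_code)

# alternatively, verify can point at a local CA bundle instead of True/False (placeholder path)
# resp = requests.get('https://example.com/', verify='/path/to/ca-bundle.pem')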

Note: I put together the study material above from a course I paid for on NetEase Cloud Classroom; the purpose is just to keep notes so I don't forget. Contact me for removal if anything infringes!

 

