A detailed guide to using the requests library in Python
Official documentation
Installation guide: http://docs.python-requests.org/en/latest/user/install.html#install
Quickstart guide: http://docs.python-requests.org/en/latest/user/quickstart.html
Advanced usage guide: http://docs.python-requests.org/en/latest/user/advanced.html#advanced
What is Requests?
Requests is an HTTP library written in Python, built on urllib and released under the Apache2 License. It is far more convenient than urllib, saves a great deal of work, and fully covers the needs of HTTP testing.
In one sentence: a simple, easy-to-use HTTP library implemented in Python.
Installing the requests library
Open a command prompt (Win+R, then cmd) and run:
pip install requests
Then import it in your project with: import requests
The various request methods
Straight to the code; if anything is unclear, see my post on the basics of urllib.
import requests
requests.get('http://httpbin.org/get')
requests.post('http://httpbin.org/post')
requests.put('http://httpbin.org/put')
requests.delete('http://httpbin.org/delete')
requests.head('http://httpbin.org/get')
requests.options('http://httpbin.org/get')
So what do all these request methods mean? A quick lookup gives:
GET: requests the specified page and returns the entity body.
HEAD: requests only the headers of the page.
POST: asks the server to accept the enclosed document as a new subordinate of the resource identified by the URI.
PUT: replaces the content of the specified document with data sent from the client.
DELETE: asks the server to delete the specified page.
GET and POST are the most common. A GET request encodes the submitted data into the URL's query string, while a POST carries it in the request body.
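This difference can be seen without touching the network by preparing (but not sending) two requests that carry the same data; httpbin.org here is just the example host used throughout this article:

```python
import requests

# Build, but do not send, a GET and a POST carrying the same data,
# to see where each method places it.
get_req = requests.Request('GET', 'http://httpbin.org/get',
                           params={'name': 'germey'}).prepare()
post_req = requests.Request('POST', 'http://httpbin.org/post',
                            data={'name': 'germey'}).prepare()

print(get_req.url)    # the data is encoded into the URL query string
print(post_req.body)  # the data travels in the request body
print(get_req.body)   # a plain GET has no body (None)
```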
Basic GET request
import requests
response = requests.get('http://httpbin.org/get')
print(response.text)
Output:
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"Connection": "close",
"Host": "httpbin.org",
"User-Agent": "python-requests/2.18.4"
},
"origin": "183.64.61.29",
"url": "http://httpbin.org/get"
}
Fetching binary data
Just remember to use the response's .content attribute:
import requests
response = requests.get("https://github.com/favicon.ico")
print(type(response.text), type(response.content))
print(response.text)
print(response.content)
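As a small sketch of the point above: .text is a str decoded from the raw bytes in .content, so binary payloads such as images must be written to disk from .content in binary mode. The helper name below is my own, not part of requests:

```python
import requests

def save_response_body(response, path):
    """Write the raw bytes of a requests.Response to disk."""
    with open(path, 'wb') as f:        # binary mode: .content is bytes
        f.write(response.content)
    return len(response.content)

# Example (requires network):
# n = save_response_body(requests.get("https://github.com/favicon.ico"),
#                        "favicon.ico")
```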
GET request with parameters
Pass name and age in the query string:
import requests
response = requests.get("http://httpbin.org/get?name=germey&age=22")
print(response.text)
{
"args": {
"age": "22",
"name": "germey"
},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"Connection": "close",
"Host": "httpbin.org",
"User-Agent": "python-requests/2.18.4"
},
"origin": "183.64.61.29",
"url": "http://httpbin.org/get?name=germey&age=22"
}
Or use the params argument:
import requests
data = {
'name': 'germey',
'age': 22
}
response = requests.get("http://httpbin.org/get", params=data)
print(response.text)
The output is the same as before, so it is not shown again.
Parsing JSON
Display the response as JSON:
import requests
import json
response = requests.get("http://httpbin.org/get")
print(type(response.text))
print(response.json())
print(json.loads(response.text))
print(type(response.json()))
Output:
<class 'str'>
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.18.4'}, 'origin': '183.64.61.29', 'url': 'http://httpbin.org/get'}
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.18.4'}, 'origin': '183.64.61.29', 'url': 'http://httpbin.org/get'}
<class 'dict'>
Adding headers
Some sites insist on browser-like request headers; without passing headers, the request fails, as below:
import requests
response = requests.get("https://www.zhihu.com/explore")
print(response.text)
Output:
<html><body><h1>500 Server Error</h1>
An internal server error occured.
</body></html>
When headers are passed in:
import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
}
response = requests.get("https://www.zhihu.com/explore", headers=headers)
print(response.text)
This time the page source is returned successfully (not shown).
Basic POST request
(See my urllib guide if anything here is unclear.)
import requests
data = {'name': 'germey', 'age': '22'}
response = requests.post("http://httpbin.org/post", data=data)
print(response.text)
Output:
{
"args": {},
"data": "",
"files": {},
"form": {
"age": "22",
"name": "germey"
},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"Connection": "close",
"Content-Length": "18",
"Content-Type": "application/x-www-form-urlencoded",
"Host": "httpbin.org",
"User-Agent": "python-requests/2.18.4"
},
"json": null,
"origin": "183.64.61.29",
"url": "http://httpbin.org/post"
}
The response
Response attributes
import requests
response = requests.get('http://www.jianshu.com')
print(type(response.status_code), response.status_code)
print(type(response.headers), response.headers)
print(type(response.cookies), response.cookies)
print(type(response.url), response.url)
print(type(response.history), response.history)
Output:
<class 'int'> 200
<class 'requests.structures.CaseInsensitiveDict'> {'Date': 'Thu, 01 Feb 2018 20:47:08 GMT', 'Server': 'Tengine', 'Content-Type': 'text/html; charset=utf-8', 'Transfer-Encoding': 'chunked', 'X-Frame-Options': 'DENY', 'X-XSS-Protection': '1; mode=block', 'X-Content-Type-Options': 'nosniff', 'ETag': 'W/"9f70e869e7cce214b6e9d90f4ceaa53d"', 'Cache-Control': 'max-age=0, private, must-revalidate', 'Set-Cookie': 'locale=zh-CN; path=/', 'X-Request-Id': '366f4cba-8414-4841-bfe2-792aeb8cf302', 'X-Runtime': '0.008350', 'Content-Encoding': 'gzip', 'X-Via': '1.1 gjf22:8 (Cdn Cache Server V2.0), 1.1 PSzqstdx2ps251:10 (Cdn Cache Server V2.0)', 'Connection': 'keep-alive'}
<class 'requests.cookies.RequestsCookieJar'> <RequestsCookieJar[<Cookie locale=zh-CN for www.jianshu.com/>]>
<class 'str'> https://www.jianshu.com/
<class 'list'> [<Response [301]>]
Checking status codes. The common status codes (this is the name-to-code mapping that requests itself ships in requests.status_codes):
100: ('continue',),
101: ('switching_protocols',),
102: ('processing',),
103: ('checkpoint',),
122: ('uri_too_long', 'request_uri_too_long'),
200: ('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\\o/', '✓'),
201: ('created',),
202: ('accepted',),
203: ('non_authoritative_info', 'non_authoritative_information'),
204: ('no_content',),
205: ('reset_content', 'reset'),
206: ('partial_content', 'partial'),
207: ('multi_status', 'multiple_status', 'multi_stati', 'multiple_stati'),
208: ('already_reported',),
226: ('im_used',),
# Redirection.
300: ('multiple_choices',),
301: ('moved_permanently', 'moved', '\\o-'),
302: ('found',),
303: ('see_other', 'other'),
304: ('not_modified',),
305: ('use_proxy',),
306: ('switch_proxy',),
307: ('temporary_redirect', 'temporary_moved', 'temporary'),
308: ('permanent_redirect','resume_incomplete', 'resume',), # These 2 to be removed in 3.0
# Client Error.
400: ('bad_request', 'bad'),
401: ('unauthorized',),
402: ('payment_required', 'payment'),
403: ('forbidden',),
404: ('not_found', '-o-'),
405: ('method_not_allowed', 'not_allowed'),
406: ('not_acceptable',),
407: ('proxy_authentication_required', 'proxy_auth', 'proxy_authentication'),
408: ('request_timeout', 'timeout'),
409: ('conflict',),
410: ('gone',),
411: ('length_required',),
412: ('precondition_failed', 'precondition'),
413: ('request_entity_too_large',),
414: ('request_uri_too_large',),
415: ('unsupported_media_type', 'unsupported_media', 'media_type'),
416: ('requested_range_not_satisfiable', 'requested_range', 'range_not_satisfiable'),
417: ('expectation_failed',),
418: ('im_a_teapot', 'teapot', 'i_am_a_teapot'),
421: ('misdirected_request',),
422: ('unprocessable_entity', 'unprocessable'),
423: ('locked',),
424: ('failed_dependency', 'dependency'),
425: ('unordered_collection', 'unordered'),
426: ('upgrade_required', 'upgrade'),
428: ('precondition_required', 'precondition'),
429: ('too_many_requests', 'too_many'),
431: ('header_fields_too_large', 'fields_too_large'),
444: ('no_response', 'none'),
449: ('retry_with', 'retry'),
450: ('blocked_by_windows_parental_controls', 'parental_controls'),
451: ('unavailable_for_legal_reasons', 'legal_reasons'),
499: ('client_closed_request',),
# Server Error.
500: ('internal_server_error', 'server_error', '/o\\', '✗'),
501: ('not_implemented',),
502: ('bad_gateway',),
503: ('service_unavailable', 'unavailable'),
504: ('gateway_timeout',),
505: ('http_version_not_supported', 'http_version'),
506: ('variant_also_negotiates',),
507: ('insufficient_storage',),
509: ('bandwidth_limit_exceeded', 'bandwidth'),
510: ('not_extended',),
511: ('network_authentication_required', 'network_auth', 'network_authentication'),
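The mapping above is exposed by the library as requests.codes, so status checks can be written by name rather than by bare number:

```python
import requests

# Each name in the table maps to its numeric status code.
print(requests.codes.ok)         # 200
print(requests.codes.not_found)  # 404
print(requests.codes.teapot)     # 418

# A typical check after a request (sketch, requires network):
# response = requests.get('http://www.jianshu.com')
# if response.status_code == requests.codes.ok:
#     print('Request succeeded')
```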
Advanced usage
File upload
With requests, uploading a file is just as simple; the file type is handled automatically:
Example:
import requests
files = {'file': open('cookie.txt', 'rb')}
response = requests.post("http://httpbin.org/post", files=files)
print(response.text)
This test was run against httpbin; the output is:
{
"args": {},
"data": "",
"files": {
"file": "#LWP-Cookies-2.0\r\nSet-Cookie3: BAIDUID=\"D2B4E137DE67E271D87F03A8A15DC459:FG=1\"; path=\"/\"; domain=\".baidu.com\"; path_spec; domain_dot; expires=\"2086-02-13 11:15:12Z\"; version=0\r\nSet-Cookie3: BIDUPSID=D2B4E137DE67E271D87F03A8A15DC459; path=\"/\"; domain=\".baidu.com\"; path_spec; domain_dot; expires=\"2086-02-13 11:15:12Z\"; version=0\r\nSet-Cookie3: H_PS_PSSID=25641_1465_21087_17001_22159; path=\"/\"; domain=\".baidu.com\"; path_spec; domain_dot; discard; version=0\r\nSet-Cookie3: PSTM=1516953672; path=\"/\"; domain=\".baidu.com\"; path_spec; domain_dot; expires=\"2086-02-13 11:15:12Z\"; version=0\r\nSet-Cookie3: BDSVRTM=0; path=\"/\"; domain=\"www.baidu.com\"; path_spec; discard; version=0\r\nSet-Cookie3: BD_HOME=0; path=\"/\"; domain=\"www.baidu.com\"; path_spec; discard; version=0\r\n"
},
"form": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"Connection": "close",
"Content-Length": "909",
"Content-Type": "multipart/form-data; boundary=84835f570cfa44da8f4a062b097cad49",
"Host": "httpbin.org",
"User-Agent": "python-requests/2.18.4"
},
"json": null,
"origin": "183.64.61.29",
"url": "http://httpbin.org/post"
}
Getting cookies
When you need cookies, just read response.cookies (where response is the value returned by the request):
import requests
response = requests.get("https://www.baidu.com")
print(response.cookies)
for key, value in response.cookies.items():
    print(key + '=' + value)
Output:
<RequestsCookieJar[<Cookie BDORZ=27315 for .baidu.com/>]>
BDORZ=27315
Session persistence and simulated login
If a response contains cookies, you can access them quickly:
import requests
r = requests.get('http://www.google.com.hk/')
print(r.cookies['NID'])
print(tuple(r.cookies))
To send your own cookies to the server, use the cookies parameter:
import requests
url = 'http://httpbin.org/cookies'
cookies = {'testCookies_1': 'Hello_Python3', 'testCookies_2': 'Hello_Requests'}
# Cookie Version 0 forbids special characters in cookie values: spaces, square brackets, parentheses, equals signs, commas, double quotes, slashes, question marks, @, colons, semicolons, and so on.
r = requests.get(url, cookies=cookies)
print(r.json())
Certificate verification (SSL Cert Verification)
# Certificate verification (most sites use HTTPS)
import requests
response = requests.get('https://www.12306.cn')  # for an SSL request the certificate is checked first; an invalid certificate raises an error and aborts the program
# Improvement 1: suppress the error (a warning is still printed)
import requests
response = requests.get('https://www.12306.cn', verify=False)  # skip certificate verification: warns, but returns 200
print(response.status_code)
# Improvement 2: suppress the error and silence the warning
import requests
from requests.packages import urllib3
urllib3.disable_warnings()  # silence the warning
response = requests.get('https://www.12306.cn', verify=False)
print(response.status_code)
# Improvement 3: supply a certificate
# Many sites are HTTPS but are accessible without a client certificate; in most cases carrying one is optional
# Zhihu, Baidu and the like work either way
# Some sites make it mandatory: e.g. only designated users who have been issued a certificate may access them
import requests
response = requests.get('https://www.12306.cn',
                        cert=('/path/server.crt',
                              '/path/key'))
print(response.status_code)
Authentication
# Official link: http://docs.python-requests.org/en/master/user/authentication/
# Authentication: some sites pop up a dialog asking for a username and password (much like alert()); the HTML cannot be fetched until you authenticate
# Under the hood it is simply assembled into a request header:
# r.headers['Authorization'] = _basic_auth_str(self.username, self.password)
# Most sites do not use the default scheme; they roll their own
# In that case, write your own counterpart of _basic_auth_str following the site's scheme,
# then add the resulting string to the request headers:
# r.headers['Authorization'] = func('.....')
# Here is the default basic-auth approach (which most sites will not use):
import requests
from requests.auth import HTTPBasicAuth
r = requests.get('xxx', auth=HTTPBasicAuth('user', 'password'))
print(r.status_code)
# HTTPBasicAuth can be shortened to:
import requests
r = requests.get('xxx', auth=('user', 'password'))
print(r.status_code)
Proxy settings
When crawling, the server may block you; the usual countermeasures are slowing down the request rate or going through a proxy IP, as follows:
import requests
proxies = {
    "http": "http://127.0.0.1:9743",
    "https": "https://127.0.0.1:9743",
}
response = requests.get("https://www.taobao.com", proxies=proxies)
print(response.status_code)
Proxy IPs can be scraped from the web or bought online (e.g. on Taobao).
If the proxy requires a username and password, simply change the dict to:
proxies = {
    "http": "http://user:password@127.0.0.1:9999"
}
If your proxy speaks SOCKS, you need
pip install "requests[socks]"
proxies = {
    "http": "socks5://127.0.0.1:9999",
    "https": "socks5://127.0.0.1:8888"
}
Timeout settings
Requests to some sites may time out; setting a timeout handles this:
import requests
from requests.exceptions import ReadTimeout
try:
    response = requests.get("http://httpbin.org/get", timeout=0.5)
    print(response.status_code)
except ReadTimeout:
    print('Timeout')
On a normal request this prints status code 200.
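Note that ReadTimeout only covers timeouts while reading the response; a timeout while establishing the connection raises ConnectTimeout instead. A hedged sketch covering both (the timeout values are arbitrary examples):

```python
import requests
from requests.exceptions import ConnectTimeout, ReadTimeout, Timeout

def fetch_with_timeout(url):
    # timeout may also be a (connect, read) tuple: the first value bounds
    # the TCP handshake, the second bounds each wait for response data.
    try:
        return requests.get(url, timeout=(3.05, 27)).status_code
    except (ConnectTimeout, ReadTimeout):
        return 'Timeout'

# fetch_with_timeout('http://httpbin.org/get')  # requires network
```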
Authentication
For sites that require authentication, use the requests.auth module:
import requests
from requests.auth import HTTPBasicAuth
response = requests.get("http://120.27.34.24:9001/",auth=HTTPBasicAuth("user","123"))
print(response.status_code)
There is also a shorthand:
import requests
response = requests.get("http://120.27.34.24:9001/",auth=("user","123"))
print(response.status_code)
Exception handling
On a network problem (e.g. DNS failure or a refused connection), requests raises a ConnectionError.
On a rare invalid HTTP response, requests raises an HTTPError.
If a request times out, a Timeout exception is raised.
If a request exceeds the configured maximum number of redirects, a TooManyRedirects exception is raised.
All exceptions that requests explicitly raises inherit from requests.exceptions.RequestException.
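Because everything inherits from RequestException, the specific cases can be handled first, with a single catch-all clause at the end. A minimal sketch (the commented-out URL is only illustrative):

```python
import requests
from requests.exceptions import (ConnectionError, HTTPError, Timeout,
                                 TooManyRedirects, RequestException)

def fetch(url):
    try:
        response = requests.get(url, timeout=1)
        response.raise_for_status()  # turns 4xx/5xx responses into HTTPError
        return response.text
    except Timeout:
        return 'timed out'
    except ConnectionError:
        return 'connection failed'
    except RequestException as e:
        return 'request failed: %s' % e

# fetch('http://httpbin.org/get')  # page body, or one of the error strings
```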
Uploading files
import requests
files = {'file': open('a.jpg', 'rb')}
response = requests.post('http://httpbin.org/post', files=files)
print(response.status_code)
GET request with parameters -> headers
We usually need to send request headers along with a request; they are the key to disguising ourselves as a browser. The common useful ones are:
Host
Referer  # large sites often use this to judge where a request came from
User-Agent  # identifies the client
Cookie  # cookies travel in the request headers, but requests has a dedicated parameter for them, so do not put them inside headers={}
Adding headers (servers inspect request headers and may refuse access without them, e.g. when visiting https://www.zhihu.com/explore)
import requests
response = requests.get('https://www.zhihu.com/explore')
response.status_code  # 500
# custom headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.76 Mobile Safari/537.36',
}
response = requests.get('https://www.zhihu.com/explore',
                        headers=headers)
print(response.status_code)  # 200
GET request with parameters -> cookies
import requests
cookies = {'user_session': 'rzNme4L6LTH7QSresq8w0BVYhTNt5GS-asNnkOe7_FZ2CjB6',
}
response = requests.get('https://github.com/settings/emails',
                        cookies=cookies)  # GitHub does not restrict request headers much, so no custom User-Agent is needed; other sites may require one
print('306334678@qq.com' in response.text)  # True
Sending a POST request: simulating a browser login
For a login, you should enter a wrong username or password on purpose and then analyse the captured traffic. Think about it: with correct credentials the browser redirects straight away, and you will never find the request no matter how long you look.
Automatically logging in to GitHub (handling cookies ourselves)
'''
1. Analysing the target site
   Open https://github.com/login in a browser,
   enter a wrong username and password, and capture the traffic.
   The login turns out to be a POST to https://github.com/session
   whose request headers carry a cookie
   and whose request body contains:
   commit:Sign in
   utf8:✓
   authenticity_token:lbI8IJCwGslZS8qJPnof5e7ZkCoSoMn6jmDTsL1r/m06NLyIbw7vCrpwrFAPzHMep3Tmf/TSJVoXWrvDZaVwxQ==
   login:egonlin
   password:123
2. The flow
   First GET https://github.com/login to obtain the initial cookie and the authenticity_token.
   Then POST to https://github.com/session with the initial cookie and the request body (authenticity_token, username, password, ...).
   Finally we receive the logged-in cookie.
   Note: if the password were sent in encrypted form, enter a wrong username with the correct password, then grab the encrypted value from the browser; GitHub happens to send the password in plaintext.
'''
import requests
import re
# First request
r1 = requests.get('https://github.com/login')
r1_cookie = r1.cookies.get_dict()  # grab the initial (not yet authorised) cookie
authenticity_token = re.findall(r'name="authenticity_token".*?value="(.*?)"', r1.text)[0]  # extract the CSRF token from the page
# Second request: POST to the login endpoint with the initial cookie, the token, and the credentials
data={
'commit':'Sign in',
'utf8':'✓',
'authenticity_token':authenticity_token,
'login':'317828332@qq.com',
'password':'alex3714'
}
r2=requests.post('https://github.com/session',
data=data,
cookies=r1_cookie
)
login_cookie=r2.cookies.get_dict()
# Third request: from now on login_cookie is all we need, e.g. to visit personal settings pages
r3=requests.get('https://github.com/settings/emails',
cookies=login_cookie)
print('317828332@qq.com' in r3.text) #True
requests.session() saves the cookie state for us automatically
import requests
import re
session=requests.session()
# First request
r1 = session.get('https://github.com/login')
authenticity_token = re.findall(r'name="authenticity_token".*?value="(.*?)"', r1.text)[0]  # extract the CSRF token from the page
# Second request
data={
'commit':'Sign in',
'utf8':'✓',
'authenticity_token':authenticity_token,
'login':'317828332@qq.com',
'password':'alex3714'
}
r2=session.post('https://github.com/session',
data=data,
)
# Third request
r3=session.get('https://github.com/settings/emails')
print('317828332@qq.com' in r3.text) #True
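What a Session actually merges into each request can be inspected offline with prepare_request; the header and cookie values below are arbitrary examples:

```python
import requests

# A Session carries cookies and default headers across requests.
session = requests.Session()
session.headers.update({'User-Agent': 'my-crawler/1.0'})  # example value
session.cookies.set('locale', 'zh-CN')                    # example value

# prepare_request merges the session's state into a request
# without sending it, so the effect is visible offline.
prepared = session.prepare_request(
    requests.Request('GET', 'http://httpbin.org/get'))
print(prepared.headers['User-Agent'])  # my-crawler/1.0
print(prepared.headers['Cookie'])      # locale=zh-CN
```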
Supplement: JSON
How the json parameter works
requests.post(url='xxxxxxxx',
              data={'xxx': 'yyy'})  # no header specified; the default is application/x-www-form-urlencoded
# If we set a custom application/json header but pass the values with data=, the server cannot read them
requests.post(url='',
              data={'': 1,},
              headers={
                  'content-type': 'application/json'
              })
requests.post(url='',
              json={'': 1,},
              )  # default header: application/json
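The difference between data= and json= can be verified offline by preparing the requests and inspecting the headers and body they would send:

```python
import requests

# data= is form-encoded; json= serializes the dict and sets the
# Content-Type header to application/json automatically.
form_req = requests.Request('POST', 'http://httpbin.org/post',
                            data={'name': 'germey'}).prepare()
json_req = requests.Request('POST', 'http://httpbin.org/post',
                            json={'name': 'germey'}).prepare()

print(form_req.headers['Content-Type'])  # application/x-www-form-urlencoded
print(json_req.headers['Content-Type'])  # application/json
print(form_req.body)                     # name=germey

body = json_req.body
if isinstance(body, bytes):              # newer requests returns bytes here
    body = body.decode('utf-8')
print(body)                              # {"name": "germey"}
```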