Python Crawler 04: A Tieba Spider and the Difference Between GET and POST



1. The Composition of a URL


Chinese characters in a URL are percent-encoded with URL encode (UTF-8); everything inside the encoded form is just bytes.

If you copy and paste a URL like the one below, what you get back is not the Chinese characters but their encoded bytes:
https://www.baidu.com/s?wd=編程吧

We can also do this conversion in Python with urllib.parse.urlencode:

import urllib.parse

url = "http://www.baidu.com/s?"          # base search URL
wd = {"wd": "编程吧"}
out = urllib.parse.urlencode(wd)         # percent-encodes the dict into a query string
print(out)

The result is: wd=%E7%BC%96%E7%A8%8B%E5%90%A7
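
Going the other way works too. Below is a minimal sketch of decoding a percent-encoded query string back into readable text with urllib.parse.unquote and urllib.parse.parse_qs (the sample string is the output shown above):

import urllib.parse

encoded = "wd=%E7%BC%96%E7%A8%8B%E5%90%A7"

# unquote() turns the percent-escapes back into UTF-8 characters
print(urllib.parse.unquote(encoded))   # "wd=" followed by the original Chinese characters

# parse_qs() splits a query string into a dict of lists
print(urllib.parse.parse_qs(encoded))  # {'wd': ['...']} with the decoded keyword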

2. Tieba Spider

2.1. Crawling only the first page of a Tieba

import urllib.parse
import urllib.request

url = "http://tieba.baidu.com/f?"
keyword = input("Please input query: ")

kw = urllib.parse.urlencode({"kw": keyword})   # e.g. kw=%E7%BC%96%E7%A8%8B

fullurl = url + kw                             # the base URL already ends with "?"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"}
request = urllib.request.Request(fullurl, headers=headers)
response = urllib.request.urlopen(request)
html = response.read()

print(html)
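
Before parsing anything, it is worth confirming that the request actually succeeded. A minimal sketch of inspecting the response object returned by urlopen and saving the page to disk, continuing from the snippet above (the filename first_page.html is only an illustration):

# continuing from the snippet above
print(response.getcode())   # HTTP status code, e.g. 200
print(response.geturl())    # final URL after any redirects
print(response.info())      # response headers sent by the server

# save the raw bytes so the page can be opened in a browser later
with open("first_page.html", "wb") as f:
    f.write(html)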

2.2. Crawling every page of a Tieba

For a spider over a single Tieba (here, the 编程 Tieba), we can page through the results and spot the pattern in the URLs:

page 1: http://tieba.baidu.com/f?kw=%E7%BC%96%E7%A8%8B&ie=utf-8&pn=0 
page 2: http://tieba.baidu.com/f?kw=%E7%BC%96%E7%A8%8B&ie=utf-8&pn=50
page 3: http://tieba.baidu.com/f?kw=%E7%BC%96%E7%A8%8B&ie=utf-8&pn=100

The pn parameter grows by 50 per page, i.e. pn = (page - 1) * 50. Putting that together:

import urllib.request
import urllib.parse

def loadPage(url, filename):
    """
        Send a request to the given url.
        url: the address to fetch
        filename: name of the file being processed (used only for logging)
    """
    print("Downloading", filename)
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"}
    request = urllib.request.Request(url, headers=headers)
    response = urllib.request.urlopen(request)
    html = response.read()
    return html



def writePage(html, filename):
    """
        Write the html content to a local file.
        html: the response body returned by the server
        filename: the file to write to
    """
    print("Saving", filename)
    with open(filename, "wb") as f:
        f.write(html)
    print("-"*30)


def tiebaSpider(url, beginPage, endPage):
    """
        Spider scheduler: responsible for building the url of each page and dispatching it.
    """
    for page in range(beginPage, endPage + 1):
        pn = (page - 1) * 50                     # pn grows by 50 per page
        filename = "page " + str(page) + ".html"
        fullurl = url + "&pn=" + str(pn)
        html = loadPage(fullurl, filename)
        writePage(html, filename)


if __name__ == "__main__":
    kw = input("Please input query: ")
    beginPage = int(input("Start page: "))
    endPage = int(input("End page: "))

    url = "http://tieba.baidu.com/f?"
    key = urllib.parse.urlencode({"kw":kw})
    fullurl = url + key
    tiebaSpider(fullurl, beginPage, endPage)

The result:

Please input query: 编程吧
Start page: 1
End page: 5
Downloading page 1.html
Saving page 1.html
------------------------------
Downloading page 2.html
Saving page 2.html
------------------------------
Downloading page 3.html
Saving page 3.html
------------------------------
Downloading page 4.html
Saving page 4.html
------------------------------
Downloading page 5.html
Saving page 5.html
------------------------------

3. The Difference Between GET and POST


  • GET: the query parameters are appended to the request URL
  • POST: they are not; they travel in the request body

3.1. GET requests

For a GET request, the query parameters are kept in the query string of the URL.
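
As a minimal sketch (reusing the Tieba URL from section 2, with headers omitted for brevity), the parameters of a GET request are simply part of the URL itself:

import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"kw": "编程", "ie": "utf-8", "pn": 0})
url = "http://tieba.baidu.com/f?" + params      # the query string rides along in the URL
print(url)   # http://tieba.baidu.com/f?kw=%E7%BC%96%E7%A8%8B&ie=utf-8&pn=0

response = urllib.request.urlopen(url)          # no data argument, so urllib sends a GET
print(response.getcode())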

3.2. POST requests

For a POST request, the query parameters are carried in the form data (WebForm), i.e. in the request body.



3.3. Simulating a POST request to Youdao Translate

  1. First, capture the request with a packet-capture tool:
POST http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=null HTTP/1.1
Host: fanyi.youdao.com
Connection: keep-alive
Content-Length: 254
Accept: application/json, text/javascript, */*; q=0.01
Origin: http://fanyi.youdao.com
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Referer: http://fanyi.youdao.com/
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7,en-CA;q=0.6
Cookie: OUTFOX_SEARCH_USER_ID=-1071824454@10.169.0.83; OUTFOX_SEARCH_USER_ID_NCOO=848207426.083082; JSESSIONID=aaaiYkBB5LZ2t6rO6rCGw; ___rl__test__cookies=1546662813170
x-hd-token: rent-your-own-vps
# The line below is the form data; this is the important part
i=love&from=AUTO&to=AUTO&smartresult=dict&client=fanyideskweb&salt=15466628131726&sign=63253c84e50c70b0125b869fd5e2936d&ts=1546662813172&bv=363eb5a1de8cfbadd0cd78bd6bd43bee&doctype=json&version=2.1&keyfrom=fanyi.web&action=FY_BY_REALTIME&typoResult=false
  2. Extract the key form fields:
i=love
doctype=json
version=2.1
keyfrom=fanyi.web
action=FY_BY_REALTIME
typoResult=false  
  3. Simulate the Youdao Translate request in Python:
import urllib.request
import urllib.parse

# Obtained via packet capture; this is not the URL shown in the browser's address bar
url = "http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=null"

# Full set of headers
headers = {
    "Accept" : "application/json, text/javascript, */*; q=0.01",
    "X-Requested-With" : "XMLHttpRequest",
    "User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
    "Content-Type" : "application/x-www-form-urlencoded; charset=UTF-8"
}

# Prompt the user for input
key = input("Please input english: ")

# Simulate the form data that Youdao Translate submits.
# This is the form data POSTed to the server: a POST submits data to the web server and the
# server builds its response from that data, whereas a GET sends no request body.
formdata = {
    "i":key,
    "doctype":"json",
    "version":"2.1",
    "keyfrom":"fanyi.web",
    "action":"FY_BY_REALTIME",
    "typoResult": "false"
}

# URL-encode the form data and convert it to bytes
data = urllib.parse.urlencode(formdata).encode("utf-8")
# Build the request from data and headers: if the data argument is supplied, urllib sends a POST; if it is omitted, a GET
request = urllib.request.Request(url, data=data, headers=headers)
response = urllib.request.urlopen(request)
html = response.read()

print(html)

The result:

Please input english: hello
b'                          {"type":"EN2ZH_CN","errorCode":0,"elapsedTime":1,"translateResult":[[{"src":"hello","tgt":"\xe4\xbd\xa0\xe5\xa5\xbd"}]]}\n'
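
The body is JSON, so it is more useful parsed than printed raw. A minimal sketch of pulling out the translation, assuming the translateResult layout shown in the output above:

import json

# html is the bytes object returned by response.read() in the code above
result = json.loads(html.decode("utf-8"))
print(result["translateResult"][0][0]["tgt"])   # prints 你好 for the input "hello"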

