Assignment 3 - MOOC Study Notes: Python Web Crawling and Information Extraction


1. Register on 中國大學MOOC (China University MOOC).

2. Enroll in Prof. Song Tian's (嵩天, Beijing Institute of Technology) course 《Python網絡爬蟲與信息提取》.

3. Complete the course content from Week 0 through Week 4, along with each week's assignments.

Crawling examples and performance analysis with the Requests library

(1) Crawling a JD (京東) product page

import requests

url = "https://item.jd.com/2967929.html"
try:
    r = requests.get(url)
    r.raise_for_status()              # raise HTTPError if the status is not 200
    r.encoding = r.apparent_encoding  # guess the encoding from the page content
    print(r.text[:1000])
except Exception:
    print("Crawl failed")

(2) Crawling an Amazon product page

import requests

url = "https://www.amazon.cn/gp/product/B01M8L5Z3Y"
try:
    # Amazon rejects the default python-requests User-Agent, so spoof a browser
    kv = {'user-agent': 'Mozilla/5.0'}
    r = requests.get(url, headers=kv)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[:500])
except Exception:
    print("Crawl failed")

(3) Keyword submission interface of search engines (Baidu / 360)

import requests

keyword = "python"
try:
    # 360 search (so.com) takes the keyword in the 'q' parameter;
    # Baidu's interface uses 'wd' instead
    kv = {'q': keyword}
    r = requests.get("http://www.so.com/s", params=kv)
    # r = requests.get("http://www.baidu.com/s", params={'wd': keyword})
    print(r.request.url)   # show the URL that params= actually produced
    r.raise_for_status()
    # r.encoding = r.apparent_encoding
    print(len(r.text))
except Exception:
    print("Crawl failed")
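The params= keyword does the URL encoding for you. This can be checked offline by preparing a request without sending it; the keyword below is just an illustrative value:

```python
import requests

# Build the request object but never send it; .prepare() fills in the
# fully encoded URL, including the percent-encoded query string
req = requests.Request('GET', 'http://www.so.com/s',
                       params={'q': 'python 爬蟲'})
prepared = req.prepare()
print(prepared.url)   # non-ASCII characters are percent-encoded automatically
```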

(4) Crawling and saving a web image

import requests
import os

url = "http://image.nationalgeographic.com.cn/2017/0211/20170211061910157.jpg"
root = "E://pics//"
path = root + url.split('/')[-1]   # name the file after the last URL segment
try:
    if not os.path.exists(root):
        os.mkdir(root)
    if not os.path.exists(path):
        r = requests.get(url)
        with open(path, 'wb') as f:   # the with-block closes the file itself
            f.write(r.content)
        print("File saved")
    else:
        print("File already exists")
except Exception:
    print("Crawl failed")
  

(5) IP address geolocation lookup

import requests

url = "http://m.ip138.com/ip.asp?ip="
try:
    r = requests.get(url + '202.204.80.112')
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[-500:])
except Exception:
    print("Crawl failed")

(6) Time required for 100 successful crawls

import requests
import time

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()  # raise HTTPError if the status is not 200
        r.encoding = r.apparent_encoding
        return r.text
    except Exception:
        return None   # signal failure with None instead of a magic string

def time_count(url):
    time_start = time.time()
    count = 1
    while True:
        a = getHTMLText(url)
        if a is not None:
            print('Crawl #{} succeeded'.format(count))
            count += 1
            if count == 101:   # stop after 100 successful fetches
                break
    time_end = time.time()
    print('Time for 100 successful crawls:', time_end - time_start, 's')

if __name__ == '__main__':
    url = 'https://www.baidu.com'
    time_count(url)

 

 

Focused crawler for Chinese university rankings (optimized)

import requests
from bs4 import BeautifulSoup
import bs4

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except Exception:
        return ""

def fillUnivList(ulist, html):
    soup = BeautifulSoup(html, "html.parser")
    for tr in soup.find('tbody').children:
        if isinstance(tr, bs4.element.Tag):   # skip NavigableString children
            tds = tr('td')
            # columns: rank, university name, (province), total score
            ulist.append([tds[0].string, tds[1].string, tds[3].string])

def printUnivList(ulist, num):
    # {3} is chr(12288), the fullwidth space, so Chinese names align
    tplt = "{0:^10}\t{1:{3}^10}\t{2:^10}"
    print(tplt.format("排名", "學校名稱", "總分", chr(12288)))  # rank / name / score
    for i in range(num):
        u = ulist[i]
        print(tplt.format(u[0], u[1], u[2], chr(12288)))

def main():
    uinfo = []
    url = 'https://www.zuihaodaxue.cn/zuihaodaxuepaiming2016.html'
    html = getHTMLText(url)
    fillUnivList(uinfo, html)
    printUnivList(uinfo, 20)   # top 20 universities

main()
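The chr(12288) trick above deserves a note: padding Chinese text with the ASCII space misaligns columns because CJK glyphs are displayed two cells wide, while the fullwidth space U+3000 matches them. A quick offline check:

```python
# chr(12288) is U+3000, the fullwidth (CJK) space
name = "清華大學"
padded = "{0:{1}^10}".format(name, chr(12288))
print(padded)        # the 4-character name centered in a 10-character field
print(len(padded))   # 10: three fullwidth spaces on each side
```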

 

Focused crawler for comparing Taobao product prices:

import requests
import re

def getHTMLText(url):
    try:
        # Note: Taobao search now requires a logged-in cookie; without one
        # the returned page may not contain the price/title fields
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except Exception:
        return ""

def parsePage(ilt, html):
    try:
        # the page embeds JSON-like pairs such as "view_price":"149.00"
        plt = re.findall(r'\"view_price\"\:\"[\d\.]*\"', html)
        tlt = re.findall(r'\"raw_title\"\:\".*?\"', html)
        for i in range(len(plt)):
            # split only on the first ':' (titles may contain colons), and
            # strip the quotes instead of eval-ing scraped text
            price = plt[i].split(':', 1)[1].strip('"')
            title = tlt[i].split(':', 1)[1].strip('"')
            ilt.append([price, title])
    except Exception:
        pass

def printGoodsList(ilt):
    tplt = "{:4}\t{:8}\t{:16}"
    print(tplt.format("序號", "價格", "商品名稱"))  # no. / price / product title
    count = 0
    for g in ilt:
        count = count + 1
        print(tplt.format(count, g[0], g[1]))

def main():
    goods = '書包'   # search keyword: "backpack"
    depth = 3        # number of result pages to crawl
    start_url = 'https://s.taobao.com/search?q=' + goods
    infoList = []
    for i in range(depth):
        try:
            url = start_url + '&s=' + str(44 * i)   # 44 items per page
            html = getHTMLText(url)
            parsePage(infoList, html)
        except Exception:
            continue
    printGoodsList(infoList)

main()
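The two regular expressions in parsePage can be checked offline against a hand-made snippet in the same "key":"value" style the course describes (the snippet below is fabricated for illustration, not real Taobao output):

```python
import re

# Fabricated sample in the "view_price"/"raw_title" style the parser expects
html = ('{"raw_title":"學生書包 大容量","view_price":"149.00"},'
        '{"raw_title":"雙肩背包","view_price":"89.50"}')

plt = re.findall(r'\"view_price\"\:\"[\d\.]*\"', html)
tlt = re.findall(r'\"raw_title\"\:\".*?\"', html)
# split on the first ':' only, then strip the surrounding quotes
prices = [p.split(':', 1)[1].strip('"') for p in plt]
titles = [t.split(':', 1)[1].strip('"') for t in tlt]
print(prices)   # ['149.00', '89.50']
print(titles)
```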

Focused crawler for stock data (optimized):

import requests
from bs4 import BeautifulSoup
import traceback
import re

def getHTMLText(url, code="utf-8"):
    try:
        r = requests.get(url)
        r.raise_for_status()
        # pass the known encoding in directly instead of calling
        # apparent_encoding on every page, which is slow
        r.encoding = code
        return r.text
    except Exception:
        return ""

def getStockList(lst, stockURL):
    html = getHTMLText(stockURL, "GB2312")   # the list page is GB2312-encoded
    soup = BeautifulSoup(html, 'html.parser')
    a = soup.find_all('a')
    for i in a:
        try:
            href = i.attrs['href']
            # stock codes look like sh600000 / sz000001
            lst.append(re.findall(r"[s][hz]\d{6}", href)[0])
        except (KeyError, IndexError):
            continue

def getStockInfo(lst, stockURL, fpath):
    count = 0
    for stock in lst:
        url = stockURL + stock + ".html"
        html = getHTMLText(url)
        try:
            if html == "":
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockInfo = soup.find('div', attrs={'class': 'stock-bets'})

            name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
            infoDict.update({'股票名稱': name.text.split()[0]})

            keyList = stockInfo.find_all('dt')
            valueList = stockInfo.find_all('dd')
            for i in range(len(keyList)):
                key = keyList[i].text
                val = valueList[i].text
                infoDict[key] = val

            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
                count = count + 1
                print("\rProgress: {:.2f}%".format(count * 100 / len(lst)), end="")
        except Exception:
            count = count + 1
            print("\rProgress: {:.2f}%".format(count * 100 / len(lst)), end="")
            # traceback.print_exc()   # uncomment to see why a page failed
            continue

def main():
    stock_list_url = 'https://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'D:/BaiduStockInfo.txt'
    slist = []
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)

main()

Stock data crawler with Scrapy

# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = "stocks"
    start_urls = ['https://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        # harvest stock codes (sh/sz + 6 digits) from every link on the list page
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except IndexError:
                continue

    def parse_stock(self, response):
        infoDict = {}
        stockInfo = response.css('.stock-bets')
        name = stockInfo.css('.bets-name').extract()[0]
        keyList = stockInfo.css('dt').extract()
        valueList = stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            # strip the surrounding <dt>...</dt> markup to get the field name
            key = re.findall(r'>.*</dt>', keyList[i])[0][1:-5]
            try:
                val = re.findall(r'\d+\.?.*</dd>', valueList[i])[0][0:-5]
            except IndexError:
                val = '--'
            infoDict[key] = val

        infoDict.update(
            {'股票名稱': re.findall(r'\s.*\(', name)[0].split()[0] +
             re.findall(r'\>.*\<', name)[0][1:-1]})
        yield infoDict

  

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    def process_item(self, item, spider):
        return item


class BaidustocksInfoPipeline(object):
    def open_spider(self, spider):
        # opened once when the spider starts; utf-8 so Chinese keys write cleanly
        self.f = open('BaiduStockInfo.txt', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except Exception:
            pass
        return item

Modified section of settings.py:

# Configure item pipelines
# See https://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'BaiduStocks.pipelines.BaidustocksInfoPipeline': 300,
}

  

4. Provide screenshots or site progress pages as evidence of the learning process.

 

5. Write study notes of at least 1,000 words on the experience and what was gained.

When I started, my knowledge of crawlers was only rudimentary. This introductory course is, I think, very well suited to Python beginners: once you have reviewed basic Python syntax, you can begin right away. After several weeks of Prof. Song's lectures and writing the crawler examples, I was struck by the appeal of the Python language; its efficiency and convenience deepened my interest in studying web crawling further.

At the beginning there is environment configuration, installing the various third-party modules, and so on. Some material made sense when I read it, yet writing the code myself was still difficult. So personally I suggest not grinding through topics systematically; instead, take the examples from Prof. Song's lectures, generalize from them, and find other examples to practice on, which makes the material much easier to absorb. Crawling as a skill requires neither systematic mastery of a language nor advanced database technology, and picking up scattered pieces of Python through hands-on practice tends to ensure that each thing learned is exactly what is needed.

In this course I paid special attention to one anti-blocking strategy: modifying the User-Agent, the most common way to masquerade as a browser. The User-Agent is a string in the HTTP request headers that carries browser and operating-system information; the server uses it to decide whether the visitor is a browser, a mail client, or a web crawler. It can be inspected in request.headers (how to analyze packets and view the User-Agent was covered earlier). Concretely, you can set the User-Agent to a browser's value, or even maintain a User-Agent pool (a list, array, or dict will do) holding several "browsers" and pick one at random for each request; the User-Agent then keeps changing, making the crawler harder to block.

A problem came up when crawling the Chinese university rankings: requests and BeautifulSoup alone cannot obtain some pages' information, and one must also check whether the site's robots protocol permits crawling. The crawl divides into three steps: fetch the ranking page from the network, in getHTMLText(); extract the information into a suitable data structure, in fillUnivList(); and present and output the result, in printUnivList(). With these three functions the program is organized into three modules and reads much better.
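The User-Agent pool idea can be sketched as follows; the strings below are illustrative examples, not a vetted browser list:

```python
import random

# Illustrative pool of browser User-Agent strings (examples only)
ua_pool = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
    'Mozilla/5.0 (X11; Linux x86_64)',
]

def random_headers():
    # pick a different "browser" for every request
    return {'user-agent': random.choice(ua_pool)}

print(random_headers()['user-agent'])
# usage: r = requests.get(url, headers=random_headers())
```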

When parsing markup with bs4, nodes do not all carry the same attributes, so whenever one method is used uniformly to access node attributes it must be wrapped in try, to keep the program from being interrupted unexpectedly. In Python generally, pay attention to function return values, especially their types; wrap page fetches in try, and preferably dynamic data access as well. For analyzing URL requests there are three points: carefully analyze the page structure and note which actions are answered by JavaScript; use the browser's tools to find the request URL issued by a JS click action; then feed that asynchronous request's URL back to scrapy for another crawl.
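The "wrap attribute access in try" pattern can be shown schematically; plain dicts stand in here for bs4 tags, since not every tag carries every attribute:

```python
# Simulated tags: the second one has no 'href', so naive indexing would raise
tags = [
    {'href': '/page1', 'class': 'nav'},
    {'class': 'banner'},
    {'href': '/page2'},
]

links = []
for tag in tags:
    try:
        links.append(tag['href'])   # same shape as tag.attrs['href'] in bs4
    except KeyError:
        continue                    # skip tags without the attribute

print(links)   # ['/page1', '/page2']
```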

The final week of the course covered the Scrapy framework: a fast, high-level screen-scraping and web-crawling framework written in Python, used to crawl web sites and extract structured data from their pages. Scrapy has broad uses, including data mining, monitoring, and automated testing. Its appeal is that it is a framework anyone can adapt to their own needs, and it provides base classes for several kinds of spiders.

