Writing a crawler in Python — scraping photos of pretty girls


Writing code while holding back a nosebleed

Today we'll write a simple web crawler that scrapes all of the photos from a site called Meizitu.

As a first try I crawled three pages, which came to roughly seven hundred images. If you're interested, feel free to crawl along; veteran programmers, go easy — this is a simple crawler.

Enough talk — straight to the code.


Site URL: http://www.meizitu.com/a/more_1.html
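The listing pages follow a simple numbered pattern (more_1.html, more_2.html, …), so a range of page numbers maps directly onto URLs. A minimal sketch, using the address above:

```python
# Build the listing-page URLs for pages 1 through 3.
# The pattern comes from the site address shown above.
base = "http://www.meizitu.com/a/more_%d.html"
page_urls = [base % n for n in range(1, 4)]
print(page_urls)
```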


from bs4 import BeautifulSoup
import os
import random
import re
import requests

# Spoof a browser User-Agent and send a Referer so the image host
# does not reject our requests.
headers = {
    'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:58.0) Gecko/20100101",
    'Referer': "http://i.meizitu.net"
}

def home_page(num, num2, headers):
    """Collect the gallery links from listing pages num through num2."""
    list_url = []
    for page in range(num, num2 + 1):
        url = "http://www.meizitu.com/a/more_%d.html" % page
        req = requests.get(url, headers=headers)
        req.encoding = req.apparent_encoding
        bf = BeautifulSoup(req.text, 'lxml')
        # Each gallery thumbnail sits inside an element with class="pic";
        # the link to the gallery's detail page is its first <a>.
        for each in bf.find_all(class_="pic"):
            list_url.append(each.a.get('href'))
    return list_url


def deal_page(headers, list_url):
    """Fetch each gallery page and collect its <div id="picture"> blocks."""
    list_url2 = []
    for targets_url2 in list_url:
        req = requests.get(targets_url2, headers=headers)
        req.encoding = "utf-8"
        bf2 = BeautifulSoup(req.text, 'lxml')
        # All of the photos on a gallery page live inside id="picture"
        list_url2.append(bf2.find_all(id="picture"))
    return list_url2

def download(headers, list_url2):
    """Pull every .jpg URL out of the collected markup and save the files."""
    urls = re.findall(r'http.*?jpg', str(list_url2))
    print("Found %d images" % len(urls))
    root = "/Users/apple/Desktop/meizitu/"
    os.makedirs(root, exist_ok=True)  # make sure the target folder exists
    for endurl in urls:
        # Flatten the last three path segments into one file name
        filename = endurl.split('/')[-3] + endurl.split('/')[-2] + endurl.split('/')[-1]
        req3 = requests.get(endurl, headers=headers)
        # A random prefix keeps files from different galleries from colliding
        path = root + str(random.randrange(10000)) + filename
        if not os.path.exists(path):
            with open(path, 'wb') as f:
                f.write(req3.content)
            print("Downloaded", filename)

if __name__ == '__main__':
    num = int(input("Enter the first page to crawl: "))
    num2 = int(input("Enter the last page: "))
    a = home_page(num, num2, headers)
    b = deal_page(headers, a)
    download(headers, b)
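The regex-and-split steps inside download() can be sanity-checked offline. The snippet below uses a made-up stand-in for what str(list_url2) looks like after deal_page — the image URL is hypothetical, not from the real site:

```python
import re

# Made-up stand-in for str(list_url2): one id="picture" block, one image.
sample = '<div id="picture"><img src="http://mm.example.net/2018/01/01a01.jpg"></div>'

# Same pattern download() uses: a non-greedy match from "http" to "jpg".
urls = re.findall(r'http.*?jpg', sample)

# Same file-name rule: the last three path segments, concatenated.
endurl = urls[0]
filename = endurl.split('/')[-3] + endurl.split('/')[-2] + endurl.split('/')[-1]
print(urls, filename)
```

Because the match is non-greedy, each hit stops at the first "jpg", so one URL per image comes out even when several images share a string.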
