Python 3 crawler code for scraping job listings from Zhaopin (zhaopin.com)


The code is below; if you spot any problems, feel free to point them out in the comments.

# -*- coding: utf-8 -*-
"""
Created on Tue Aug  7 20:41:09 2018
@author: brave-man
blog: http://www.cnblogs.com/zrmw/
"""

import requests
import json

def getDetails(url):
    # Spoof a desktop browser User-Agent so the API does not reject the request
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0'}
    res = requests.get(url, headers=headers)
    res.encoding = 'utf-8'
    # The endpoint returns JSON, so parse it directly instead of
    # round-tripping the text through BeautifulSoup
    data = json.loads(res.text)

    # Create (or truncate) the output file before appending records to it
    try:
        with open('jobDetails.txt', 'w', encoding='utf-8'):
            print('Created file {} successfully'.format('jobDetails.txt'))
    except OSError:
        print('Failed to create jobDetails.txt')

    for i in data['data']['results']:
        details = {'jobName': i['jobName'],
                   'salary': i['salary'],
                   'company': i['company']['name'],
                   'companyUrl': i['company']['url'],
                   'positionURL': i['positionURL']
                   }
        toFile(details)

def toFile(d):
    # Serialize one record per line (keep non-ASCII characters readable)
    dj = json.dumps(d, ensure_ascii=False)
    try:
        with open('jobDetails.txt', 'a', encoding='utf-8') as f:
            f.write(dj + '\n')
    except OSError:
        print('Error writing to jobDetails.txt')

def main():
    url = 'https://fe-api.zhaopin.com/c/i/sou?pageSize=60&cityId=635&workExperience=-1&education=-1&companyType=-1&employmentType=-1&jobWelfareTag=-1&kw=python&kt=3&lastUrlQuery={"jl":"635","kw":"python","kt":"3"}'
    getDetails(url)

if __name__ == "__main__":
    main()

After running the code above, a text file holding the job information, jobDetails.txt, is created in the same directory as the script.
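To use the saved data later, the file can be read back record by record. This is a minimal sketch, assuming each record sits on its own line as a JSON object (JSON Lines style); `load_jobs` is a hypothetical helper, not part of the original script.

```python
import json

def load_jobs(path='jobDetails.txt'):
    """Read saved job records back, assuming one JSON object per line."""
    jobs = []
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                jobs.append(json.loads(line))
    return jobs
```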

This code only fetches one page of listings; a follow-up post will add code for building the URLs and fetching every page.
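As a preview, paging could look something like the sketch below. Note this is an assumption about the API: the parameter name `start` and the offset-by-`pageSize` scheme are hypothetical and would need to be confirmed against the actual fe-api behavior.

```python
def page_urls(base_url, pages, page_size=60):
    """Yield one URL per result page by appending a start offset.

    Assumes (unverified) that the API accepts a `start` query parameter
    that advances in steps of pageSize (0, 60, 120, ...).
    """
    for page in range(pages):
        yield '{}&start={}'.format(base_url, page * page_size)
```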

The Zhaopin site does have one small pitfall: not every job detail page uses Zhaopin's own page format. Clicking some listings redirects you to the recruiting company's own careers site. Those cases will be handled specifically when they come up.
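One simple way to separate the two cases before scraping a detail page is to check the URL's host. This is a sketch using the standard library's `urlparse`; the `is_zhaopin_page` helper is an illustration, not part of the original code.

```python
from urllib.parse import urlparse

def is_zhaopin_page(url):
    """Return True if the detail URL stays on a zhaopin.com host,
    i.e. it should follow Zhaopin's own page format rather than
    redirecting to a company's careers site."""
    host = urlparse(url).netloc
    return host.endswith('zhaopin.com')
```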

