I. Background of the Topic
Why was this topic chosen? What are the expected goals of the data analysis? (10 points)
Among the many job postings online, big-data positions are spread across cities all over the country, and the ties between positions and employers are intricate. Company types vary widely; each company has its own culture and places its own requirements on applicants, and applicants with different levels of experience earn different salaries. To find a suitable position, an applicant has to weigh the basic requirements stated in the posting, such as experience and education, and will also look into the nature and type of the company. Below, we analyze the companies that publish these postings.
II. Design of the Topic-Focused Web Crawler (10 points)
1. Name of the crawler
Python: data analysis of the Zhaopin (智联招聘) recruitment website
2. Content to be crawled and characteristics of the data
- Crawl the job-listing pages of Zhaopin and select the relevant fields
- Import the data from the database
- Clean the data: handle missing values and perform other preprocessing
- Analyze and visualize the data: 3.1 average salary; 3.2 salary vs. work experience; 3.3 salary vs. education; 3.4 text analysis of job descriptions
3. Overview of the crawler design (implementation approach and technical difficulties)
The fields to be crawled are: (1) position title, (2) salary, (3) city, (4) work experience, (5) education requirement, (6) number of openings, (7) position highlights, (8) position description, (9) company address, (10) company name, (11) company industry, (12) company size, (13) brief company description. The request flow that drives the crawl is sketched below.
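A minimal sketch of that flow, assuming only what appears in the full source in section IV: the crawler queries Zhaopin's search endpoint with three parameters, jl (city code), kw (search keyword) and p (page number). The logged-in cookie headers used in the real code are elided here, so the live site may answer with a login or anti-bot page instead of results.

import requests

# Hedged sketch of a single crawl request; see section IV for the full headers.
# jl: city code (538 = Shanghai), kw: search keyword, p: result-page number.
params = {'jl': '538', 'kw': '数据分析师', 'p': '1'}
resp = requests.get('https://sou.zhaopin.com/', params=params)
print(resp.status_code, len(resp.text))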
III. Structural Analysis of the Target Pages (10 points)
1. Structure and characteristics of the target pages
The home page and its layout structure
2. HTML page parsing
3. Node (tag) lookup and traversal methods
(draw the node tree structure if necessary)
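The lookup method actually used in section IV is BeautifulSoup's find_all() with a tag name and a class filter. Below is a small sketch of lookup and traversal on a toy fragment; the HTML imitates the structure of one listing item and is illustrative, not copied from the live page.

from bs4 import BeautifulSoup

# Toy fragment imitating a Zhaopin listing item (illustrative only):
html = '''
<a class="joblist-box__iteminfo iteminfo">
  <span class="iteminfo__line1__jobname__name" title="数据分析师">数据分析师</span>
  <p class="iteminfo__line2__jobdesc__salary">8千-1.5万</p>
</a>
'''
soup = BeautifulSoup(html, 'html.parser')

# Lookup: find_all() collects every node matching the tag name and attributes.
for tag in soup.find_all(name='a', attrs={'class': 'joblist-box__iteminfo iteminfo'}):
    # Traversal: walk every descendant tag of the matched node.
    for child in tag.find_all(True):
        print(child.name, child.get_text(strip=True))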
IV. Web Crawler Program Design (60 points)
The main body of the crawler program must include each of the following parts, with source code and reasonably detailed comments attached, and a screenshot of the output provided after each part.
- Data crawling and collection
#!/usr/bin/python3
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 14 17:47:47 2020 : 2021/3/30 1:13 AM
@Author : liudong
@Software: PyCharm
"""

import requests
import re
from copyheaders import headers_raw_to_dict
from bs4 import BeautifulSoup
import pandas as pd


# Fetch the HTML of a page for the given url and query parameters:
def get_html(url, params):
    my_headers = b'''
    accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
    accept-language: zh-CN,zh;q=0.9
    cache-control: max-age=0
    cookie: x-zp-client-id=448f2b96-6b3a-48e3-e912-e6c8dd73e6cb; adfbid=0; adfbid2=0; Hm_lvt_38ba284938d5eddca645bb5e02a02006=1617108464; sajssdk_2015_cross_new_user=1; sts_deviceid=178832cf3f2680-0b20242883a4a9-6618207c-1296000-178832cf3f3780; sts_sg=1; sts_chnlsid=Unknown; zp_src_url=https%3A%2F%2Fwww.google.com.hk%2F; FSSBBIl1UgzbN7N443S=kc8_mcJe5xsW.UilCMHXpkoWeyQ8te3q7QhYV8Y8aA0Se9k9JJXcnQVvrOJ9NYDP; locationInfo_search={%22code%22:%22538%22%2C%22name%22:%22%E4%B8%8A%E6%B5%B7%22%2C%22message%22:%22%E5%8C%B9%E9%85%8D%E5%88%B0%E5%B8%82%E7%BA%A7%E7%BC%96%E7%A0%81%22}; zp_passport_deepknow_sessionId=a2ea7206sade7641768f38078ea6b45afef0; at=02a0ea392e1d4fd6a4d6003ac136aae0; rt=82f98e13344843d6b5bf3dadf38e8bb2; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%221071739258%22%2C%22first_id%22%3A%22178832cf3bd20f-0be4af1633ae3d-6618207c-1296000-178832cf3be4b8%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%7D%2C%22%24device_id%22%3A%22178832cf3bd20f-0be4af1633ae3d-6618207c-1296000-178832cf3be4b8%22%7D; urlfrom=121126445; urlfrom2=121126445; adfcid=none; adfcid2=none; ZL_REPORT_GLOBAL={%22//www%22:{%22seid%22:%2202a0ea392e1d4fd6a4d6003ac136aae0%22%2C%22actionid%22:%2243ffc74e-c32e-42ee-ba04-1e24611fecde-cityPage%22}}; LastCity=%E4%B8%8A%E6%B5%B7; LastCity%5Fid=538; Hm_lpvt_38ba284938d5eddca645bb5e02a02006=1617111259; zpfe_probe_token=ae612f12s0feb44ac697a7434fe1f22af086; d4d6cd0b4a19fa72b8cc377185129bb7=ab637759-b57a-4214-a915-8dcbc5630065; selectCity_search=538; FSSBBIl1UgzbN7N443T=5pRoIYmxrZTzxVozDFEYjcClKKRpXbK9zf0gYH4zU5AyLqGUMT5fnVzyE0SMv7ZDGFLY0HV8o6iXLPBGBBTJhDhz3TIaQ3omm324Q2m4BSJzD0VgZzesPGIXudf636xQZkuag1QJmdqzgFLv6YPcKq.ukZPymp1IazfsOec5vBcMT9yemSrYb9UBk2XF.rZIeM3mIOBqpNii26kDRzjxHP5TsGLJzWaaZvklHnh61NT4acHPQt3Lq1.w2X4htg9ck.uGhzHt9w954igFEqhLCmggLi9OjPUaiU8TA4yn1oR1T5Qmjm1I5AA0PIu76e0T2u6w2f7thMkv6E7lkoDggrRMta0Z_uVEP3Y1sS8hJw7ycE2PTVtVassRyoN6UuTBHtSZ
    sec-ch-ua: "Google Chrome";v="89", "Chromium";v="89", ";Not A Brand";v="99"
    sec-ch-ua-mobile: ?0
    sec-fetch-dest: document
    sec-fetch-mode: navigate
    sec-fetch-site: same-origin
    sec-fetch-user: ?1
    upgrade-insecure-requests: 1
    user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36
    '''
    my_headers = headers_raw_to_dict(my_headers)  # convert the raw request headers copied from the browser into a dict
    req = requests.get(url, headers=my_headers, params=params)
    req.encoding = req.apparent_encoding
    html = req.text
    return html


# Given the url and a city code, return a list of strings, one HTML tag per job posting:
def get_html_list(url, city_num):
    html_list = list()
    for i in range(1, 12):  # result pages 1 through 11
        params = {'jl': str(city_num), 'kw': '数据分析师', 'p': str(i)}  # jl: city code, kw: search keyword, p: page number
        html = get_html(url, params)
        soup = BeautifulSoup(html, 'html.parser')
        html_list += soup.find_all(name='a', attrs={'class': 'joblist-box__iteminfo iteminfo'})
    for i in range(len(html_list)):
        html_list[i] = str(html_list[i])
    return html_list


# Extract the useful fields from each posting's HTML tag and collect rows for a CSV file:
def get_csv(html_list):

    # Note: chained assignment (city = position = ... = list()) would be wrong here,
    # because every name would point at the same list object, so appending to one
    # would appear to change all of them.
    city, position, company_name, company_size, company_type, salary, education, ability, experience = ([] for _ in range(9))  # bind each name to its own fresh list

    # The three "demand" <li> items hold city, experience and education, in that order:
    demand_re = (r'<li class="iteminfo__line2__jobdesc__demand__item">(.*?)</li> '
                 r'<li class="iteminfo__line2__jobdesc__demand__item">(.*?)</li> '
                 r'<li class="iteminfo__line2__jobdesc__demand__item">(.*?)</li>')
    # The two "compdesc" <span> items hold company type and company size:
    compdesc_re = (r'<span class="iteminfo__line2__compdesc__item">(.*?) </span> '
                   r'<span class="iteminfo__line2__compdesc__item">(.*?) </span>')

    for item in html_list:

        m = re.search(demand_re, item)
        if m:
            city.append(m.group(1))
            experience.append(m.group(2))
            education.append(m.group(3))
        else:
            city.append(' ')
            experience.append(' ')
            education.append(' ')

        m = re.search(r'<span class="iteminfo__line1__jobname__name" title="(.*?)">', item)
        position.append(m.group(1) if m else ' ')

        m = re.search(r'<span class="iteminfo__line1__compname__name" title="(.*?)">', item)
        company_name.append(m.group(1) if m else ' ')

        m = re.search(compdesc_re, item)
        if m:
            company_type.append(m.group(1))
            company_size.append(m.group(2))
        else:
            company_type.append(' ')
            company_size.append(' ')

        m = re.search(r'<p class="iteminfo__line2__jobdesc__salary">([\s\S]*?)<', item)
        salary.append(m.group(1).strip() if m else ' ')

        s = str()
        for w in re.findall(r'<div class="iteminfo__line3__welfare__item">(.*?)</div>', item):
            s = s + w + ' '  # concatenate the welfare/skill keywords, space separated
        ability.append(s)

    table = list(zip(city, position, company_name, company_size, company_type, salary, education, ability, experience))
    return table


if __name__ == '__main__':

    url = 'https://sou.zhaopin.com/'
    citys = {'上海': 538, '北京': 530, '广州': 763, '深圳': 765, '天津': 531, '武汉': 736,
             '西安': 854, '成都': 801, '南京': 635, '杭州': 653, '重庆': 551, '厦门': 682}
    for name, code in citys.items():
        html_list = get_html_list(url, code)
        table = get_csv(html_list)
        df = pd.DataFrame(table, columns=['city', 'position', 'company_name', 'company_size', 'company_type',
                                          'salary', 'education', 'ability', 'experience'])
        df.to_csv(name + '.csv')  # one CSV file per city
- Data cleaning and processing
#!/usr/bin/python3
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 14 17:47:47 2020 : 2021/4/2 1:30 AM
@Author : liudong
@Software: PyCharm
"""


import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.rcParams['font.sans-serif'] = ['Heiti TC']  # default font that can render Chinese, so plot labels display correctly
plt.rcParams['axes.unicode_minus'] = False      # keep the minus sign '-' from rendering as a box in saved figures
import re
import os
import seaborn as sns
from wordcloud import WordCloud


citys = ['上海', '北京', '广州', '深圳', '天津', '武汉', '西安', '成都', '南京', '杭州', '重庆', '厦门']


# Data cleaning: turn salary ranges such as '8千-1.5万' into a numeric midpoint in yuan
def data_clear():

    for city in citys:

        file_name = './' + city + '.csv'
        df = pd.read_csv(file_name, index_col=0)

        for i in range(0, df.shape[0]):

            s = df.loc[[i], ['salary']].values.tolist()[0][0]
            m = re.search('(.*)-(.*)', s)
            if m:
                a = m.group(1)              # lower bound, e.g. '8千'
                if a[-1] == '千':
                    a = float(a[0:-1]) * 1000
                elif a[-1] == '万':
                    a = float(a[0:-1]) * 10000
                b = m.group(2)              # upper bound, e.g. '1.5万'
                if b[-1] == '千':
                    b = float(b[0:-1]) * 1000
                elif b[-1] == '万':
                    b = float(b[0:-1]) * 10000
                df.loc[[i], ['salary']] = (a + b) / 2   # replace the range with its midpoint
            else:
                df.loc[[i], ['salary']] = ''            # not a range: leave the cell empty

        os.remove(file_name)
        df.to_csv(file_name)                            # overwrite the file with the cleaned data
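To make the cleaning rule concrete: a posting like '8千-1.5万' is parsed into a lower bound of 8,000 and an upper bound of 15,000 yuan and replaced by the midpoint 11,500. A small self-contained sketch of the same rule (the helper name parse_salary is mine, not from the original code):

import re

def parse_salary(s):
    # Convert a range such as '8千-1.5万' to its numeric midpoint in yuan;
    # return None when the string does not look like a salary range.
    m = re.search(r'(.*)-(.*)', s)
    if not m:
        return None
    bounds = []
    for part in m.groups():
        if part.endswith('千'):
            bounds.append(float(part[:-1]) * 1000)
        elif part.endswith('万'):
            bounds.append(float(part[:-1]) * 10000)
        else:
            return None
    return sum(bounds) / 2

print(parse_salary('8千-1.5万'))   # 11500.0
print(parse_salary('1万-2万'))     # 15000.0
print(parse_salary('面议'))        # None (no range to parse)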
- Data analysis and visualization (e.g., bar charts, histograms, scatter plots, box plots, distribution plots)
# Bar chart of the number of data-analysis positions in each city:
def citys_jobs():

    job_num = list()
    for i in citys:
        file_name = './' + i + '.csv'
        df = pd.read_csv(file_name, index_col=0)
        job_num.append(df.shape[0])            # one row per posting, so row count = job count
    df = pd.DataFrame(list(zip(citys, job_num)))
    df = df.sort_values(1, ascending=False)    # sort cities by job count, descending
    x = list(df[0])
    y = list(df[1])

    fig = plt.figure(dpi=200)
    ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
    ax.bar(x, y, alpha=0.8)
    ax.set_title('数据分析职位在全国主要城市的数量分布')
    ax.set_ylim(0, 350)

    plt.savefig('./数据分析职位在全国主要城市的数量分布.jpg')
    plt.show()
- Based on relationships in the data, compute the correlation coefficient between two variables, draw a scatter plot, and fit a regression equation (univariate or multivariate) between them; one possible way to do this is sketched below.
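The report itself contains no code for this part, so the following is only a hedged sketch of how the correlation and a one-variable regression between experience and salary could be computed from the cleaned CSV files. The mapping exp_years from experience labels to numeric years is an assumption for illustration; the labels actually scraped into the 'experience' column may differ, and the mapping should be adjusted to match.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

citys = ['上海', '北京', '广州', '深圳', '天津', '武汉', '西安', '成都', '南京', '杭州', '重庆', '厦门']

# Hypothetical mapping from experience labels to years (an assumption, not from the source):
exp_years = {'1年以下': 0.5, '1-3年': 2, '3-5年': 4, '5-10年': 7, '10年以上': 12}

df = pd.concat(pd.read_csv('./' + c + '.csv', index_col=0) for c in citys)
df['exp'] = df['experience'].map(exp_years)                      # unmapped labels become NaN
df['salary_num'] = pd.to_numeric(df['salary'], errors='coerce')  # blanks from cleaning become NaN
df = df.dropna(subset=['exp', 'salary_num'])

x = df['exp'].astype(float)
y = df['salary_num'].astype(float)
r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient
k, b = np.polyfit(x, y, 1)    # least-squares line: salary = k * years + b
print('r = %.2f, salary = %.0f * years + %.0f' % (r, k, b))

# Scatter plot with the fitted regression line:
plt.figure(dpi=200)
plt.scatter(x, y, alpha=0.3)
xs = np.linspace(x.min(), x.max(), 2)
plt.plot(xs, k * xs + b, color='r')
plt.xlabel('工作经验(年)')
plt.ylabel('薪资(元)')
plt.savefig('./经验与薪资回归.jpg')
plt.show()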
- Consolidate the code from all the parts above and attach the complete program listing
#!/usr/bin/python3
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 14 17:47:47 2020 : 2021/3/30 1:13 AM
@Author : liudong
@Software: PyCharm
"""

import requests
import re
from copyheaders import headers_raw_to_dict
from bs4 import BeautifulSoup
import pandas as pd


# Fetch the HTML of a page for the given url and query parameters:
def get_html(url, params):
    my_headers = b'''
    accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
    accept-language: zh-CN,zh;q=0.9
    cache-control: max-age=0
    cookie: x-zp-client-id=448f2b96-6b3a-48e3-e912-e6c8dd73e6cb; adfbid=0; adfbid2=0; Hm_lvt_38ba284938d5eddca645bb5e02a02006=1617108464; sajssdk_2015_cross_new_user=1; sts_deviceid=178832cf3f2680-0b20242883a4a9-6618207c-1296000-178832cf3f3780; sts_sg=1; sts_chnlsid=Unknown; zp_src_url=https%3A%2F%2Fwww.google.com.hk%2F; FSSBBIl1UgzbN7N443S=kc8_mcJe5xsW.UilCMHXpkoWeyQ8te3q7QhYV8Y8aA0Se9k9JJXcnQVvrOJ9NYDP; locationInfo_search={%22code%22:%22538%22%2C%22name%22:%22%E4%B8%8A%E6%B5%B7%22%2C%22message%22:%22%E5%8C%B9%E9%85%8D%E5%88%B0%E5%B8%82%E7%BA%A7%E7%BC%96%E7%A0%81%22}; zp_passport_deepknow_sessionId=a2ea7206sade7641768f38078ea6b45afef0; at=02a0ea392e1d4fd6a4d6003ac136aae0; rt=82f98e13344843d6b5bf3dadf38e8bb2; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%221071739258%22%2C%22first_id%22%3A%22178832cf3bd20f-0be4af1633ae3d-6618207c-1296000-178832cf3be4b8%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%7D%2C%22%24device_id%22%3A%22178832cf3bd20f-0be4af1633ae3d-6618207c-1296000-178832cf3be4b8%22%7D; urlfrom=121126445; urlfrom2=121126445; adfcid=none; adfcid2=none; ZL_REPORT_GLOBAL={%22//www%22:{%22seid%22:%2202a0ea392e1d4fd6a4d6003ac136aae0%22%2C%22actionid%22:%2243ffc74e-c32e-42ee-ba04-1e24611fecde-cityPage%22}}; LastCity=%E4%B8%8A%E6%B5%B7; LastCity%5Fid=538; Hm_lpvt_38ba284938d5eddca645bb5e02a02006=1617111259; zpfe_probe_token=ae612f12s0feb44ac697a7434fe1f22af086; d4d6cd0b4a19fa72b8cc377185129bb7=ab637759-b57a-4214-a915-8dcbc5630065; selectCity_search=538; FSSBBIl1UgzbN7N443T=5pRoIYmxrZTzxVozDFEYjcClKKRpXbK9zf0gYH4zU5AyLqGUMT5fnVzyE0SMv7ZDGFLY0HV8o6iXLPBGBBTJhDhz3TIaQ3omm324Q2m4BSJzD0VgZzesPGIXudf636xQZkuag1QJmdqzgFLv6YPcKq.ukZPymp1IazfsOec5vBcMT9yemSrYb9UBk2XF.rZIeM3mIOBqpNii26kDRzjxHP5TsGLJzWaaZvklHnh61NT4acHPQt3Lq1.w2X4htg9ck.uGhzHt9w954igFEqhLCmggLi9OjPUaiU8TA4yn1oR1T5Qmjm1I5AA0PIu76e0T2u6w2f7thMkv6E7lkoDggrRMta0Z_uVEP3Y1sS8hJw7ycE2PTVtVassRyoN6UuTBHtSZ
    sec-ch-ua: "Google Chrome";v="89", "Chromium";v="89", ";Not A Brand";v="99"
    sec-ch-ua-mobile: ?0
    sec-fetch-dest: document
    sec-fetch-mode: navigate
    sec-fetch-site: same-origin
    sec-fetch-user: ?1
    upgrade-insecure-requests: 1
    user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36
    '''
    my_headers = headers_raw_to_dict(my_headers)  # convert the raw request headers copied from the browser into a dict
    req = requests.get(url, headers=my_headers, params=params)
    req.encoding = req.apparent_encoding
    html = req.text
    return html


# Given the url and a city code, return a list of strings, one HTML tag per job posting:
def get_html_list(url, city_num):
    html_list = list()
    for i in range(1, 12):  # result pages 1 through 11
        params = {'jl': str(city_num), 'kw': '数据分析师', 'p': str(i)}  # jl: city code, kw: search keyword, p: page number
        html = get_html(url, params)
        soup = BeautifulSoup(html, 'html.parser')
        html_list += soup.find_all(name='a', attrs={'class': 'joblist-box__iteminfo iteminfo'})
    for i in range(len(html_list)):
        html_list[i] = str(html_list[i])
    return html_list


# Extract the useful fields from each posting's HTML tag and collect rows for a CSV file:
def get_csv(html_list):

    # Note: chained assignment (city = position = ... = list()) would be wrong here,
    # because every name would point at the same list object, so appending to one
    # would appear to change all of them.
    city, position, company_name, company_size, company_type, salary, education, ability, experience = ([] for _ in range(9))  # bind each name to its own fresh list

    # The three "demand" <li> items hold city, experience and education, in that order:
    demand_re = (r'<li class="iteminfo__line2__jobdesc__demand__item">(.*?)</li> '
                 r'<li class="iteminfo__line2__jobdesc__demand__item">(.*?)</li> '
                 r'<li class="iteminfo__line2__jobdesc__demand__item">(.*?)</li>')
    # The two "compdesc" <span> items hold company type and company size:
    compdesc_re = (r'<span class="iteminfo__line2__compdesc__item">(.*?) </span> '
                   r'<span class="iteminfo__line2__compdesc__item">(.*?) </span>')

    for item in html_list:

        m = re.search(demand_re, item)
        if m:
            city.append(m.group(1))
            experience.append(m.group(2))
            education.append(m.group(3))
        else:
            city.append(' ')
            experience.append(' ')
            education.append(' ')

        m = re.search(r'<span class="iteminfo__line1__jobname__name" title="(.*?)">', item)
        position.append(m.group(1) if m else ' ')

        m = re.search(r'<span class="iteminfo__line1__compname__name" title="(.*?)">', item)
        company_name.append(m.group(1) if m else ' ')

        m = re.search(compdesc_re, item)
        if m:
            company_type.append(m.group(1))
            company_size.append(m.group(2))
        else:
            company_type.append(' ')
            company_size.append(' ')

        m = re.search(r'<p class="iteminfo__line2__jobdesc__salary">([\s\S]*?)<', item)
        salary.append(m.group(1).strip() if m else ' ')

        s = str()
        for w in re.findall(r'<div class="iteminfo__line3__welfare__item">(.*?)</div>', item):
            s = s + w + ' '  # concatenate the welfare/skill keywords, space separated
        ability.append(s)

    table = list(zip(city, position, company_name, company_size, company_type, salary, education, ability, experience))
    return table


if __name__ == '__main__':

    url = 'https://sou.zhaopin.com/'
    citys = {'上海': 538, '北京': 530, '广州': 763, '深圳': 765, '天津': 531, '武汉': 736,
             '西安': 854, '成都': 801, '南京': 635, '杭州': 653, '重庆': 551, '厦门': 682}
    for name, code in citys.items():
        html_list = get_html_list(url, code)
        table = get_csv(html_list)
        df = pd.DataFrame(table, columns=['city', 'position', 'company_name', 'company_size', 'company_type',
                                          'salary', 'education', 'ability', 'experience'])
        df.to_csv(name + '.csv')  # one CSV file per city
#!/usr/bin/python3
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 14 17:47:47 2020 : 2021/4/2 1:30 AM
@Author : liudong
@Software: PyCharm
"""


import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.rcParams['font.sans-serif'] = ['Heiti TC']  # default font that can render Chinese, so plot labels display correctly
plt.rcParams['axes.unicode_minus'] = False      # keep the minus sign '-' from rendering as a box in saved figures
import re
import os
import seaborn as sns
from wordcloud import WordCloud


citys = ['上海', '北京', '广州', '深圳', '天津', '武汉', '西安', '成都', '南京', '杭州', '重庆', '厦门']


# Data cleaning: turn salary ranges such as '8千-1.5万' into a numeric midpoint in yuan
def data_clear():

    for city in citys:

        file_name = './' + city + '.csv'
        df = pd.read_csv(file_name, index_col=0)

        for i in range(0, df.shape[0]):

            s = df.loc[[i], ['salary']].values.tolist()[0][0]
            m = re.search('(.*)-(.*)', s)
            if m:
                a = m.group(1)              # lower bound, e.g. '8千'
                if a[-1] == '千':
                    a = float(a[0:-1]) * 1000
                elif a[-1] == '万':
                    a = float(a[0:-1]) * 10000
                b = m.group(2)              # upper bound, e.g. '1.5万'
                if b[-1] == '千':
                    b = float(b[0:-1]) * 1000
                elif b[-1] == '万':
                    b = float(b[0:-1]) * 10000
                df.loc[[i], ['salary']] = (a + b) / 2   # replace the range with its midpoint
            else:
                df.loc[[i], ['salary']] = ''            # not a range: leave the cell empty

        os.remove(file_name)
        df.to_csv(file_name)                            # overwrite the file with the cleaned data


# Bar chart of the number of data-analysis positions in each city:
def citys_jobs():

    job_num = list()
    for i in citys:
        file_name = './' + i + '.csv'
        df = pd.read_csv(file_name, index_col=0)
        job_num.append(df.shape[0])            # one row per posting, so row count = job count
    df = pd.DataFrame(list(zip(citys, job_num)))
    df = df.sort_values(1, ascending=False)    # sort cities by job count, descending
    x = list(df[0])
    y = list(df[1])

    fig = plt.figure(dpi=200)
    ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
    ax.bar(x, y, alpha=0.8)
    ax.set_title('数据分析职位在全国主要城市的数量分布')
    ax.set_ylim(0, 350)

    plt.savefig('./数据分析职位在全国主要城市的数量分布.jpg')
    plt.show()


# Bar chart of average salary by city:
def citys_salary():

    y = list()
    x = citys

    for i in citys:
        file_name = './' + i + '.csv'
        df = pd.read_csv(file_name, index_col=0)
        y0 = df['salary'].mean()
        y.append(round(y0 / 1000, 1))          # convert to units of 1,000 yuan

    df = pd.DataFrame(list(zip(x, y)))
    df = df.sort_values(1, ascending=False)
    x = list(df[0])
    y = list(df[1])

    fig = plt.figure(dpi=200)
    ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
    ax.bar(x, y, alpha=0.8)
    ax.set_title('数据分析职位在一些主要城市的薪资分布(单位:千)')
    ax.set_ylim(5, 18)
    for a, b, label in zip(x, y, y):  # zip() pairs each city with its bar value so the bars can be labelled
        plt.text(a, b, label, horizontalalignment='center', fontsize=10)  # plt.text() draws label at position (a, b)

    plt.savefig('./数据分析职位在一些主要城市的薪资分布.jpg')
    plt.show()


# Distribution of overall salaries for data-analysis positions
def salary_distribute():

    salary_list = list()
    for i in citys:
        file_name = './' + i + '.csv'
        df = pd.read_csv(file_name, index_col=0)
        salary_list += list(df['salary'])
    salarys = list()
    for i in range(len(salary_list)):
        if not pd.isnull(salary_list[i]):  # the values come from pandas, so test with pd.isnull(); a comparison like salary_list[i] == np.nan would never detect missing values
            salarys.append(round(salary_list[i] / 1000, 1))
    mean = np.mean(salarys)

    plt.figure(dpi=200)
    sns.distplot(salarys, hist=True, kde=True, kde_kws={"color": "r", "lw": 1.5, 'linestyle': '-'})
    plt.axvline(mean, color='r', linestyle=":")
    plt.text(mean, 0.01, '平均薪资: %.1f千' % (mean), color='r', horizontalalignment='center', fontsize=15)
    plt.xlim(0, 50)
    plt.xlabel('薪资分布(单位:千)')
    plt.title('数据分析职位整体薪资分布')
    plt.savefig('./数据分析职位整体薪资分布.jpg')
    plt.show()


# Distribution of education requirements for data-analysis positions
def education_distribute():

    table = pd.DataFrame()
    for i in citys:
        file_name = './' + i + '.csv'
        df = pd.read_csv(file_name, index_col=0)
        table = pd.concat([table, df])
    table = pd.DataFrame(table['education'].value_counts())
    table = table.sort_values(['education'], ascending=False)
    x = list(table.index)
    y = list(table['education'])
    print(x)

    fig = plt.figure(dpi=200)
    ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
    explode = (0, 0, 0, 0.2, 0.4, 0.6, 0.8)  # slice offsets; the tuple length must match the number of education levels
    ax.axis('equal')
    ax.pie(y, labels=x, autopct='%.1f%%', explode=explode)  # autopct formats the percentage on each slice; '%%' is a literal '%'
    ax.set_title('数据分析职位对学历要求的占比')
    ax.legend(x, loc=1)
    plt.savefig('./数据分析职位对学历要求的占比.jpg')
    plt.show()


# Word-frequency statistics for skill keywords
def wordfrequence():

    table = pd.DataFrame()
    for i in citys:
        file_name = './' + i + '.csv'
        df = pd.read_csv(file_name, index_col=0)
        table = pd.concat([table, df])
    l1 = list(table['ability'])
    l2 = list()
    for i in range(len(l1)):
        if not pd.isnull(l1[i]):
            l2.append(l1[i])
    words = ''.join(l2)

    cloud = WordCloud(
        font_path='/System/Library/Fonts/STHeiti Light.ttc',  # path to a font file; the default font cannot render Chinese
        background_color='white',   # background color; the default is black
        max_words=20,               # maximum number of words shown in the cloud
        random_state=1,             # random seed, i.e. how many color schemes to generate
        collocations=False,         # whether to include word pairs; the default True can produce semantically duplicated words
        width=1200, height=900      # image size; the default is small and blurry
    ).generate(words)
    plt.figure(dpi=200)
    plt.imshow(cloud)   # draw the word-cloud image data onto the figure
    plt.axis('off')     # hide the axes of the word-cloud image
    plt.savefig("./技能关键词频统计.jpg")
    plt.show()


if __name__ == "__main__":

    data_clear()
    citys_jobs()
    citys_salary()
    salary_distribute()
    wordfrequence()
V. Summary (10 points)
1. What conclusions can be drawn from the analysis and visualization of the data? Were the expected goals achieved?
Conclusions:
The data above show that salaries for big-data analyst positions rise with seniority.
Taken together, the data give a picture of the entry requirements for big-data analyst positions:
The usual education threshold is an associate or bachelor's degree; 1-5 years of experience leaves plenty of room for growth, while 5-10 years tends to be a plateau.
The postings are concentrated in first- and second-tier cities such as Beijing, Shanghai, Guangzhou, Shenzhen, Tianjin, Wuhan, Xi'an, Chengdu, Nanjing, Hangzhou, Chongqing and Xiamen.
Company types are mostly private and joint-stock enterprises.
Company sizes range from small to medium-and-large internet companies.
Development trends:
With an associate or bachelor's degree and 3-5 years of experience, the average salary is above 12k.
With an associate or bachelor's degree and 1-3 years of experience, it is around 7k-10k.
With an associate or bachelor's degree and under 1 year of experience, it is below 6k.
The choice of city also matters. Combining Chart 5 (city job counts and average salaries), Hangzhou, Guangzhou and Shenzhen offer both high salaries and many openings, while Beijing, Xiamen and Shanghai are among the most attractive cities to work in.