Scraping Python job listings from the 51job (前程無憂) website.


The fields collected in this article are: job title, company name, company location, salary, and posting date.

Create the Scrapy project

scrapy startproject qianchengwuyou

cd qianchengwuyou

scrapy genspider -t crawl qcwy www.xxx.com

(www.xxx.com is only a placeholder domain; the real start URL is set inside the spider below.)

Define the fields to scrape in items.py

import scrapy


class QianchengwuyouItem(scrapy.Item):
    # One Field per column we want to store.
    job_title = scrapy.Field()        # job title
    company_name = scrapy.Field()     # company name
    company_address = scrapy.Field()  # company location
    salary = scrapy.Field()           # salary range
    release_time = scrapy.Field()     # posting date

Write the main spider logic in qcwy.py

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from qianchengwuyou.items import QianchengwuyouItem

class QcwySpider(CrawlSpider):
    name = 'qcwy'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://search.51job.com/list/000000,000000,0000,00,9,99,python,2,1.html?']
    # Paginated result URLs look like:
    # https://search.51job.com/list/000000,000000,0000,00,9,99,python,2,7.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare=
    rules = (
        # Follow every result page; (\d+) captures the page number.
        # Dots are escaped so they match literally instead of "any character".
        Rule(LinkExtractor(allow=r'https://search\.51job\.com/list/000000,000000,0000,00,9,99,python,2,(\d+)\.html'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # position()>1 skips the first div.el, which is the header row.
        list_job = response.xpath('//div[@id="resultList"]/div[@class="el"][position()>1]')
        for job in list_job:
            item = QianchengwuyouItem()
            item['job_title'] = job.xpath('./p/span/a/@title').extract_first()
            item['company_name'] = job.xpath('./span[1]/a/@title').extract_first()
            item['company_address'] = job.xpath('./span[2]/text()').extract_first()
            item['salary'] = job.xpath('./span[3]/text()').extract_first()
            item['release_time'] = job.xpath('./span[4]/text()').extract_first()
            yield item
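Before running the full crawl, you can sanity-check the XPath selectors interactively with scrapy shell. A minimal session, assuming the page still serves the server-rendered result list this spider targets (51job's markup has changed over the years):

scrapy shell "https://search.51job.com/list/000000,000000,0000,00,9,99,python,2,1.html"
>>> rows = response.xpath('//div[@id="resultList"]/div[@class="el"][position()>1]')
>>> len(rows)                                   # one entry per job posting
>>> rows[0].xpath('./p/span/a/@title').get()    # first job title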

Write the database pipeline in pipelines.py

import pymysql

class QianchengwuyouPipeline(object):
    conn = None
    mycursor = None

    def open_spider(self, spider):
        print('Connecting to the database...')
        self.conn = pymysql.connect(host='172.16.25.4', user='root', password='root',
                                    db='scrapy', charset='utf8mb4')
        self.mycursor = self.conn.cursor()

    def process_item(self, item, spider):
        print('Writing to the database...')
        # Use a parameterized query so quotes in the scraped text
        # cannot break (or inject into) the SQL statement.
        sql = 'INSERT INTO qcwy VALUES (NULL, %s, %s, %s, %s, %s)'
        self.mycursor.execute(sql, (
            item['job_title'],
            item['company_name'],
            item['company_address'],
            item['salary'],
            item['release_time'],
        ))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        print('Finished writing to the database...')
        self.mycursor.close()
        self.conn.close()
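The pipeline above assumes a qcwy table already exists in the scrapy database. A minimal one-off setup sketch; the column names and sizes are my own assumptions, inferred from the INSERT statement (an auto-increment id followed by five text columns):

import pymysql

# One-off setup script: create the table the pipeline inserts into.
# Column sizes are assumptions, not taken from the original article.
conn = pymysql.connect(host='172.16.25.4', user='root', password='root',
                       db='scrapy', charset='utf8mb4')
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS qcwy (
            id INT AUTO_INCREMENT PRIMARY KEY,
            job_title VARCHAR(255),
            company_name VARCHAR(255),
            company_address VARCHAR(255),
            salary VARCHAR(64),
            release_time VARCHAR(32)
        ) DEFAULT CHARSET=utf8mb4
    """)
conn.commit()
conn.close()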

Enable the item pipeline and set a User-Agent in settings.py

ITEM_PIPELINES = {
   'qianchengwuyou.pipelines.QianchengwuyouPipeline': 300,
}
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.57.2 (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2'
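If the crawl returns nothing, it is often because Scrapy obeys robots.txt by default in generated projects. Both settings below are standard Scrapy options; the delay value is an arbitrary polite choice:

ROBOTSTXT_OBEY = False
DOWNLOAD_DELAY = 1  # seconds between requests; arbitrary polite value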

Run the crawler, writing the results to a .json file at the same time

scrapy crawl qcwy -o qcwy.json --nolog
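Scrapy's JSON exporter escapes non-ASCII characters by default, so Chinese text in qcwy.json appears as \uXXXX sequences. The standard FEED_EXPORT_ENCODING option in settings.py keeps the output readable:

FEED_EXPORT_ENCODING = 'utf-8'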

Finally, check that the rows were written to the database successfully.

done.