Some rambling before we crawl
Whole-site crawlers are sometimes actually the easier kind to build, because the crawl rules are simple to establish; you just have to handle the anti-crawling measures properly. Today we're crawling Zhihu, and we'll keep using Scrapy.
Granted, for a small task like this Scrapy is a sledgehammer for a nut, but at this stage the series needs to keep exercising Scrapy as a bridge to what comes next, so I banged this out in no time.
Step one: pick a seed page to serve as the crawler's entry point:
https://www.zhihu.com/people/zhang-jia-wei/following
The information we need is marked below; every box in the screenshot is a field we want to extract.
Getting the user's following list
Fetch the page with the code below and you'll find the response is HTML with JSON spliced into it, which adds a fair amount of parsing cost:
import scrapy

class ZhihuSpider(scrapy.Spider):
    name = 'Zhihu'
    allowed_domains = ['www.zhihu.com']
    start_urls = ['https://www.zhihu.com/people/zhang-jia-wei/following']

    def parse(self, response):
        all_data = response.body_as_unicode()
        print(all_data)
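To confirm where the JSON part lives before writing any spider code, you can probe the page outside Scrapy. Here is a minimal one-off sketch with requests, assuming the page still embeds its state in a js-initialData script tag (the initialState -> entities -> users path is the same one the spider below relies on; Zhihu's anti-crawling may still demand cookies or a login for some profiles):

import re
import json

import requests

# one-off probe: fetch the seed page with a desktop UA
resp = requests.get(
    "https://www.zhihu.com/people/zhang-jia-wei/following",
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"},
)
match = re.search(r'<script id="js-initialData" type="text/json">(.*?)</script>', resp.text)
if match:
    data = json.loads(match.group(1))
    # the following list sits under initialState -> entities -> users;
    # the dict keys are the users' URL tokens
    print(list(data["initialState"]["entities"]["users"].keys()))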
First, configure the basic environment: the request interval, the crawl User-Agent, whether to persist cookies, and the random-UA middleware enabled through DOWNLOADER_MIDDLEWARES.
The middlewares.py file:
import random

from zhihu.settings import USER_AGENT_LIST  # the UA pool defined in settings.py

class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        # pick a random UA for every outgoing request
        rand_use = random.choice(USER_AGENT_LIST)
        if rand_use:
            request.headers.setdefault('User-Agent', rand_use)
The settings.py file:
BOT_NAME = 'zhihu'

SPIDER_MODULES = ['zhihu.spiders']
NEWSPIDER_MODULE = 'zhihu.spiders'

USER_AGENT_LIST = [  # you can list as many as you like; one is enough for testing
    "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"
]

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# See also autothrottle settings and docs
DOWNLOAD_DELAY = 2

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}

# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'zhihu.middlewares.RandomUserAgentMiddleware': 400,
}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'zhihu.pipelines.ZhihuPipeline': 300,
}
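One caveat, as a judgment call on my part: Scrapy's built-in UserAgentMiddleware also runs at priority 400 and also sets the header with setdefault, so whichever runs first wins. To guarantee our random UA is the one that sticks, the usual trick is to switch the built-in one off:

# a safer variant of the DOWNLOADER_MIDDLEWARES block above:
# disable Scrapy's built-in UA middleware so only ours touches the header
DOWNLOADER_MIDDLEWARES = {
    'zhihu.middlewares.RandomUserAgentMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}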
The main crawl function, with notes:
- start_requests handles the very first request and serves as the program's entry point.
- The code below handles two cases: the HTML part and the JSON part.
- The JSON part is located with the re module, then parsed with the json module.
- extract_first() returns the first item of an XPath match list.
- dont_filter=False keeps Scrapy's URL deduplication switched on.
import re
import json

import scrapy
from scrapy.selector import Selector

from zhihu.items import ZhihuItem

class ZhihuSpider(scrapy.Spider):
    name = 'Zhihu'
    allowed_domains = ['www.zhihu.com']
    # note the {} placeholder: it is filled with each user's URL token
    start_urls = ['https://www.zhihu.com/people/{}/following']

    # entry point: seed the crawl with the first user
    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url.format("zhang-jia-wei"), callback=self.parse)

    def parse(self, response):
        print("Fetching info for {}".format(response.url))
        all_data = response.body_as_unicode()
        select = Selector(response)

        # information every Zhihu user has
        username = select.xpath("//span[@class='ProfileHeader-name']/text()").extract_first()  # nickname
        sex = select.xpath("//div[@class='ProfileHeader-iconWrapper']/svg/@class").extract()
        if len(sex) > 0:
            # "female" contains the substring "male", so test for female first
            sex = 0 if "female" in sex[0] else 1
        else:
            sex = -1
        answers = select.xpath("//li[@aria-controls='Profile-answers']/a/span/text()").extract_first()
        asks = select.xpath("//li[@aria-controls='Profile-asks']/a/span/text()").extract_first()
        posts = select.xpath("//li[@aria-controls='Profile-posts']/a/span/text()").extract_first()
        columns = select.xpath("//li[@aria-controls='Profile-columns']/a/span/text()").extract_first()
        pins = select.xpath("//li[@aria-controls='Profile-pins']/a/span/text()").extract_first()
        # the user may have privacy settings enabled; these numbers are only
        # visible after logging in (or carrying cookies)!
        followers = select.xpath("//strong[@class='NumberBoard-itemValue']/@title").extract()

        item = ZhihuItem()
        item["username"] = username
        item["sex"] = sex
        item["answers"] = answers
        item["asks"] = asks
        item["posts"] = posts
        item["columns"] = columns
        item["pins"] = pins
        item["following"] = followers[0] if len(followers) > 0 else 0
        item["followers"] = followers[1] if len(followers) > 1 else 0
        yield item

        # grab the first page of the following list from the embedded JSON
        pattern = re.compile(r'<script id="js-initialData" type="text/json">(.*?)</script>')
        match = pattern.search(all_data)
        if match:
            # the dict keys are the users' URL tokens
            users = json.loads(match.group(1))["initialState"]["entities"]["users"]
            for user in users:
                yield scrapy.Request(self.start_urls[0].format(user),
                                     callback=self.parse, dont_filter=False)
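parse() fills a ZhihuItem, which the post never shows. Here is a minimal items.py sketch inferred from the fields used above (with the original misspelled field names straightened out):

import scrapy

class ZhihuItem(scrapy.Item):
    # fields inferred from what parse() fills; not shown in the original post
    username = scrapy.Field()
    sex = scrapy.Field()
    answers = scrapy.Field()
    asks = scrapy.Field()
    posts = scrapy.Field()
    columns = scrapy.Field()
    pins = scrapy.Field()
    following = scrapy.Field()
    followers = scrapy.Field()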
When extracting the data above I deliberately skipped part of it; that part can also be pulled out with regular expressions.
For storage I'm sticking with MongoDB.
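The ZhihuPipeline registered in ITEM_PIPELINES above isn't shown in the post either; here is a minimal sketch using pymongo, where the connection string, database, and collection names are my assumptions:

import pymongo

class ZhihuPipeline(object):
    # a minimal sketch; connection string, database and collection
    # names are assumptions, adjust them to your own MongoDB setup
    def open_spider(self, spider):
        self.client = pymongo.MongoClient("mongodb://localhost:27017")
        self.collection = self.client["zhihu"]["users"]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # one document per crawled user
        self.collection.insert_one(dict(item))
        return item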