Scraping Kuaishou videos with Python -- analyzing the JSON data


Open the Kuaishou homepage and analyze the page:
![在这里插入图片描述](https://img-blog.csdnimg.cn/20190823112841208.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L0RyZWFtX19fX0ZseQ==,size_16,color_FFFFFF,t_70)
On a platform like Kuaishou, analyzing the page source yields none of the information we want, so we have to capture the JSON data instead. The videos are delivered to the front end as JSON and rendered in a loop, so let's look at the captured JSON packet:
![在这里插入图片描述](https://img-blog.csdnimg.cn/20190823113342286.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L0RyZWFtX19fX0ZseQ==,size_16,color_FFFFFF,t_70)
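Navigating that packet in Python is straightforward. The sketch below uses a minimal mock of the response shape (the field names come from the GraphQL query used later; the real payload carries many more fields per video, and the ids here are made up):

```python
import json

# Minimal mock of the GraphQL response: data -> videoFeeds -> list,
# where each entry has a user id and a photoId.
sample = json.loads('''
{
  "data": {
    "videoFeeds": {
      "list": [
        {"user": {"id": "demo_user"}, "photoId": "3xabc123"}
      ],
      "pcursor": "100"
    }
  }
}
''')

# Walk the list exactly the way the scraper below does
for feed in sample['data']['videoFeeds']['list']:
    print(feed['user']['id'], feed['photoId'])
```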
Then analyze the detail-page link:
![在这里插入图片描述](https://img-blog.csdnimg.cn/20190823113642716.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L0RyZWFtX19fX0ZseQ==,size_16,color_FFFFFF,t_70)
Next, look at the JSON data:
![在这里插入图片描述](https://img-blog.csdnimg.cn/2019082311385577.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L0RyZWFtX19fX0ZseQ==,size_16,color_FFFFFF,t_70)
A note: the page was refreshed here, so the two links shown differ, but the method is the same.
Then concatenate the second-level path and request the detail page:
![在这里插入图片描述](https://img-blog.csdnimg.cn/20190823114315219.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L0RyZWFtX19fX0ZseQ==,size_16,color_FFFFFF,t_70)
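The second-level path is just the `live.kuaishou.com` profile prefix plus the user id and `photoId` pulled from the feed JSON. A minimal sketch (the ids are made up for illustration):

```python
# Build the detail-page URL from a feed entry's user id and photoId,
# mirroring the concatenation in the scraper below.
def detail_url(user_id, photo_id):
    return 'https://live.kuaishou.com/u/' + user_id + '/' + photo_id

print(detail_url('demo_user', '3xabc123'))
# → https://live.kuaishou.com/u/demo_user/3xabc123
```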
Finally, parse the detail page with the usual methods and scrape the data.
Here is the code:

```python
import requests
from bs4 import BeautifulSoup
import json
import time

# Suppress the InsecureRequestWarning triggered by verify=False
requests.packages.urllib3.disable_warnings()

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36',
}

def first_get_request(first_request):
    first_data = json.loads(first_request.text)
    print(first_data)
    # Drill into the second level: the list of video feeds
    first_two_data = first_data['data']['videoFeeds']['list']
    for num in first_two_data:
        two_url = 'https://live.kuaishou.com/u/' + num['user']['id'] + '/' + num['photoId']
        # print(two_url)
        two_get_request(two_url)


def two_get_request(two_url):
    two_data = requests.get(url=two_url, headers=headers, verify=False)
    soup = BeautifulSoup(two_data.text, 'lxml')
    # Avatar
    name_photo = soup.select('.profile-user img')[0]['src']
    # Username
    name = soup.select('.video-card-footer-user-name')[0].text
    # Like count
    number = soup.select('.profile-user-count-info > .watching-count')[0].text
    # Heart count
    num = soup.select('.profile-user-count-info > .like-count')[0].text
    # Description
    text = soup.select('.profile-user > .profile-user-desc > span')[0].text
    item = {
        '头像': name_photo,
        '名字': name,
        '内容': text,
        '点赞量': number,
        '点心量': num
    }
    with open('爬取的信息.txt', 'a', encoding='utf8') as f:
        f.write(str(item) + '\n')

    time.sleep(3)

def main():
    first_url = 'https://live.kuaishou.com/graphql'
    formdata = {
        "operationName": "videoFeedsQuery", "variables": {"count": 50, "pcursor": "50"},
        "query": "fragment VideoMainInfo on VideoFeed {\n photoId\n caption\n thumbnailUrl\n poster\n viewCount\n likeCount\n commentCount\n timestamp\n workType\n type\n useVideoPlayer\n imgUrls\n imgSizes\n magicFace\n musicName\n location\n liked\n onlyFollowerCanComment\n width\n height\n expTag\n __typename\n}\n\nquery videoFeedsQuery($pcursor: String, $count: Int) {\n videoFeeds(pcursor: $pcursor, count: $count) {\n list {\n user {\n id\n eid\n profile\n name\n __typename\n }\n ...VideoMainInfo\n __typename\n }\n pcursor\n __typename\n }\n}\n"
    }
    # Request the Kuaishou GraphQL endpoint; json= serializes the nested
    # dict as a JSON body, which /graphql expects (data= would form-encode it)
    first_request = requests.post(url=first_url, headers=headers, json=formdata, verify=False)
    # Parse the feed links from the homepage response
    first_get_request(first_request)

if __name__ == '__main__':
    main()
```
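One small refinement worth considering: the scraper writes `str(item)` per line, which is awkward to parse back later. Writing one JSON object per line (JSON Lines) keeps the output machine-readable, and `ensure_ascii=False` preserves the Chinese keys instead of escaping them. A sketch with dummy values:

```python
import json

# Same fields the scraper collects, with placeholder values
item = {
    '头像': 'https://example.com/avatar.jpg',
    '名字': 'demo',
    '内容': 'hello',
    '点赞量': '10',
    '点心量': '5',
}

# One JSON object per line; each line round-trips via json.loads
line = json.dumps(item, ensure_ascii=False)
print(line)
```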

And in the end we get exactly what we were after:


![在这里插入图片描述](https://img-blog.csdnimg.cn/20210608151750993.gif)

