
Python Crawler Tutorial 33-100: Scraping Aquaman (《海王》) Comments with Scrapy


1. Analysis before scraping the Aquaman comment data

Aquaman has hit theaters and the word of mouth exploded, which means we have one more movie to scrape and analyze. Lovely~

Here is a sample comment:

Just got out of the midnight screening. Director Wan's films are always good, whether Furious 7, Saw, or The Conjuring. The fights and the sound design are beyond reproach, truly stunning. In short, DC claws one back ( ̄▽ ̄). It beats Justice League by more than a little (my personal feeling). Also, Amber Heard is genuinely beautiful; Wan picks his casts well. Honestly the first time I've seen a movie this awesome; the transitions and effects are off the charts.

2. Scraping the Aquaman comment data

As before, the data comes from Maoyan's comment API. For this part we bring out the big gun and use Scrapy, even though under normal circumstances plain requests would be enough.

The URL to scrape:

/mmdb/comments/movie/249342.json?_v_=yes&offset=15&startTime=-12-11%%3A58%3A43

Key parameters:

url: /mmdb/comments/movie/249342.json
offset: 15
startTime: the starting timestamp
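Before wiring everything into Scrapy, it can be worth sanity-checking the endpoint with plain requests. A minimal sketch, with the caveat that the host is an assumption (the article's URLs have the domain stripped; m.maoyan.com is a guess based on Maoyan's mobile site):

import requests

# Assumed host: the article elides the domain, m.maoyan.com is a guess.
BASE = "http://m.maoyan.com"

params = {"_v_": "yes", "offset": 0, "startTime": "0"}
resp = requests.get(BASE + "/mmdb/comments/movie/249342.json", params=params, timeout=10)
data = resp.json()
print(len(data.get("cmts", [])))  # number of comments returned on this page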

The Scrapy code for crawling Maoyan is particularly simple; I just split it across a few .py files.

Haiwang.py

import scrapy
import json
from haiwang.items import HaiwangItem


class HaiwangSpider(scrapy.Spider):
    name = 'Haiwang'
    allowed_domains = ['']
    start_urls = ['/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime=0']

    def parse(self, response):
        print(response.url)
        body_data = response.body_as_unicode()
        js_data = json.loads(body_data)
        item = HaiwangItem()
        # each page of the JSON carries its comments under the "cmts" key
        for info in js_data["cmts"]:
            item["nickName"] = info["nickName"]
            item["cityName"] = info["cityName"] if "cityName" in info else ""
            item["content"] = info["content"]
            item["score"] = info["score"]
            item["startTime"] = info["startTime"]
            item["approve"] = info["approve"]
            item["reply"] = info["reply"]
            item["avatarurl"] = info["avatarurl"]
            yield item

        # page onward by using the last comment's startTime as the next anchor
        yield scrapy.Request(
            "/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime={}".format(item["startTime"]),
            callback=self.parse)
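One fragile spot in the spider above: if a response arrives without a "cmts" key (for example once the comment history is exhausted), js_data["cmts"] raises KeyError and the follow-up request reuses a stale startTime. A minimal defensive sketch of the same parse logic (a hypothetical variant, trimmed to a few fields, not from the original article):

import json
import scrapy
from haiwang.items import HaiwangItem


class SafeHaiwangSpider(scrapy.Spider):
    # hypothetical hardened variant of HaiwangSpider above
    name = 'HaiwangSafe'
    start_urls = ['/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime=0']

    def parse(self, response):
        js_data = json.loads(response.text)
        cmts = js_data.get("cmts")
        if not cmts:
            return  # empty page: stop following further pages
        item = HaiwangItem()
        for info in cmts:
            item["nickName"] = info["nickName"]
            item["content"] = info["content"]
            item["startTime"] = info["startTime"]
            yield item
        # only page onward when this response actually held comments
        yield scrapy.Request(
            "/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime={}".format(item["startTime"]),
            callback=self.parse)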

settings.py

In settings.py you need to configure the request headers:

DEFAULT_REQUEST_HEADERS = {
    "Referer": "/movie/249342/comments?_v_=yes",
    "User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36",
    "X-Requested-With": "superagent"
}
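If you would rather keep these headers next to the spider instead of in the global settings, Scrapy also reads a per-spider custom_settings dict; a small sketch with the same values:

import scrapy


class HaiwangSpider(scrapy.Spider):
    name = 'Haiwang'
    # per-spider override; takes precedence over the project settings
    custom_settings = {
        "DEFAULT_REQUEST_HEADERS": {
            "Referer": "/movie/249342/comments?_v_=yes",
            "User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36",
            "X-Requested-With": "superagent",
        }
    }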

A few crawl-behavior settings also need configuring:

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1

# Disable cookies (enabled by default)
COOKIES_ENABLED = False
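If a fixed one-second delay is too blunt, Scrapy's AutoThrottle extension (the "autothrottle settings" the comment above refers to) can adapt the delay to server latency; an optional sketch, not part of the original setup:

AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1   # initial download delay, in seconds
AUTOTHROTTLE_MAX_DELAY = 10    # upper bound for the adaptive delay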

Enable the item pipeline:

# Configure item pipelines
# See /en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'haiwang.pipelines.HaiwangPipeline': 300,
}

items.py: declare the fields you want to collect

import scrapy


class HaiwangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    nickName = scrapy.Field()
    cityName = scrapy.Field()
    content = scrapy.Field()
    score = scrapy.Field()
    startTime = scrapy.Field()
    approve = scrapy.Field()
    reply = scrapy.Field()
    avatarurl = scrapy.Field()
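A scrapy.Item behaves like a dict that only accepts the declared fields, which is what lets the pipeline below index item["nickName"] and friends; a quick interactive check (hypothetical, just for illustration):

from haiwang.items import HaiwangItem

item = HaiwangItem()
item["nickName"] = "test"
item["score"] = 5
print(dict(item))        # {'nickName': 'test', 'score': 5}
# item["foo"] = 1        # undeclared field: scrapy raises KeyError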

pipelines.py: save the data, writing it into a CSV file

import os
import csv


class HaiwangPipeline(object):
    def __init__(self):
        store_file = os.path.dirname(__file__) + '/spiders/haiwang.csv'
        self.file = open(store_file, "a+", newline="", encoding="utf-8")
        self.writer = csv.writer(self.file)

    def process_item(self, item, spider):
        try:
            self.writer.writerow((
                item["nickName"],
                item["cityName"],
                item["content"],
                item["approve"],
                item["reply"],
                item["startTime"],
                item["avatarurl"],
                item["score"],
            ))
        except Exception as e:
            print(e.args)
        return item  # hand the item back so any later pipeline can still see it

    def close_spider(self, spider):
        self.file.close()
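Because the file is opened in append mode, the CSV never gets a header row and grows across runs. If you want headers, one hedged alternative is csv.DictWriter with the header written only when the file is new (a hypothetical CsvHaiwangPipeline, not the article's code):

import os
import csv


class CsvHaiwangPipeline(object):
    FIELDS = ["nickName", "cityName", "content", "approve",
              "reply", "startTime", "avatarurl", "score"]

    def open_spider(self, spider):
        store_file = os.path.join(os.path.dirname(__file__), "spiders", "haiwang.csv")
        is_new = not os.path.exists(store_file)
        self.file = open(store_file, "a+", newline="", encoding="utf-8")
        self.writer = csv.DictWriter(self.file, fieldnames=self.FIELDS)
        if is_new:
            self.writer.writeheader()  # write the header only once

    def process_item(self, item, spider):
        self.writer.writerow(dict(item))
        return item

    def close_spider(self, spider):
        self.file.close()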

begin.py: a small script to launch the crawl

from scrapy import cmdline

cmdline.execute(("scrapy crawl Haiwang").split())
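Equivalently, the crawl can be driven through Scrapy's CrawlerProcess API instead of shelling out through cmdline; a sketch:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl("Haiwang")  # the spider's name attribute
process.start()           # blocks until the crawl is finished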

Run it, sit back, and wait for the data to come in.
