
Scraping Douban Movie Top250 with Scrapy and Storing It in MySQL

Date: 2023-03-18 16:34:29


I recently used Scrapy to crawl the Douban Movie Top250 data and cover images and save them to MySQL. Here is how it works.

Create the project with `scrapy startproject doubansql` (the pipeline path in settings.py below assumes the project is named doubansql).

items.py

```python
import scrapy


class DoubansqlItem(scrapy.Item):
    # One field per column of the target MySQL table
    moviename = scrapy.Field()      # movie title
    dbimgurl = scrapy.Field()       # local path of the saved cover image
    classname = scrapy.Field()      # genres
    grade = scrapy.Field()          # rating score
    count = scrapy.Field()          # number of ratings
    introduction = scrapy.Field()   # one-line synopsis
```
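The pipeline below assumes a table named `doub_doubdata` already exists. The original post does not show the schema, so the column types here are assumptions; a minimal DDL sketch:

```python
# Minimal DDL for the target table. Column names match the item fields;
# the types and lengths are assumptions, since the post omits the schema.
CREATE_TABLE_SQL = """
CREATE TABLE IF NOT EXISTS doub_doubdata (
    id INT AUTO_INCREMENT PRIMARY KEY,
    moviename VARCHAR(100),
    dbimgurl VARCHAR(255),
    classname VARCHAR(100),
    grade VARCHAR(10),
    count VARCHAR(20),
    introduction VARCHAR(255)
) DEFAULT CHARSET=utf8;
"""
# Run once against the doubandata database, e.g. cursor.execute(CREATE_TABLE_SQL)
```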

The PyMySQL insertion code goes in pipelines.py:

```python
import pymysql

from doubansql import settings  # the project's settings module


class DoubansqlPipeline(object):
    def __init__(self):
        self.connect = pymysql.connect(
            host=settings.MYSQL_HOST,
            port=3306,
            db=settings.MYSQL_DBNAME,
            user=settings.MYSQL_USER,
            passwd=settings.MYSQL_PASSWD,
            charset='utf8',
            use_unicode=True)
        self.cursor = self.connect.cursor()

    def process_item(self, item, spider):
        # The item fields map one-to-one onto the columns of doub_doubdata
        self.cursor.execute(
            """insert into doub_doubdata(moviename,dbimgurl,classname,grade,count,introduction)
               values(%s,%s,%s,%s,%s,%s)""",
            (item['moviename'],
             item['dbimgurl'],
             item['classname'],
             item['grade'],
             item['count'],
             item['introduction']))
        self.connect.commit()  # commit the insert
        return item
```
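The insert logic can be exercised without a MySQL server by swapping in an in-memory SQLite database; note that SQLite uses `?` placeholders where pymysql uses `%s`, and the sample item values here are illustrative:

```python
import sqlite3

# In-memory stand-in for the MySQL table, to check the insert locally.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("""CREATE TABLE doub_doubdata(
    moviename TEXT, dbimgurl TEXT, classname TEXT,
    grade TEXT, count TEXT, introduction TEXT)""")

# Illustrative item, shaped like what the spider yields.
item = {'moviename': '肖申克的救赎', 'dbimgurl': 'cover/肖申克的救赎.jpg',
        'classname': '犯罪', 'grade': '9.7', 'count': '3000000人评价',
        'introduction': '希望让人自由。'}

# Same column order as the pipeline; SQLite's placeholder is ? not %s.
cur.execute("insert into doub_doubdata values(?,?,?,?,?,?)",
            (item['moviename'], item['dbimgurl'], item['classname'],
             item['grade'], item['count'], item['introduction']))
conn.commit()

row = cur.execute("select moviename, grade from doub_doubdata").fetchone()
print(row)  # ('肖申克的救赎', '9.7')
```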

Set the database information in settings.py:

```python
ITEM_PIPELINES = {
    'doubansql.pipelines.DoubansqlPipeline': 300,
}

MYSQL_HOST = 'localhost'      # database host
MYSQL_DBNAME = 'doubandata'   # database name
MYSQL_USER = 'root'           # database user
MYSQL_PASSWD = '1234567'      # database password
```

Create doubanmovie.py and start crawling:

```python
import os
import urllib.request

import scrapy

from doubansql.items import DoubansqlItem


class DoubanmovieSpider(scrapy.Spider):
    name = 'doubanmovie'
    allowed_domains = ['movie.douban.com']
    start_urls = ['https://movie.douban.com/top250']

    def parse(self, response):
        movies = response.xpath('//*[@id="content"]/div/div[1]/ol/li')
        for each_movie in movies:
            # Create a fresh item per movie (creating it once outside the
            # loop would yield the same object over and over)
            item = DoubansqlItem()
            moviename = each_movie.xpath('div/div[2]/div[1]/a/span[1]/text()').extract_first()
            dbimgurl = each_movie.xpath('div/div[1]/a/img/@src').extract_first()
            classname = each_movie.xpath('div/div[2]/div[2]/p[2]/span/text()').extract_first()
            grade = each_movie.xpath('div/div[2]/div[2]/div/span[2]/text()').extract_first()
            count = each_movie.xpath('div/div[2]/div[2]/div/span[4]/text()').extract_first()
            introduction = each_movie.xpath('div/div[2]/div[2]/p[1]/text()').extract_first()
            introduction = introduction.strip() if introduction else ''

            # Download the cover image into ./cover/<title>.jpg
            filename = moviename + '.jpg'
            dirpath = './cover'
            if not os.path.exists(dirpath):
                os.makedirs(dirpath)
            filepath = os.path.join(dirpath, filename)
            urllib.request.urlretrieve(dbimgurl, filepath)
            cover = 'cover/' + filename

            item['moviename'] = moviename
            item['dbimgurl'] = cover
            item['classname'] = classname
            item['grade'] = grade
            item['count'] = count
            item['introduction'] = introduction
            yield item

        # Pagination: Douban exposes the next page as a <link> element
        # inside <span class="next">
        next_link = response.xpath("//span[@class='next']/link/@href").extract_first()
        if next_link:
            yield scrapy.Request('https://movie.douban.com/top250' + next_link,
                                 callback=self.parse)
```
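The pagination rule is easy to check in isolation: the `href` inside `span.next` is a relative query string like `?start=25&filter=`, so concatenating it onto the base URL yields the next page. Here is a sketch against a trimmed, well-formed stand-in for Douban's pagination markup (the snippet itself is an assumption, not a page capture):

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for the pagination markup on the listing page.
snippet = """<div><span class="next">
  <link rel="next" href="?start=25&amp;filter=" />
  <a href="?start=25&amp;filter=">后页</a>
</span></div>"""

root = ET.fromstring(snippet)
# Same path the spider's XPath takes: span.next -> link -> @href
href = root.find(".//span[@class='next']/link").get('href')
next_url = 'https://movie.douban.com/top250' + href
print(next_url)  # https://movie.douban.com/top250?start=25&filter=
```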
