[头歌 EduCoder] Scrapy Crawlers (2): Scraping Data from Popular Websites

Level 1: Scraping the Maoyan Movies TOP 100 Chart

Task for this level: scrape the 100 movies on the Maoyan TOP 100 chart and save their information to a local MySQL database.

Related knowledge

To complete this task, you need to be familiar with:

basic MySQL (assumed to be already known);

what the individual settings in Scrapy's settings.py mean (a minimal sketch follows this list);

crawling multi-page content from a site (pagination);

the details of data matching.
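
The pipeline below only runs if it is registered in settings.py. A minimal sketch of the settings that typically matter for this exercise (the concrete values are assumptions, not given in the original, and depend on your project):

# settings.py (sketch; adjust the pipeline path and values to your own project)
BOT_NAME = 'maoyan'

# whether to obey robots.txt (often disabled for these training targets)
ROBOTSTXT_OBEY = False

# register the pipeline so that process_item() is actually called;
# the number (0-1000) controls the order when several pipelines are enabled
ITEM_PIPELINES = {
    'maoyan.pipelines.MaoyanPipeline': 300,
}

# optional: slow requests down a little to be polite to the target server
DOWNLOAD_DELAY = 0.5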

step1/maoyan/maoyan/items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class MaoyanItem(scrapy.Item):

    #********** Begin **********#
    name = scrapy.Field()
    starts = scrapy.Field()
    releasetime = scrapy.Field()
    score = scrapy.Field()
    
    #********** End **********#

step1/maoyan/maoyan/pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql
from maoyan import settings
class MaoyanPipeline(object):
    def process_item(self, item, spider):
        #********** Begin **********#
        # 1. Connect to the database
        connection = pymysql.connect(
            host='localhost',  # local database
            port=3306,         # database port
            user='root',       # your MySQL user name
            passwd='123123',   # your password
            db='mydb',         # database name
            charset='utf8',    # default encoding
        )
        # 2. Create the table, insert this record, close the connection, and return the item
        name = item['name']
        starts = item['starts']
        releasetime = item['releasetime']
        score = item['score']
        try:
            with connection.cursor() as cursor:
                sql1 = 'Create Table If Not Exists mymovies(name varchar(50) CHARACTER SET utf8 NOT NULL,starts text CHARACTER SET utf8 NOT NULL,releasetime varchar(50) CHARACTER SET utf8 DEFAULT NULL,score varchar(20) CHARACTER SET utf8 NOT NULL,PRIMARY KEY(name))'
                # insert this movie's record
                sql2 = "Insert into mymovies values ('%s','%s','%s','%s')" % (name, starts, releasetime, score)
                cursor.execute(sql1)
                cursor.execute(sql2)
            # commit the insert
            connection.commit()
        finally:
            # close the connection
            connection.close()    
            return item
        #********** End **********#
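
The %-formatted INSERT above is enough for this exercise, but it breaks as soon as a field contains a single quote. As an alternative, pymysql also supports parameterized queries, where the driver escapes the values itself. A minimal sketch, reusing the connection and variables already defined in process_item above:

# sketch: the same insert with parameter binding instead of string formatting
with connection.cursor() as cursor:
    cursor.execute(
        'Insert into mymovies values (%s, %s, %s, %s)',
        (name, starts, releasetime, score),
    )
connection.commit()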

step1/maoyan/maoyan/spiders/movies.py

# -*- coding: utf-8 -*-
import scrapy
from maoyan.items import MaoyanItem

class MoviesSpider(scrapy.Spider):
    name = 'movies'
    allowed_domains = ['127.0.0.1']
    offset = 0
    url = "http://127.0.0.1:8080/board/4?offset="
    #********** Begin **********#
    # 1. Build the start URL; offset is appended so it can be increased for pagination
    start_urls = [url + str(offset)]
    # 2. Define the parse() callback
    def parse(self, response):
        movies = response.xpath("//div[@class='board-item-content']")
        for each in movies:
            item = MaoyanItem()
            # movie title
            name = each.xpath(".//div/p/a/text()").extract()[0]
            # starring actors
            starts = each.xpath(".//div[1]/p/text()").extract()[0]
            # release date
            releasetime = each.xpath(".//div[1]/p[3]/text()").extract()[0]
            score1 = each.xpath(".//div[2]/p/i[1]/text()").extract()[0]
            score2 = each.xpath(".//div[2]/p/i[2]/text()").extract()[0]
            # score (integer part + fractional part)
            score = score1 + score2
            item['name'] = name
            item['starts'] = starts
            item['releasetime'] = releasetime
            item['score'] = score
            yield item
        # 3. After the loop, add 10 to offset and issue a new request to crawl the next page
        if self.offset < 90:
            self.offset += 10
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)

    #********** End **********#
Level 2: Scraping Novels from the First Page of the Fantasy Category

Task for this level: scrape 3 novels from the target page and save them to a local MySQL database. The target page is the first page of the fantasy category on Quanshu.net (全书网).

To complete this task, you need to be familiar with:

XPath matching: looping to extract content from identically structured tags (see the small sketch after this list);

handling multiple item classes;

crawling data from second-level (detail) pages.
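
The first point is the one that most often trips people up: when looping over a node set, the inner XPath has to start with "." so it is evaluated relative to the current node rather than the whole document. A tiny self-contained sketch (the HTML string is made up for illustration):

from scrapy.selector import Selector

html = '<ul><li><a>Chapter 1</a></li><li><a>Chapter 2</a></li></ul>'
for li in Selector(text=html).xpath('//ul/li'):
    # the leading '.' keeps the match relative to the current <li>
    print(li.xpath('.//a/text()').extract_first())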

step2/NovelProject/NovelProject/items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy
# stores the overall information of each novel
class NovelprojectItem(scrapy.Item):
    #********** Begin **********#
    name = scrapy.Field()
    author = scrapy.Field()
    state = scrapy.Field()
    description = scrapy.Field()
    
    
    #********** End **********#

# stores the novel chapters separately
class NovelprojectItem2(scrapy.Item):
    #********** Begin **********#
    tablename = scrapy.Field()           # needed when naming the chapter table
    title = scrapy.Field()
    
    #********** End **********#

step2/NovelProject/NovelProject/pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql
from NovelProject.items import NovelprojectItem,NovelprojectItem2
class NovelprojectPipeline(object):
    def process_item(self, item, spider):

        #********** Begin **********#
    
        # 1. Connect to the local database mydb
        connection = pymysql.connect(
            host='localhost',    # local database
            port=3306,           # port number
            user='root',         # your MySQL user name
            passwd='123123',     # your password
            db='mydb',           # database name
            charset='utf8',      # default encoding
        )

        # 2. Handle items coming from NovelprojectItem (return the item when done)
        if isinstance(item, NovelprojectItem):
            # pull the fields out of the item
            name = item['name']
            author = item['author']
            state = item['state']
            description = item['description']
            try:
                with connection.cursor() as cursor:
                    # write the novel info
                    sql1 = 'Create Table If Not Exists novel(name varchar(20) CHARACTER SET utf8 NOT NULL,author varchar(10) CHARACTER SET utf8,state varchar(20) CHARACTER SET utf8,description text CHARACTER SET utf8,PRIMARY KEY (name))'
                    sql2 = "Insert into novel values ('%s','%s','%s','%s')" % (name, author, state, description)
                    cursor.execute(sql1)
                    cursor.execute(sql2)
                # commit the insert
                connection.commit()
            finally:
                # close the connection
                connection.close()
            return item

        # 3. Handle items coming from NovelprojectItem2 (return the item when done)
        elif isinstance(item, NovelprojectItem2):
            tablename = item['tablename']
            title = item['title']
            try:
                with connection.cursor() as cursor:
                    # write the novel's chapter titles
                    sql3 = 'Create Table If Not Exists %s(title varchar(20) CHARACTER SET utf8 NOT NULL,PRIMARY KEY (title))' % tablename
                    sql4 = "Insert into %s values ('%s')" % (tablename, title)
                    cursor.execute(sql3)
                    cursor.execute(sql4)
                connection.commit()
            finally:
                connection.close()
            return item
        
        #********** End **********#

step2/NovelProject/NovelProject/spiders/novel.py

# -*- coding: utf-8 -*-
import scrapy
import re
from scrapy.http import Request
from NovelProject.items import NovelprojectItem
from NovelProject.items import NovelprojectItem2

class NovelSpider(scrapy.Spider):
    name = 'novel'
    allowed_domains = ['127.0.0.1']
    start_urls = ['http://127.0.0.1:8000/list/1_1.html']   # first page of the fantasy/magic category (全书网, Quanshu.net)

    #********** Begin **********#
    # 1. Define a callback that collects each book's URL from its '马上阅读' (Read Now) link
    def parse(self, response):
        book_urls = response.xpath('//li/a[@class="l mr10"]/@href').extract()
        three_book_urls = book_urls[0:3]  # only take the first 3 books
        for book_url in three_book_urls:
            yield Request(book_url, callback=self.parse_read)
    # 2. Define a callback for the novel's detail page: extract its info, yield it to the pipeline, then grab the '开始阅读' (Start Reading) URL and follow it into the chapter list
    def parse_read(self, response):
        item = NovelprojectItem()
        # novel title
        name = response.xpath('//div[@class="b-info"]/h1/text()').extract_first()
        # novel synopsis
        description = response.xpath('//div[@class="infoDetail"]/div/text()').extract_first()
        # serialization status
        state = response.xpath('//div[@class="bookDetail"]/dl[1]/dd/text()').extract_first()
        # author
        author = response.xpath('//div[@class="bookDetail"]/dl[2]/dd/text()').extract_first()
        item['name'] = name
        item['description'] = description
        item['state'] = state
        item['author'] = author
        yield item
        # get the URL behind the 'Start Reading' button and follow it into the chapter list
        read_url = response.xpath('//a[@class="reader"]/@href').extract()[0]
        yield Request(read_url, callback=self.parse_info)
    # 3. Define a callback for the chapter list page: extract every chapter title and yield it
    def parse_info(self, response):
        tablename = response.xpath('//div[@class="main-index"]/a[3]/text()').extract_first()
        titles = response.xpath('//div[@class="clearfix dirconone"]/li')
        for each in titles:
            item = NovelprojectItem2()
            title = each.xpath('.//a/text()').extract_first()
            item['tablename'] = tablename
            item['title'] = title
            yield item

At the test step, the number of tables in your mydb database is printed; the expected result is 4 tables.
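
A quick way to check this locally, assuming the same connection parameters as in the pipelines above:

import pymysql

connection = pymysql.connect(host='localhost', port=3306, user='root',
                             passwd='123123', db='mydb', charset='utf8')
with connection.cursor() as cursor:
    cursor.execute('SHOW TABLES')
    tables = [row[0] for row in cursor.fetchall()]
connection.close()
print(len(tables), tables)   # there should be 4 tables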
