Using Scrapy: From Data Mining to Monitoring and Automated Testing

Scrapy is a fast, high-level web crawling and web scraping framework released under the BSD license. It is used to crawl websites and extract structured data from their pages, and it serves a wide range of purposes, from data mining to monitoring and automated testing.

Installing Scrapy

bash
pip install scrapy

Spider Example

Save the example code to a file named quotes_spider.py:

python
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        "https://quotes.toscrape.com/tag/humor/",
    ]

    def parse(self, response):
        # Extract the author and text of every quote on the page.
        for quote in response.css("div.quote"):
            yield {
                "author": quote.xpath("span/small/text()").get(),
                "text": quote.css("span.text::text").get(),
            }

        # Follow the "next page" link, if any, and parse it the same way.
        next_page = response.css('li.next a::attr("href")').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

Running the Spider

bash
scrapy runspider quotes_spider.py -o quotes.jsonl

The output looks like this:

bash
scrapy runspider quotes_spider.py -o quotes.jsonl
2024-05-01 22:10:19 [scrapy.utils.log] INFO: Scrapy 2.11.1 started (bot: scrapybot)
2024-05-01 22:10:19 [scrapy.utils.log] INFO: Versions: lxml 4.6.3.0, libxml2 2.11.6, cssselect 1.2.0, parsel 1.9.1, w3lib 2.1.2, Twisted 24.3.0, Python 3.10.13 (main, Nov  9 2023, 03:04:43) [Clang 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386, pyOpenSSL 24.1.0 (OpenSSL 1.1.1t-freebsd  7 Feb 2023), cryptography 42.0.5, Platform FreeBSD-13.2-RELEASE-p10-amd64-64bit-ELF
2024-05-01 22:10:19 [scrapy.addons] INFO: Enabled addons:
[]
2024-05-01 22:10:19 [py.warnings] WARNING: /usr/home/skywalk/py310/lib/python3.10/site-packages/scrapy/utils/request.py:254: ScrapyDeprecationWarning: '2.6' is a deprecated value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting.

It is also the default value. In other words, it is normal to get this warning if you have not defined a value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting. This is so for backward compatibility reasons, but it will change in a future version of Scrapy.

See the documentation of the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting for information on how to handle this deprecation.
  return cls(crawler)

2024-05-01 22:10:19 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.pollreactor.PollReactor
2024-05-01 22:10:19 [scrapy.extensions.telnet] INFO: Telnet Password: 18295d3f4c994eee
2024-05-01 22:10:19 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2024-05-01 22:10:19 [scrapy.crawler] INFO: Overridden settings:
{'SPIDER_LOADER_WARN_ONLY': True}
2024-05-01 22:10:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-05-01 22:10:20 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-05-01 22:10:20 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2024-05-01 22:10:20 [scrapy.core.engine] INFO: Spider opened

Once this finishes, the quotes.jsonl file will contain a list of the quotes in JSON Lines format, with the text and author of each, looking like this:

json
{"author": "Jane Austen", "text": "\u201cThe person, be it gentleman or lady, who has not
 pleasure in a good novel, must be intolerably stupid.\u201d"}
{"author": "Steve Martin", "text": "\u201cA day without sunshine is like, you know, night
.\u201d"}
{"author": "Garrison Keillor", "text": "\u201cAnyone who thinks sitting in church can mak
e you a Christian must also think that sitting in a garage can make you a car.\u201d"}
{"author": "Jim Henson", "text": "\u201cBeauty is in the eye of the beholder and it may b
e necessary from time to time to give a stupid or misinformed beholder a black eye.\u201d
"}

Monitoring

Log monitoring: Scrapy provides a powerful logging system. By reviewing the logs you can monitor the running state of the spider, and log analysis can also reveal the state of the site being monitored.
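
For example, a spider can write its own log entries and direct the log to a file through per-spider settings. This is only a sketch: the file name, log level, and message text below are illustrative choices, while custom_settings and self.logger are standard Scrapy features.

python
import scrapy


class MonitoredQuotesSpider(scrapy.Spider):
    name = "monitored_quotes"
    start_urls = ["https://quotes.toscrape.com/tag/humor/"]

    # Per-spider settings: send the log to a file and keep INFO and above.
    # The file name and level here are illustrative choices.
    custom_settings = {
        "LOG_FILE": "quotes_spider.log",
        "LOG_LEVEL": "INFO",
    }

    def parse(self, response):
        # Record the status of each fetched page so the log can later be
        # scanned for problems with the spider or with the target site.
        self.logger.info("Fetched %s (status %d)", response.url, response.status)
        for quote in response.css("div.quote"):
            yield {
                "author": quote.xpath("span/small/text()").get(),
                "text": quote.css("span.text::text").get(),
            }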

Automated Testing

You can write tests for a spider by mocking HTTP responses and verifying that the spider parses the data correctly.
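
A minimal sketch of such a test, assuming the spider above was saved as quotes_spider.py; the HTML fragment and test names are made up for illustration. It builds a fake HtmlResponse, feeds it to parse(), and checks the yielded items, with no network access:

python
import unittest

from scrapy.http import HtmlResponse

from quotes_spider import QuotesSpider

# A hand-written page fragment shaped like the quotes.toscrape.com markup.
FAKE_HTML = b"""
<html><body>
  <div class="quote">
    <span class="text">"A day without sunshine is like, you know, night."</span>
    <span>by <small class="author">Steve Martin</small></span>
  </div>
</body></html>
"""


class QuotesSpiderTest(unittest.TestCase):
    def test_parse_extracts_text_and_author(self):
        url = "https://quotes.toscrape.com/tag/humor/"
        # Build a fake HTTP response instead of downloading the page.
        response = HtmlResponse(url=url, body=FAKE_HTML, encoding="utf-8")
        items = list(QuotesSpider().parse(response))
        self.assertEqual(len(items), 1)
        self.assertEqual(items[0]["author"], "Steve Martin")
        self.assertIn("sunshine", items[0]["text"])


if __name__ == "__main__":
    unittest.main()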

P.S. scapy (not to be confused with Scrapy) is a packet-manipulation tool. To learn it, see the Zhihu article "通过摆弄python scapy模块 了解网络模型--Get your hands dirty!".
