Advanced Python Crawling: Scraping Government-Bond Reverse Repos in Scrapy with Selenium + Firefox and Sending a QQ Mail Notification

1. Preface

Toward the end of each year, rates on government-bond reverse repos reliably spike, typically above the yield of a bank's T+0 cash-management products. It is therefore worth writing a script that launches on a schedule every day, scrapes the reverse-repo quotes so you can watch the rates in real time, and lets you place an order promptly while rates are high.

2. Environment Setup

For setup details, see the companion post "Advanced Python Crawling: Using Selenium in Scrapy to Drive the Firefox Browser and Scrape Page Data".

3. Code Implementation

  • items
python
import scrapy


class BondSpiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # bond code
    bond_code = scrapy.Field()
    # bond name
    bond_name = scrapy.Field()
    # latest price
    last_price = scrapy.Field()
    # change percentage
    rise_fall_rate = scrapy.Field()
    # change amount
    rise_fall_price = scrapy.Field()
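A `scrapy.Item` behaves like a dict restricted to its declared fields, which is why the pipeline later reads `item["bond_code"]` and friends by key. A plain-dict stand-in (the values below are hypothetical, for illustration only) shows the shape of one yielded row and how a pipeline-style formatter consumes it:

```python
# One reverse-repo row as the spider would yield it (sample values, not real quotes).
item = {
    "bond_code": "204001",
    "bond_name": "GC001",
    "last_price": "2.305",
    "rise_fall_rate": "+12.34%",
    "rise_fall_price": "+0.253",
}

# A pipeline reads the same keys to build one HTML table row:
row = ('<td>%s</td>' * 5) % (
    item["bond_code"], item["bond_name"], item["last_price"],
    item["rise_fall_rate"], item["rise_fall_price"],
)
print(row)
```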
  • middlewares
python
    def __init__(self):
        # ---------------- Firefox settings ---------------- #
        self.options = firefox_options()

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
        spider.driver = webdriver.Firefox(options=self.options)  # launch the browser to drive

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called

        spider.driver.get(request.url)
        return None
        
    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        response_body = spider.driver.page_source

        return HtmlResponse(url=request.url, body=response_body, encoding='utf-8', request=request)
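The pivotal line above is `response_body = spider.driver.page_source`: whatever Scrapy's downloader fetched is discarded, and the browser-rendered DOM is substituted in its place. A stdlib-only stand-in (`FakeDriver` and the helper below are illustrative, not part of the project) makes that substitution visible:

```python
class FakeDriver:
    # Mimics the two Selenium calls the middleware uses: get() and page_source.
    page_source = "<html><body><table>rendered rows</table></body></html>"

    def get(self, url):
        pass  # a real webdriver.Firefox would navigate and render the page here


class FakeSpider:
    driver = FakeDriver()


def selenium_body(request_url, downloader_body, spider):
    # Same decision the middleware makes: ignore the downloader's body
    # entirely and return whatever the browser rendered.
    spider.driver.get(request_url)
    return spider.driver.page_source


body = selenium_body("https://example.invalid/bonds", "<html>raw</html>", FakeSpider())
print("rendered rows" in body, "raw" in body)
```

One possible refinement, since Selenium already fetches the page: return an `HtmlResponse` directly from `process_request` instead of `None`, which short-circuits Scrapy's downloader and avoids fetching each URL twice.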
  • settings
python
SPIDER_MIDDLEWARES = {
   'bond_spider.middlewares.BondSpiderSpiderMiddleware': 543,
}
DOWNLOADER_MIDDLEWARES = {
   'bond_spider.middlewares.BondSpiderDownloaderMiddleware': 543,
}
ITEM_PIPELINES = {
   'bond_spider.pipelines.BondSpiderPipeline': 300,
}
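Since a single Selenium-driven Firefox instance serves every request, a few extra settings (my own suggestions, not part of the original setup) keep Scrapy from out-pacing the browser:

```python
# Suggested additions to settings.py (assumptions, adjust to taste):
CONCURRENT_REQUESTS = 1   # one browser instance, so serialize requests
DOWNLOAD_DELAY = 1        # be polite between page loads
ROBOTSTXT_OBEY = False    # the browser-driven fetch would not consult robots.txt anyway
```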
  • middlewares (Firefox options import)
python
from selenium.webdriver.firefox.options import Options as firefox_options


spider.driver = webdriver.Firefox(options=firefox_options())  # launch the browser to drive
  • spider
python
    def parse(self, response):
        # bond code
        bond_code = response.css("table.table_wrapper-table tbody tr td:nth-child(2) a::text").extract()
        # bond name
        bond_name = response.css("table.table_wrapper-table tbody tr td:nth-child(3) a::text").extract()
        # latest price
        last_price = response.css("table.table_wrapper-table tbody tr td:nth-child(4) span::text").extract()
        # change percentage
        rise_fall_rate = response.css("table.table_wrapper-table tbody tr td:nth-child(6) span::text").extract()
        # change amount
        rise_fall_price = response.css("table.table_wrapper-table tbody tr td:nth-child(5) span::text").extract()

        for i in range(len(bond_code)):
            item = BondSpiderItem()
            item["bond_code"] = bond_code[i]
            item["bond_name"] = bond_name[i]
            item["last_price"] = last_price[i]
            item["rise_fall_rate"] = rise_fall_rate[i]
            item["rise_fall_price"] = rise_fall_price[i]
            yield item

    def close(self, spider):
        spider.driver.quit()
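The `parse()` loop indexes five parallel lists with `range(len(...))`, which raises `IndexError` if any one column comes back shorter than the others. `zip()` expresses the same pairing and simply stops at the shortest list; the sample data here is made up for illustration:

```python
# Parallel column lists as response.css(...).extract() would return them
# (hypothetical sample values).
bond_code = ["204001", "204007"]
bond_name = ["GC001", "GC007"]
last_price = ["2.305", "2.410"]
rise_fall_rate = ["+12.34%", "+3.21%"]
rise_fall_price = ["+0.253", "+0.075"]

# zip() pairs the columns row by row; dicts stand in for BondSpiderItem here.
items = [
    {"bond_code": c, "bond_name": n, "last_price": p,
     "rise_fall_rate": r, "rise_fall_price": a}
    for c, n, p, r, a in zip(bond_code, bond_name, last_price,
                             rise_fall_rate, rise_fall_price)
]
print(len(items), items[0]["bond_name"])
```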
  • pipelines (persistence)
python
    def __init__(self):
        self.html = '<html><head><meta charset="utf-8"></head><body><table>'
        self.html = self.html + '<tr>'
        self.html = self.html + '<td>%s</td>' % "Code"
        self.html = self.html + '<td>%s</td>' % "Name"
        self.html = self.html + '<td>%s</td>' % "Latest price"
        self.html = self.html + '<td>%s</td>' % "Change %"
        self.html = self.html + '<td>%s</td>' % "Change amount"
        self.html = self.html + '</tr>'

    def process_item(self, item, spider):
        self.html = self.html + '<tr>'
        self.html = self.html + '<td>%s</td>' % item["bond_code"]
        self.html = self.html + '<td>%s</td>' % item["bond_name"]
        self.html = self.html + '<td>%s</td>' % item["last_price"]
        self.html = self.html + '<td>%s</td>' % item["rise_fall_rate"]
        self.html = self.html + '<td>%s</td>' % item["rise_fall_price"]
        self.html = self.html + '</tr>'

        return item

    def close_spider(self, spider):
        self.html = self.html + '</table></body></html>'
        self.send_email(self.html)

    def send_email(self, html):
        # mailbox account
        account = "xxx"
        # mailbox authorization code (not the login password)
        token = "xxx"
        # create an SMTP object over SSL, with the server host and port
        smtp = smtplib.SMTP_SSL('smtp.qq.com', 465)

        # log in to QQ Mail
        smtp.login(account, token)

        # build a simple HTML message from the body
        email_content = MIMEText(html, 'html', 'utf-8')

        # sender info
        email_content['From'] = 'xxx'
        # recipient display name
        email_content['To'] = 'Skills are built up day by day'
        # subject line
        email_content['Subject'] = 'A letter from code_space'

        # send the mail
        smtp.sendmail(account, 'xxx', email_content.as_string())
        # close the SMTP session
        smtp.quit()
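`send_email()` mixes message assembly with network I/O, which makes it hard to exercise without real credentials. The assembly half runs on its own (addresses below are placeholders); `MIMEText(html, 'html', 'utf-8')` is what makes QQ Mail render the table instead of showing raw markup:

```python
from email.mime.text import MIMEText

# Assemble the message only; no SMTP connection is opened here.
html = '<html><body><table><tr><td>204001</td><td>GC001</td></tr></table></body></html>'
msg = MIMEText(html, 'html', 'utf-8')
msg['From'] = 'sender@example.com'
msg['To'] = 'receiver@example.com'
msg['Subject'] = 'Reverse-repo rates'

# as_string() is the wire format that sendmail() transmits; the HTML
# content type and the headers are embedded in it (the body itself is
# base64-encoded because of the utf-8 charset).
raw = msg.as_string()
print('Content-Type: text/html' in raw, 'Subject: Reverse-repo rates' in raw)
```

Sending then takes three calls, exactly as in the pipeline: `smtplib.SMTP_SSL('smtp.qq.com', 465)`, `login(account, token)` with the QQ Mail authorization code, and `sendmail(sender, recipient, raw)`.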

4. Test Results
