Basic Python web-scraping lab: using the DBLP database to collect the 2023 paper acceptances and related author information for KDD, a top data-mining conference

Task1

Read the entire HTML content of the target page, decode it into a text string (the relevant methods of urllib.request can be used), and write it to a file named page.txt in UTF-8 encoding.

Code1

import urllib.request

# Fetch the KDD 2023 proceedings page and read the raw bytes
with urllib.request.urlopen('https://dblp.dagstuhl.de/db/conf/kdd/kdd2023.html') as response:
    html = response.read()

# Decode the bytes into a string (assumes UTF-8, which is what DBLP serves)
html_text = html.decode()

# Write the decoded page to page.txt in UTF-8
with open('page.txt', 'w', encoding='utf-8') as f:
    f.write(html_text)
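
DBLP serves this page as UTF-8, so the bare decode() above works. A slightly more defensive variant (a sketch using only the standard library) would honour the charset the server declares in its Content-Type header and fall back to UTF-8:

import urllib.request

with urllib.request.urlopen('https://dblp.dagstuhl.de/db/conf/kdd/kdd2023.html') as response:
    # Use the server-declared charset, defaulting to UTF-8 if none is given
    charset = response.headers.get_content_charset('utf-8')
    html_text = response.read().decode(charset)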

Task2

Open page.txt and observe how key elements such as the track names and paper titles are structured. Extract the name of every track from the text string and print it (the string methods split() and strip() can be useful).

Code2

import re

with open('page.txt', 'r', encoding='utf-8') as f:
    content = f.read()

# Use a regular expression to find every string between <h2 id="..."> and </h2> (the track headings)
matches = re.findall(r'<h2 id=".*?">(.*?)</h2>', content)

for match in matches:
    print(match)
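
Since the task statement hints at split() and strip(), here is an equivalent regex-free sketch for comparison; it relies on each heading sitting between an '<h2 id="...">' tag and its closing '</h2>':

for chunk in content.split('<h2 id=')[1:]:
    heading = chunk.split('</h2>')[0]        # e.g. '"...">Research Track Full Papers'
    print(heading.split('>', 1)[1].strip())  # drop the id attribute, keep the text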

Task3

As the output shows, the papers under "Research Track Full Papers" and "Applied Data Track Full Papers" make up the vast majority. Extract all paper information under these two tracks (the author list authors, the paper title title, and the first and last pages startPage and endPage), store it in a list of dictionaries in the format below, and print how many papers each of the two tracks contains. Then convert the list into a JSON object (the json package provides the relevant methods) and write it to a file named kdd23.json with 2-character indentation.

[
  {
    "track": "Research Track Full Papers",
    "papers": [
      {
        "authors": [
          "Florian Adriaens",
          "Honglian Wang",
          "Aristides Gionis"
        ],
        "title": "Minimizing Hitting Time between Disparate Groups with Shortcut Edges.",
        "startPage": "1",
        "endPage": "10"
      },
      ...
    ]
  },
  {
    "track": "Applied Data Track Full Papers",
    "papers": [
      {
        "authors": [
          "Florian Adriaens",
          "Honglian Wang",
          "Aristides Gionis"
        ],
        "title": "Minimizing Hitting Time between Disparate Groups with Shortcut Edges.",
        "startPage": "1",
        "endPage": "10"
      },
      ...
    ]
  }
]

Code3

import re
import json

with open('page.txt', 'r', encoding='utf-8') as f:
    content = f.read()

# List that will hold the information for both tracks
tracks = []

# Regular expressions for the track headings and the per-paper fields
track_pattern = re.compile(r'<h2 id=".*?">(.*?)</h2>')
author_pattern = re.compile(r'<span itemprop="name" title=".*?">(.*?)</span>')
title_pattern = re.compile(r'<span class="title" itemprop="name">(.*?)</span>')
page_pattern = re.compile(r'<span itemprop="pagination">(.*?)-(.*?)</span>')
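# For reference, each paper record in the page roughly follows this markup
# (reconstructed from the patterns above; the live page may differ in detail):
#   <cite ...>
#     <span itemprop="author" ...><a href="..." itemprop="url">
#       <span itemprop="name" title="Florian Adriaens">Florian Adriaens</span></a></span>
#     <span class="title" itemprop="name">Minimizing Hitting Time ...</span>
#     ... <span itemprop="pagination">1-10</span> ...
#   </cite>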

# Locate the "Research Track Full Papers" and "Applied Data Track Full Papers" sections;
# back up 50 characters so that each slice still contains its own opening <h2> tag
start1 = content.find('Research Track Full Papers') - 50
start2 = content.find('Applied Data Track Full Papers') - 50
# The next section, "Hands On Tutorials", bounds the second track
start3 = content.find('Hands On Tutorials') - 1

# Split each track's slice at "<cite" so that every piece starts with one paper's record
research_papers_content = re.split('<cite', content[start1:start2])[1:]
applied_papers_content = re.split('<cite', content[start2:start3])[1:]

# Trim each piece at "</cite>" so that only the paper's own record remains
def extract_cite_blocks(papers_content):
    papers = []
    for paper_content in papers_content:
        papers.append(re.split('</cite>', paper_content)[0])
    return papers

split_research_content = extract_cite_blocks(research_papers_content)
split_applied_content = extract_cite_blocks(applied_papers_content)

# Extract each paper's authors, title, startPage and endPage
def extract_paper_info(papers_content):
    papers = []
    for paper_content in papers_content:
        authors = author_pattern.findall(paper_content)
        titles = title_pattern.findall(paper_content)
        pages = page_pattern.search(paper_content)
        startPage, endPage = pages.groups()
        papers.extend([{'authors': authors, 'title': title, 'startPage': startPage, 'endPage': endPage} for title in titles])
    return papers

# 提取 "Research Track Full Papers" 的论文信息
research_track = track_pattern.search(content[start1:start2]).group(1)
research_papers = extract_paper_info(spit_research_content)

# 提取 "Applied Data Science Track Full Papers" 的论文信息
applied_track = track_pattern.search(content[start2:start3]).group(1)
#applied_papers = extract_paper_info(spit_applied_content)
applied_papers = extract_paper_info(spit_applied_content)
# 将论文信息存储到字典列表中
tracks.append({'track': research_track, 'papers': research_papers})
tracks.append({'track': applied_track, 'papers': applied_papers})

# Convert the list of dictionaries to JSON and write it with 2-character indentation
with open('kdd23.json', 'w', encoding='utf-8') as f:
    json.dump(tracks, f, indent=2)
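
Note that json.dump() escapes non-ASCII characters by default; passing ensure_ascii=False would keep accented author names human-readable in the file. As a quick sanity check (a sketch), the file can be reloaded and the counts printed again:

with open('kdd23.json', 'r', encoding='utf-8') as f:
    for track in json.load(f):
        print(track['track'], len(track['papers']))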

Task4

Based on the page text crawled earlier, for every author of the first 10 papers in each of these two tracks, crawl the following information: (1) the researcher's academic identifier orcID (collect all of them if there are several); (2) all papers the researcher has published from 2020 to the present (with authors, title, publication details publishInfo, and year). Convert the final result into a JSON object and write it to a file named researchers.json with 2-character indentation, using the storage format below:

[
  {
    "researcher": "Florian Adriaens",
    "orcID": [
      "0000-0001-7820-6883"
    ],
    "papers": [
      {
        "authors": [
          "Florian Adriaens",
          "Honglian Wang",
          "Aristides Gionis"
        ],
        "title": "Minimizing Hitting Time between Disparate Groups with Shortcut Edges.",
        "publishInfo": "KDD 2023: 1-10",
        "year": 2023
      },
      ...
    ]
  },
  ...
]   

Code4

import re
import requests
import json
import time
import random

# Open and read the saved proceedings page
with open('page.txt', 'r', encoding='utf-8') as f:
    content = f.read()

# Regular expressions for author links, ORCID entries, researcher names and publication years
author_link_pattern = re.compile(r'<span itemprop="author" itemscope itemtype="http://schema.org/Person"><a href="(.*?)" itemprop="url">')
orcID_pattern = re.compile(r'<img alt="" src="https://dblp.dagstuhl.de/img/orcid.dark.16x16.png" class="icon">(.{19})</a></li>')
researcher_pattern = re.compile(r'<head><meta charset="UTF-8"><title>dblp: (.*?)</title>')
year_pattern = re.compile(r'<span itemprop="datePublished">(.*?)</span>')
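# For reference, the patterns above target markup of roughly this shape on an
# author page (reconstructed from the regexes; the live pages may differ):
#   <title>dblp: Florian Adriaens</title>                               -> researcher name
#   ...orcid.dark.16x16.png" class="icon">0000-0001-7820-6883</a></li>  -> orcID (19 chars)
#   <span itemprop="datePublished">2023</span>                          -> year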

# Locate the "Research Track Full Papers" and "Applied Data Track Full Papers" sections
start1 = content.find('Research Track Full Papers')
start2 = content.find('Applied Data Track Full Papers')
end = len(content)

# Slice out the two sections and keep each one's first 10 "<cite" blocks (one per paper)
research_papers_content = content[start1:start2].split('<cite')[1:11]
applied_papers_content = content[start2:end].split('<cite')[1:11]

# Trim each block at "</cite>" so that only the paper's own record remains
def extract_cite_blocks(papers_content):
    papers = []
    for paper_content in papers_content:
        papers.append(re.split('</cite>', paper_content)[0])
    return papers

split_research_content = extract_cite_blocks(research_papers_content)
split_applied_content = extract_cite_blocks(applied_papers_content)

def extract_paper_info2(paper_content):
    final_result = []
    # Collect every string that sits outside the "<...>" tags
    outside_brackets = re.split(r'<[^>]*>', paper_content)
    # Drop every string containing 'http' together with everything before it
    flag = -1
    for i in range(len(outside_brackets)):
        if 'http' in outside_brackets[i]:
            flag = i
    for i in range(flag + 1, len(outside_brackets)):
        if outside_brackets[i]:
            final_result.append(outside_brackets[i])
    return final_result

# List that will hold the researcher information
researchers = []

# Visit every author link of each paper and collect the author's orcID and publications
for papers in [split_research_content, split_applied_content]:
    for paper in papers:
        author_links = author_link_pattern.findall(paper)
        for link in author_links:
            link_content = requests.get(link)
            response = link_content.text
            
            # Frequent requests can be treated as hostile and answered with a connection
            # reset ("ConnectionResetError: [WinError 10054]"), so close each response
            # after use and sleep for a random interval between requests
            link_content.close()
            time.sleep(random.randint(1, 3))
            
            researcher = researcher_pattern.search(response).group(1)
            orcID = orcID_pattern.findall(response)
            
            # Keep only the publications between the "2020 – today" header and the 2010s
            # header (note: find() returns -1 when a page has no 2010s section, which
            # would truncate the slice by one character)
            start = response.find('2020 &#8211; today')
            end = response.find('<header id="the2010s" class="hide-head h2">')
            # Slice that range and split it at "</cite>": one piece per publication
            papers_content = response[start:end].split('</cite>')[0:-1]
            
            papers_dict = []
            
            for paper_content in papers_content:
                split_content = extract_paper_info2(paper_content)
                year = int(year_pattern.search(paper_content).group(1))
                authors = []
                publishInfo = []
                for i in range(0, len(split_content) - 1):
                    # A piece followed by ", " or ":" is an author name
                    if split_content[i] != ", " and (split_content[i+1] == ", " or split_content[i+1] == ":"):
                        authors.append(split_content[i])
                    # The piece ending with '.' is the title; everything after it is the venue info
                    elif split_content[i][-1] == '.':
                        title = split_content[i]
                        for k in range(i+2, len(split_content)):
                            publishInfo.append(split_content[k])
                # Build one dictionary per publication
                paper_dict = {'authors': authors, 'title': title, 'publishInfo': ''.join(publishInfo), 'year': year}
                papers_dict.append(paper_dict)
            researchers.append({'researcher': researcher, 'orcID': orcID, 'papers': papers_dict})

# Convert the list of dictionaries to JSON and write it to "researchers.json"
with open('researchers.json', 'w', encoding='utf-8') as f:
    json.dump(researchers, f, indent=2)
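
The bare requests.get() call above works, but a shared Session with an explicit User-Agent, a timeout, and a retry loop is gentler on the server and more robust against the connection resets mentioned earlier. A minimal sketch (the fetch() helper and its parameters are illustrative, not part of the original solution):

import random
import time

import requests

session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0 (compatible; course-lab-crawler)'})

def fetch(url, retries=3):
    for attempt in range(retries):
        try:
            resp = session.get(url, timeout=10)
            resp.raise_for_status()
            return resp.text
        except requests.RequestException:
            time.sleep(2 ** attempt + random.random())  # exponential backoff with jitter
    return None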