Python Data Analysis: Web Scraping and APIs

HTML

What is HyperText Markup Language?

An HTML document contains layered content marked up by tags, with an opening tag and a matching closing tag that carries a '/'

"head": used for browser-specific information

"style": Cascading Style Sheets (CSS) used to style the HTML page

"body": used for the visible content

html
<html>
<head>
	<title>My First Web Page</title>
	<style>
		.highlight {
			background-color: lightblue;
		}
		div.section h1 {
			color: red;
			font-size: 24px;
		}
		#important {
			font-weight: bold;
		}
	</style>
</head>
<body>
	<h1>This is my first web page!</h1>
	<p>I'm excited to learn HTML.</p>
</body>
</html>

Data Retrieval

  1. Use requests.get() to download the entire content of a web page

  2. Use BeautifulSoup to navigate the page and extract the exact information (located between an opening tag and a closing tag)

  3. Walk through the page's hierarchy step by step to reach the target element

python
import requests
from bs4 import BeautifulSoup

# store the response from the website in a variable
response = requests.get("https://en.wikipedia.org/wiki/...")
# extract the actual content of the web page
content = response.content

# create a BeautifulSoup object
# 'html.parser' is the default parser for BeautifulSoup
soup = BeautifulSoup(content, 'html.parser')

# get the head element of the BeautifulSoup object
head = soup.head
# get the title element of the head element
t = head.title

print(t.text)

Selecting Elements

find_all()

Locate and extract every element in the page that carries a given tag

  • id: unique
python
# find_all() with an id attribute passed
links = soup.find_all('li', id = "toc-Computing")

for link in links:
	href = link.get('href') # get the URL
	text = link.text
	print(f"URL: {href}\\nText: {text}\\n")
  • class_: not unique
python
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = "https://en.wikipedia.org/wiki/List_of_Nobel_laureates"

response = requests.get(url)

soup = BeautifulSoup(response.text, "html.parser")

# scrape a table from web page
# find an HTML table with the specified class_ attribute
table = soup.find("table", class_ = "wikitable sortable")

# read the HTML table into a list of DataFrames
# select the first DataFrame from the list
df = pd.read_html(str(table))[0]
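As a hedged aside, pd.read_html behaves the same on an inline table string; the table below is made up for illustration, and wrapping the string in StringIO avoids the literal-string deprecation warning in newer pandas:

```python
from io import StringIO

import pandas as pd

# a minimal, made-up HTML table standing in for the scraped <table> element
html = """
<table class="wikitable sortable">
  <tr><th>Year</th><th>Laureate</th></tr>
  <tr><td>1901</td><td>Wilhelm Rontgen</td></tr>
  <tr><td>1902</td><td>Hendrik Lorentz</td></tr>
</table>
"""

# read_html returns one DataFrame per <table> it finds; take the first
df = pd.read_html(StringIO(html))[0]
print(df)
```

The <th> row is inferred as the header, so the DataFrame comes back with named columns.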

CSS selectors

  • tag selector
  • .class selector
  • #ID selector
python
# select all elements with the class "highlight"
highlighted_elements = soup.select(".highlight")
for element in highlighted_elements:
	print("Highlighted Element Text:", element.text)

# select <h2> elements inside a <div> with class "section"	
section_h2_elements = soup.select("div.section h2")
for element in section_h2_elements:
	print("Section <h2> Text:", element.text)
	
# select the element with the ID "important"
important_elements = soup.select("#important")
print("Important Element Text:", important_elements[0].text)
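Note that select() always returns a list, even when the selector matches a unique ID; BeautifulSoup also offers select_one(), which returns the first match or None. A minimal sketch on a made-up snippet reusing the class and ID names from the HTML example above:

```python
from bs4 import BeautifulSoup

# a small, hypothetical document with the same class/ID names as earlier
html = """
<div class="section"><h2>Intro</h2></div>
<p class="highlight">Note this.</p>
<p id="important">Read me.</p>
"""
soup = BeautifulSoup(html, "html.parser")

highlight_text = soup.select(".highlight")[0].text  # class selector, list of matches
h2_text = soup.select("div.section h2")[0].text     # descendant selector
important = soup.select_one("#important")           # first match, or None if absent

print(highlight_text, h2_text, important.text)
```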

API

An API is a way for software components to interact with one another

It can be used to pull data from external sources such as databases, web services, and cloud storage

Obtain an API key in order to send requests to the API

Endpoint

A URL used to access a specific resource or function of an API
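requests can show the full URL an endpoint call would produce without sending anything over the network; the base URL and parameters below mirror the OpenWeatherMap example later in this post, with your_api_key as a placeholder:

```python
import requests

# hypothetical endpoint: query parameters are appended after '?'
base_url = "https://api.openweathermap.org/data/2.5"
endpoint = "/weather"

req = requests.Request("GET", base_url + endpoint,
                       params={"q": "Singapore", "appid": "your_api_key"})
full_url = req.prepare().url  # the URL the HTTP client would actually send

print(full_url)
```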

Google Maps API

/geocode/json: get the latitude and longitude of a given address

/directions/json: get directions between two points

/places/nearby: get a list of places near a given location

GitHub API

/users/{username}/repos: get a list of repositories for a specified user

/repos/{owner}/{repo}/commits: get a list of commits for a repository

/repos/{owner}/{repo}/issues: get a list of issues for a repository

OpenWeatherMap API

/weather: get current weather data for a specified location

/forecast/hourly: get an hourly weather forecast for 4 days for a specified location

/history/city: get hourly historical weather data for a specified location
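The {username} and {owner}/{repo} placeholders in the endpoints above are path parameters: they are substituted into the URL path itself rather than passed after the ?. A minimal sketch using octocat as a stand-in GitHub username:

```python
# path parameters are filled into the URL path itself,
# unlike query parameters, which follow the '?'
username = "octocat"  # stand-in value for the {username} placeholder
url = f"https://api.github.com/users/{username}/repos"

print(url)
```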

Sending Requests to an API

Use an HTTP client: a software application that can send and receive HTTP requests

requests.get(url): sends an HTTP request to the URL passed in as an argument and retrieves data from the API endpoint

python
import requests

url = 'https://api.openweathermap.org/data/2.5/weather?q=Singapore&APPID=YOUR_API_KEY'

response = requests.get(url)

# check whether the request was successful
# 200 indicates success
# 400 indicates the request was invalid
# 500 indicates an error occurred on the server
if response.status_code == 200:
	# convert the response content into a dictionary or list
	weather_data = response.json()
	
	weather_description = weather_data['weather'][0]['description']
	print(weather_description)
else:
	print("An error occurred.")

JSON

JavaScript Object Notation

  • A lightweight data-interchange format (no extra tags required)
  • Text-based and platform-independent, popular for exchanging data between different applications
  • Composed of objects ({}) and arrays ([]), structured in a hierarchical tree format
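This tree structure is what makes chained indexing such as data['weather'][0]['description'] work; a minimal sketch on a made-up string shaped like a trimmed weather response:

```python
import json

# a made-up JSON string shaped like a trimmed OpenWeatherMap response
raw = '{"name": "Singapore", "weather": [{"description": "light rain"}], "main": {"temp": 300.15}}'

# json.loads parses the text into nested Python objects:
# JSON objects {} become dicts, JSON arrays [] become lists
data = json.loads(raw)

description = data["weather"][0]["description"]  # dict key -> list index -> dict key
temp = data["main"]["temp"]

print(description, temp)
```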

response data

python
import json
import requests
import pandas as pd

url = 'https://api.openweathermap.org/data/2.5/weather?q=Singapore&APPID=YOUR_API_KEY'

response = requests.get(url)

# parse the JSON response data into a dictionary
weather_data = json.loads(response.content)

# convert the JSON data into a DataFrame
df = pd.json_normalize(weather_data)
print(df)

pass URL parameters

  • URL parameters are appended after the ?
  • q=value: the specific query
  • appid=your_api_key: pass in your API key
  • &: separates the different parameters
python
import requests

URL = 'https://api.openweathermap.org/data/2.5/weather'

# parameters are stored as key-value pairs
PARAMETERS = {
		"q": "Singapore",
		"appid": "your_api_key"
}

# params requires a dictionary
response = requests.get(URL, params=PARAMETERS)

# parse the JSON response data into a dictionary
data = response.json()

# convert the temperature from Kelvin (the API default) to Celsius
temperature = data['main']['temp'] - 273.15

print(f"The current temperature in Singapore is {round(temperature, 2)} degrees Celsius.")