Python Data Analysis: Web Scraping and APIs

HTML

What is HyperText Markup Language (HTML)?

It consists of nested layers of content marked by tags: each element has an opening tag and a matching closing tag that begins with '/'.

"head": used for browser-specific information

"style": Cascading Style Sheets (CSS) used to style the HTML page

"body": used for the visible content

html
<html>
<head>
	<title>My First Web Page</title>
	<style>
		.highlight {
			background-color: lightblue;
		}
		div.section h1 {
			color: red;
			font-size: 24px;
		}
		#important {
			font-weight: bold;
		}
		</style>
</head>
<body>
	<h1>This is my first web page!</h1>
	<p>I'm excited to learn HTML.</p>
</body>
</html>

Retrieving Data

  1. Use requests.get() to download the full contents of a web page

  2. Use BeautifulSoup to navigate the page and extract the exact information (located between an opening tag and its closing tag)

  3. Step through the page's hierarchy level by level to reach the target element

python
import requests
from bs4 import BeautifulSoup

# store the response from the website in a variable
response = requests.get("https://en.wikipedia.org/wiki/...")
# extract the actual content of the web page
content = response.content

# create a BeautifulSoup object
# 'html.parser' is the default parser for BeautifulSoup
soup = BeautifulSoup(content, 'html.parser')

# get the head element of the BeautifulSoup object
head = soup.head
# get the title element of the head element
t = head.title

print(t.text)

Selecting Elements

find_all()

Locates and extracts all elements in the page that carry a given tag

  • id: unique
python
# find_all() with an id attribute passed
# (an id is unique, so this matches at most one element)
links = soup.find_all('li', id = "toc-Computing")

for link in links:
	a = link.find('a')   # the URL lives on the nested <a> tag, not the <li>
	href = a.get('href') # get the URL
	text = link.text
	print(f"URL: {href}\nText: {text}\n")
  • class_: not unique (the trailing underscore avoids clashing with Python's class keyword)
python
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = "https://en.wikipedia.org/wiki/List_of_Nobel_laureates"

response = requests.get(url)

soup = BeautifulSoup(response.text, "html.parser")

# scrape a table from web page
# find an HTML table with the specified class_ attribute
table = soup.find("table", class_ = "wikitable sortable")

# read the HTML table into a DataFrame
# select the first DataFrame from the list
df = pd.read_html(str(table))[0]
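As a side note, pd.read_html can also parse an HTML string directly (newer pandas versions want it wrapped in StringIO); a tiny inline table stands in for the Wikipedia one here:

```python
from io import StringIO

import pandas as pd

# a minimal stand-in for the Wikipedia table scraped above
html = """
<table class="wikitable sortable">
  <tr><th>Year</th><th>Laureate</th></tr>
  <tr><td>1901</td><td>Wilhelm Rontgen</td></tr>
</table>
"""

# read_html returns a list of DataFrames, one per table found
df = pd.read_html(StringIO(html))[0]
print(df)
```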

CSS selectors

  • tag selector
  • .class selector
  • #ID selector
python
# select all elements with the class "highlight"
highlighted_elements = soup.select(".highlight")
for element in highlighted_elements:
	print("Highlighted Element Text:", element.text)

# select <h2> elements inside a <div> with class "section"	
section_h2_elements = soup.select("div.section h2")
for element in section_h2_elements:
	print("Section <h2> Text:", element.text)
	
# select the element with the ID "important"
important_elements = soup.select("#important")
print("Important Element Text:", important_elements[0].text)
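A related helper worth knowing: select_one returns the first match or None, which avoids the IndexError that important_elements[0] would raise when nothing matches. A minimal sketch with a small inline snippet of hypothetical HTML:

```python
from bs4 import BeautifulSoup

html = '<p id="important">Deadline: Friday</p><p class="highlight">Note</p>'
soup = BeautifulSoup(html, "html.parser")

# select_one returns the first matching element, or None if there is none
el = soup.select_one("#important")
if el is not None:
	print("Important Element Text:", el.text)
```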

API

An API is a way for software components to interact with each other

It can be used to pull data from external sources such as databases, web services, and cloud storage

Obtain an API key to send requests to the API

Endpoint

A URL used to access a specific resource or function of an API

Google Maps API

/geocode/json: get the latitude and longitude of a given address

/directions/json: get directions between two points

/places/nearby: get a list of places near a given location

GitHub API

/users/{username}/repos: get a list of repositories for a specified user

/repos/{owner}/{repo}/commits: get a list of commits for a repository

/repos/{owner}/{repo}/issues: get a list of issues for a repository

OpenWeatherMap API

/weather: get current weather data for a specified location

/forecast/hourly: get an hourly weather forecast for 4 days at a specified location

/history/city: get hourly historical weather data for a specified location
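The endpoint paths above are templates: the parts in braces are placeholders you fill in before sending the request. A small sketch using the GitHub repos endpoint (octocat is just an example username):

```python
# base URL of the API plus an endpoint template
GITHUB_API = "https://api.github.com"
endpoint = "/users/{username}/repos"

# fill in the placeholder to get a concrete request URL
url = GITHUB_API + endpoint.format(username="octocat")
print(url)  # → https://api.github.com/users/octocat/repos

# the actual request would then be:
# response = requests.get(url)
```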

Sending Requests to an API

Use an HTTP client: a software application that can send and receive HTTP requests

requests.get(url): sends an HTTP request to the URL passed in as an argument and retrieves data from the API endpoint

python
import requests

url = 'https://api.openweathermap.org/data/2.5/weather?q=Singapore&APPID=YOUR_API_KEY'

response = requests.get(url)

# check whether the request was successful
# 200 indicates success
# 400 indicates the request was invalid
# 500 indicates an error occurred on the server
if response.status_code == 200:
	# convert response content into a dictionary or list
	weather_data = response.json()
	
	weather_description = weather_data['weather'][0]['description']
	print(weather_description)
else:
	print("An error occurred.")
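As an aside, the requests library also exposes named constants for these status codes under requests.codes, which can read more clearly than bare numbers (a hedged sketch; the aliases below come from requests' status-code registry):

```python
import requests

# named aliases for common HTTP status codes
print(requests.codes.ok)            # 200: success
print(requests.codes.bad_request)   # 400: the request was invalid
print(requests.codes.server_error)  # 500: an error occurred on the server

# the check above can then be written as:
# if response.status_code == requests.codes.ok: ...
```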

JSON

JavaScript Object Notation

  • A lightweight data-interchange format (no extra tags needed)
  • Text-based and platform-independent, making it popular for exchanging data between applications
  • Composed of objects ({}) and arrays ([]), structured in a hierarchical tree-like format
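The object/array structure is easy to see by parsing a small JSON document by hand (the string below is illustrative, modeled on the weather response used later):

```python
import json

# a tiny JSON document: objects {} become dicts, arrays [] become lists
text = '{"weather": [{"description": "light rain"}], "main": {"temp": 300.15}}'
data = json.loads(text)

# navigate the resulting tree with dict keys and list indices
print(data["weather"][0]["description"])  # → light rain

# serialize back to JSON text with pretty printing
print(json.dumps(data, indent=2))
```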

response data

python
import json
import requests
import pandas as pd

url = 'https://api.openweathermap.org/data/2.5/weather?q=Singapore&APPID=YOUR_API_KEY'

response = requests.get(url)

# parse the JSON response data into a dictionary
weather_data = json.loads(response.content)

# convert JSON data into a DataFrame
df = pd.json_normalize(weather_data)
print(df)

Passing URL parameters

  • Add URL parameters after the ?
  • q=value: the specific query
  • appid=your_api_key: passes in your API key
  • &: separates different parameters
python
import requests

URL = 'https://api.openweathermap.org/data/2.5/weather'

# parameters are stored as key-value pairs
# (no "units" parameter, so the API returns temperatures in Kelvin)
PARAMETERS = {
		"q": "Singapore",
		"appid": "your_api_key"
}

# params require a dictionary
response = requests.get(URL, params=PARAMETERS)

# parse the JSON response data into a dictionary
data = response.json()

# convert the temperature from Kelvin to Celsius
temperature = data['main']['temp'] - 273.15

print(f"The current temperature in Singapore is {round(temperature, 2)} degrees Celsius.")
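To see the full URL that requests assembles from the params dictionary, you can build a prepared request without sending anything (a small sketch reusing the same hypothetical key):

```python
import requests

# prepare the request instead of sending it, then inspect the final URL
req = requests.Request(
	"GET",
	"https://api.openweathermap.org/data/2.5/weather",
	params={"q": "Singapore", "appid": "your_api_key"},
).prepare()

print(req.url)
# → https://api.openweathermap.org/data/2.5/weather?q=Singapore&appid=your_api_key
```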