Below are some common Python web-scraping examples, covering a range of scenarios and techniques:
- Simple page-content scraping

Example: scrape a page's title and meta description

```python
import requests
from bs4 import BeautifulSoup

url = "https://www.runoob.com/"
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

title = soup.title.string
# The description <meta> tag may be missing, so guard against None
meta = soup.find('meta', attrs={'name': 'description'})
description = meta['content'] if meta else ''

print(f"Title: {title}")
print(f"Description: {description}")
```
- Scraping images

Example: crawl an image site and download the pictures

```python
import os
import requests
from bs4 import BeautifulSoup

url = "https://unsplash.com/s/photos/nature"
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Create a folder to store the images
os.makedirs('images', exist_ok=True)

# Find all image tags and download each source
img_tags = soup.find_all('img')
for idx, img in enumerate(img_tags):
    img_url = img.get('src')
    if not img_url or not img_url.startswith('http'):
        continue  # skip tags without a usable absolute URL
    img_data = requests.get(img_url).content
    with open(f'images/img_{idx}.jpg', 'wb') as handler:
        handler.write(img_data)
```
- Scraping data and storing it

Example: scrape Douban Movie Top250 and save it to CSV

```python
import csv
import requests
from bs4 import BeautifulSoup

url = "https://movie.douban.com/top250"
# Many sites, Douban included, reject the default requests User-Agent,
# so send a browser-like one
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')

movies = []
for item in soup.select('.item'):
    title = item.select('.title')[0].get_text()
    rating = item.select('.rating_num')[0].get_text()
    # The first line of the info block names the director; keep the part before '/'
    info = item.select('.bd p')[0].get_text()
    director = info.split('\n')[1].strip().split('/')[0]
    movies.append([title, rating, director])

# Write the results to a CSV file
with open('douban_top250.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['Title', 'Rating', 'Director'])
    writer.writerows(movies)
```
- Scraping dynamic pages

Example: use Selenium to scrape a page that loads content with JavaScript

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
import time

# Launch the browser
driver = webdriver.Chrome()
driver.get("https://www.jd.com")

# Search for a product (笔记本电脑 = "laptop")
search_box = driver.find_element(By.ID, 'key')
search_box.send_keys('笔记本电脑')
search_box.send_keys(Keys.RETURN)
time.sleep(3)  # crude wait for the results; WebDriverWait is more robust

# Collect the product list
products = driver.find_elements(By.CLASS_NAME, 'gl-item')
for product in products:
    try:
        name = product.find_element(By.CLASS_NAME, 'p-name').text
        price = product.find_element(By.CLASS_NAME, 'p-price').text
        print(f"Product: {name}, Price: {price}")
    except Exception as e:
        print(e)

driver.quit()
```
- Scraping API data

Example: query the GitHub API

```python
import requests

# Search for the most-starred Python repositories
url = "https://api.github.com/search/repositories?q=language:python&sort=stars"
response = requests.get(url)
data = response.json()

for item in data['items']:
    name = item['name']
    description = item['description']
    stars = item['stargazers_count']
    print(f"Repo: {name}, Description: {description}, Stars: {stars}")
```
- Scraping data behind a login

Example: simulate a login and fetch a protected page

```python
import requests

login_url = "https://example.com/login"
data_url = "https://example.com/dashboard"

# Login credentials
payload = {
    'username': 'your_username',
    'password': 'your_password'
}

# A Session keeps cookies, so the login persists across requests
with requests.Session() as session:
    # Send the login request
    session.post(login_url, data=payload)
    # Fetch a page that requires authentication
    response = session.get(data_url)
    print(response.text)
```
Notes

- Respect site rules: before scraping, check the target site's robots.txt to see which pages may be crawled.
- Space out your requests: rapid-fire requests can overload the server or get your IP banned.
- Handle anti-scraping measures: if you get blocked, try proxy IPs, browser-like request headers (User-Agent), and similar techniques.
- Stay legal: make sure both the data you collect and the way you collect it comply with applicable laws.
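The robots.txt check mentioned above can be automated with the standard library's `urllib.robotparser`. A minimal sketch; the robots.txt rules and the crawler name "MyCrawler" below are made-up examples (in practice you would point `set_url()` at the site's real robots.txt and call `read()`):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration only
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Ask before fetching a given URL
print(parser.can_fetch("MyCrawler", "https://example.com/index.html"))   # True
print(parser.can_fetch("MyCrawler", "https://example.com/private/data")) # False
```

Rules are matched in order against the URL path, so the `Disallow: /private/` line wins for anything under that prefix.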
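For the request-interval and anti-scraping points, here is a minimal sketch of "polite" requests. The User-Agent string, proxy address, and delay values are all hypothetical placeholders, not recommendations for any particular site:

```python
import time
import requests

# Hypothetical values -- replace with your own
HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; MyCrawler/1.0)"}
PROXIES = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}


class Throttle:
    """Enforce a minimum interval between consecutive requests."""

    def __init__(self, interval: float):
        self.interval = interval
        self._last = 0.0

    def wait(self):
        # Sleep only for however much of the interval is still left
        elapsed = time.monotonic() - self._last
        if elapsed < self.interval:
            time.sleep(self.interval - elapsed)
        self._last = time.monotonic()


def polite_get(url: str, throttle: Throttle, use_proxy: bool = False):
    """GET a page with custom headers, pausing between calls."""
    throttle.wait()
    kwargs = {"headers": HEADERS, "timeout": 10}
    if use_proxy:
        kwargs["proxies"] = PROXIES
    return requests.get(url, **kwargs)
```

Calling `polite_get(url, throttle)` in a loop with a shared `Throttle(2.0)` guarantees at least two seconds between hits; pass `use_proxy=True` only if a proxy is actually running at the configured address.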
These examples should help you get started with Python web scraping; choose the techniques and tools that fit your actual needs.