A Beginner's Guide to Python Web Scraping
Python web scraping automates the collection of data from the web and is widely used for data gathering, market analysis, and similar tasks. The core steps are as follows:
1. Choosing core libraries

```python
import requests                # send HTTP requests
from bs4 import BeautifulSoup  # parse HTML
import pandas as pd            # store data
```
2. Basic scraping workflow

```python
# Send the request
response = requests.get("https://example.com/books")
response.encoding = 'utf-8'  # set the encoding

# Parse the HTML
soup = BeautifulSoup(response.text, 'html.parser')

# Extract data
book_titles = [h2.text for h2 in soup.select('.book-title')]
book_prices = [float(div.text.strip('¥'))
               for div in soup.select('.price')]

# Store the data
df = pd.DataFrame({'书名': book_titles, '价格': book_prices})
df.to_csv('book_data.csv', index=False)
```
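The `strip('¥')` call above assumes each price string is exactly a currency sign plus digits; real pages often add whitespace, labels, or non-numeric text. A more defensive parsing sketch (the sample strings below are invented for illustration):

```python
import re

def parse_price(text):
    """Extract the first decimal number from a price string, or None if absent."""
    match = re.search(r'\d+(?:\.\d+)?', text)
    return float(match.group()) if match else None

raw_prices = ['¥45.00', ' ¥ 32.5 ', '价格: ¥19', 'sold out']
prices = [parse_price(t) for t in raw_prices]
# Entries that come back as None can be dropped or logged before building the DataFrame
```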
3. Key techniques

- Countering anti-scraping measures:

  ```python
  headers = {
      'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
      'Cookie': 'sessionid=abc123'
  }
  response = requests.get(url, headers=headers)
  ```

- Handling dynamic pages (with Selenium):

  ```python
  from selenium import webdriver
  from selenium.webdriver.common.by import By

  driver = webdriver.Chrome()
  driver.get(url)
  dynamic_content = driver.find_element(By.CLASS_NAME, 'js-loaded-data').text
  ```
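Headers alone are often not enough: many sites also watch request frequency. A minimal throttling sketch using only the standard library (the class name and delay values here are illustrative, not from any particular framework):

```python
import time
import random

class Throttle:
    """Enforce a randomized minimum delay between consecutive requests."""

    def __init__(self, min_delay=1.0, max_delay=3.0):
        self.min_delay = min_delay
        self.max_delay = max_delay
        self.last_request = 0.0

    def wait(self):
        elapsed = time.monotonic() - self.last_request
        delay = random.uniform(self.min_delay, self.max_delay)
        if elapsed < delay:
            time.sleep(delay - elapsed)
        self.last_request = time.monotonic()

# Short delays just for demonstration; use 1-3 seconds against real sites
throttle = Throttle(min_delay=0.1, max_delay=0.2)
start = time.monotonic()
for _ in range(3):
    throttle.wait()  # in a real spider, requests.get(...) would follow each wait
elapsed = time.monotonic() - start
```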
4. Complete example: a Douban books spider

```python
def douban_spider():
    url = "https://book.douban.com/top250"
    res = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(res.text, 'lxml')
    books = []
    for item in soup.select('.item'):
        title = item.select_one('.pl2 a')['title']
        rating = item.select_one('.rating_nums').text
        books.append((title, float(rating)))
    return pd.DataFrame(books, columns=['书名', '评分'])

df = douban_spider()
df.to_excel('豆瓣图书TOP250.xlsx', index=False)
```
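Note that the Top 250 list is split across multiple pages. Assuming the site paginates via a `start` query parameter in steps of 25 (verify this against the live site before relying on it), the page URLs can be generated up front:

```python
BASE = "https://book.douban.com/top250"

# 10 pages of 25 entries: start=0, 25, 50, ..., 225
page_urls = [f"{BASE}?start={n}" for n in range(0, 250, 25)]
# Each page can then be fetched and parsed with the same logic as douban_spider()
```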
5. Caveats

- Follow the rules:
  - Check `robots.txt` (e.g. https://site.com/robots.txt)
  - Space out requests: `time.sleep(random.uniform(1, 3))` (requires `import time, random`)
- Exception handling:

  ```python
  try:
      response = requests.get(url, timeout=10)
  except (requests.ConnectionError, requests.Timeout) as e:
      print(f"Request failed: {e}")
  ```
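Many request failures are transient, so catching the exception is usually paired with retries. A generic retry sketch with exponential backoff (the function names are illustrative; it accepts any zero-argument callable, shown here with a stand-in instead of a live request):

```python
import time

def fetch_with_retry(fetch, retries=3, backoff=1.0):
    """Call fetch() up to `retries` times, doubling the pause after each failure."""
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: let the caller handle it
            time.sleep(backoff * (2 ** attempt))

# Demonstration with a stand-in that fails twice, then succeeds
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("temporary failure")
    return "ok"

result = fetch_with_retry(flaky, retries=3, backoff=0.01)
```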
- Data cleaning:

  ```python
  import re

  # Collapse runs of whitespace into single spaces
  clean_text = re.sub(r'\s+', ' ', raw_text).strip()
  ```
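Beyond stray whitespace, scraped lists frequently contain duplicates and empty entries. A small cleanup pass building on the same `re.sub` idea (the sample data is invented):

```python
import re

def clean_records(raw_titles):
    """Normalize whitespace, drop empty strings, and deduplicate preserving order."""
    seen = set()
    cleaned = []
    for t in raw_titles:
        t = re.sub(r'\s+', ' ', t).strip()
        if t and t not in seen:
            seen.add(t)
            cleaned.append(t)
    return cleaned

titles = clean_records(['  Python\n基础 ', 'Python 基础', '', '数据分析'])
```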
Tip: for complex sites, consider the Scrapy framework; its built-in asynchronous processing, item pipelines, and middleware can significantly improve efficiency.