Python Web Scraping Basics

1.1 Theory

In a browser, append /robots.txt to a site's root URL to see which paths the site allows crawlers to access.

For example, visit: https://www.csdn.net/robots.txt

```
User-agent: *
Disallow: /scripts
Disallow: /public
Disallow: /css/
Disallow: /images/
Disallow: /content/
Disallow: /ui/
Disallow: /js/
Disallow: /scripts/
Disallow: /article_preview.html*
Disallow: /tag/
Disallow: /?
Disallow: /link/
Disallow: /tags/
Disallow: /news/
Disallow: /xuexi/
```
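Rather than reading robots.txt by eye, the checks above can be automated with Python's standard-library urllib.robotparser. The sketch below parses a small made-up rule set directly from a list of lines so it runs offline; against a real site you would call rp.set_url("https://example.com/robots.txt") followed by rp.read() instead.

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly (offline); the rules here are a
# made-up subset of the CSDN example above, not fetched from the site
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /tag/",
    "Disallow: /scripts",
])

# can_fetch(user_agent, path) reports whether a path may be crawled
allowed_home = rp.can_fetch("*", "/")           # homepage is not disallowed
allowed_tag = rp.can_fetch("*", "/tag/python")  # blocked by "Disallow: /tag/"
print(allowed_home, allowed_tag)
```

Checking can_fetch before each request keeps a crawler within the ranges the site has published.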

Use the Python Requests library to send HTTP (Hypertext Transfer Protocol) requests.

Use the Python Beautiful Soup library to parse the HTML that comes back.

[Figure: HTTP request structure]

[Figure: HTTP response structure]
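The request/parse round trip described above can be sketched in a few lines. To keep it runnable without network access, the snippet below parses a small made-up HTML fragment that mimics the book-list markup used in the practice sections, instead of fetching a live page.

```python
from bs4 import BeautifulSoup

# Static stand-in for a fetched page (made up for illustration);
# with a live site this string would be response.text from requests.get
html = """
<html><body>
  <h3><a title="Example Book">Example Book</a></h3>
  <p class="price_color">£51.77</p>
</body></html>
"""

# html.parser is the parser built into the standard library
soup = BeautifulSoup(html, "html.parser")

# Extract the price text and the title inside the first <h3><a>
price = soup.find("p", attrs={"class": "price_color"}).string
title = soup.find("h3").find("a").string
print(title, price)
```

The same find/find_all calls work identically on real response bodies, which is what the sections below do.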

1.2 Practice code [fetch prices & book titles]

```python
import requests
# Parse HTML
from bs4 import BeautifulSoup

# Make the program look like a browser request
head = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
}
response = requests.get("http://books.toscrape.com/", headers=head)
# Specify the encoding if needed
# response.encoding = 'gbk'
if response.ok:
    # file = open(r'C:\Users\root\Desktop\Bug.html', 'w')
    # file.write(response.text)
    # file.close()
    content = response.text
    # html.parser tells Beautiful Soup to use the built-in HTML parser
    soup = BeautifulSoup(content, "html.parser")

    # Fetch the prices
    all_prices = soup.find_all("p", attrs={"class": "price_color"})
    for price in all_prices:
        print(price.string[2:])

    # Fetch the titles
    all_titles = soup.find_all("h3")
    for title in all_titles:
        # Get the first <a> element under each <h3>
        print(title.find("a").string)
else:
    print(response.status_code)
```

1.3 Practice code [fetch the Top 250 movie titles]

```python
import requests
# Parse HTML
from bs4 import BeautifulSoup

head = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
}
# Fetch the Top 250 movie titles, 25 per page
for i in range(0, 250, 25):
    response = requests.get(f"https://movie.douban.com/top250?start={i}", headers=head)
    if response.ok:
        content = response.text
        soup = BeautifulSoup(content, "html.parser")
        all_titles = soup.find_all("span", attrs={"class": "title"})
        for title in all_titles:
            # Skip the "/ original title" spans
            if "/" not in title.string:
                print(title.string)
    else:
        print(response.status_code)
```

1.4 Practice code [download images]

```python
import requests
# Parse HTML
from bs4 import BeautifulSoup

head = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
}
response = requests.get("https://www.maoyan.com/", headers=head)
if response.ok:
    soup = BeautifulSoup(response.text, "html.parser")
    for img in soup.find_all("img", attrs={"class": "movie-poster-img"}):
        # Lazy-loaded posters keep the real URL in data-src
        img_url = img.get('data-src')
        alt = img.get('alt')
        path = 'img/' + alt + '.jpg'  # the img/ directory must already exist
        res = requests.get(img_url)
        with open(path, 'wb') as f:
            f.write(res.content)
else:
    print(response.status_code)
```

1.5 Practice code [58pic (Qiantu) images: scrape and download]

```python
import requests
# Parse HTML
from bs4 import BeautifulSoup


# Scrape images from 58pic (Qiantu)
head = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
}
# response = requests.get("https://www.58pic.com/piccate/53-0-0.html", headers=head)
# response = requests.get("https://www.58pic.com/piccate/53-598-2544.html", headers=head)
response = requests.get("https://www.58pic.com/piccate/53-527-1825.html", headers=head)
if response.ok:
    soup = BeautifulSoup(response.text, "html.parser")
    for img in soup.find_all("img", attrs={"class": "lazy"}):
        # data-original holds a protocol-relative URL, so prepend https:
        img_url = "https:" + img.get('data-original')
        alt = img.get('alt')
        path = 'imgqiantuwang/' + str(alt) + '.jpg'  # directory must already exist
        res = requests.get(img_url)
        with open(path, 'wb') as f:
            f.write(res.content)
else:
    print(response.status_code)
```