I. Web Scraping Theory
A web crawler: an automated program that requests websites and extracts data.
A web crawler (also known as a web spider or web robot) simulates a client sending network requests and receiving the responses; it is a program that automatically scrapes information from the Internet according to a set of rules.
In principle, anything a browser (the client) can do, a crawler can do too. In other words, everything can be crawled: if you can see it, you can scrape it.
What kinds of data can a crawler capture?
- Web page text
- Images
- Video and audio
- Anything else (whatever can be requested can be retrieved)
II. The requests Module
Purpose: send network requests and obtain the response data.
Official documentation: https://requests.readthedocs.io/zh_CN/latest/index.html
Requests is an HTTP library written in Python on top of urllib, released under the Apache2 open-source license.
It is more convenient than urllib and saves a great deal of work: a library that fully covers the needs of HTTP testing.
In one sentence: Requests is an HTTP request library written in Python that makes it easy to simulate a browser sending HTTP requests from code.
Installation command:
pip install requests -i https://pypi.tuna.tsinghua.edu.cn/simple
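To confirm the installation worked, a quick sanity check (the printed version will vary with your environment):
import requests
print(requests.__version__)  # e.g. 2.31.0, depending on what pip installed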
1. Sending Requests
# https://www.baidu.com/
import requests
response = requests.get('https://www.baidu.com/')
print(response)  # the Response object; prints as <Response [200]>
print(response.text)  # view the response body as text
print(type(response.text))  # data type of the response body (str)
print(response.status_code)  # view the response status code
print(response.url)  # view the response URL
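Note: requests guesses the text encoding from the response headers, so Chinese pages such as Baidu's may come back garbled. A minimal sketch of the usual fix, letting requests detect the charset from the body itself:
import requests
response = requests.get('https://www.baidu.com/')
response.encoding = response.apparent_encoding  # detect the charset from the body instead of the headers
print(response.text)  # Chinese characters should now render correctly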
Various request methods
import requests
requests.get('http://httpbin.org/get')  # GET request
requests.post('http://httpbin.org/post')  # POST request
requests.put('http://httpbin.org/put')  # PUT request
requests.delete('http://httpbin.org/delete')  # DELETE request
requests.head('http://httpbin.org/get')  # HEAD request (headers only, no body)
requests.options('http://httpbin.org/get')  # OPTIONS request
1. GET requests
# First approach: build the query string into the URL by hand
#https://www.baidu.com/s?wd=%E9%97%B9%E9%97%B9&base_query=%E9%97%B9%E9%97%B9&pn=10&oq=%E9%97%B9%E9%97%B9&ie=utf-8&usm=1&rsv_pq=a4f4e52200027b13&rsv_t=82600eHOUMYEzX16IwoPl%2BnK%2FnzM6jy5R9dFD9dBFEwqYVTYCFyzaCudbQA
url = 'http://httpbin.org/get?age=12&name=naonao'  # query parameters are joined with &
r = requests.get(url)
print(r.status_code)
print(r.text)
# Second approach: pass the parameters as a dict
data = {
    'name': 'lisi',
    'age': 10
}
url = 'http://httpbin.org/get'
r = requests.get(url, params=data)  # params: carries the GET request parameters
print(r.text)
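The params keyword URL-encodes the dict and appends it to the URL as a query string, which you can verify by printing the final URL:
print(r.url)  # http://httpbin.org/get?name=lisi&age=10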
2. POST requests
url ='http://httpbin.org/post'
d = {'lisi':10}
r = requests.post(url, data=d)  # data: carries the POST request parameters (sent as a form body)
print(r.text)
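Besides form data via data=, requests can also send a JSON body with the json keyword, which serializes the dict and sets the Content-Type header automatically. A minimal sketch:
r = requests.post('http://httpbin.org/post', json={'lisi': 10})  # body is serialized as JSON
print(r.json()['json'])  # httpbin echoes the parsed JSON body back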
3. Getting JSON data
import json
url = 'http://httpbin.org/get'
r = requests.get(url)
# print(r.status_code)  # view the response status code
a = r.text
# print(a)
# print(type(a))  # check the data type (str)
# use the built-in json module
dict_data = json.loads(a)  # str -> dict
print(dict_data)
print(type(dict_data))
res = dict_data['url']
print(res)
# res = dict_data['Host']  # wrong: raises KeyError, 'Host' is nested under 'headers'
res = dict_data['headers']['Host']
print(res)
json_data = r.json()  # shortcut: requests parses the JSON body for us
print(json_data)
print(type(json_data))
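Note that r.json() raises an exception when the body is not valid JSON, so a guarded call is safer. A hedged sketch:
try:
    json_data = r.json()
except ValueError:  # raised when the response body is not valid JSON
    json_data = None
    print('response is not JSON')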
4. Getting binary data with .content
url = 'https://www.baidu.com/img/baidu_jgylogo3.gif'
r = requests.get(url)
print(r.content)  # content: the response body as raw bytes
print(type(r.content))  # <class 'bytes'>
with open('bdu.gif', 'wb') as f:  # the URL serves a GIF, so save it with a .gif extension
    f.write(r.content)
"""
bytes类型是指一堆字节的集合,在python中以b开头的字符串都是bytes类型
Bytes类型的作用:
1, 在python中, 数据转成2进制后不是直接以0101010的形式表示的,而是用一种叫bytes(字节)的类型来表示
2,计算机只能存储2进制, 我们的字符、图片、视频、音乐等想存到硬盘上,也必须以正确的方式编码成2进制后再存。
记住一句话:在python中,字符串必须编码成bytes后才能存到硬盘上
"""
5. Adding headers
# target site -- Zhihu: https://www.zhihu.com/explore
url = 'https://www.zhihu.com/explore'
# build the spoofed identity information
h = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36'
}
r = requests.get(url, headers=h)  # headers keyword: carries the disguise parameters
print(r.status_code)
print(r.text)
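Without a custom User-Agent, requests identifies itself as python-requests/x.y.z, which many sites block. You can check what was actually sent by inspecting the prepared request's headers:
print(r.request.headers)  # the headers that went out, including the spoofed User-Agent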
2. The Response
1. Response attributes
# target site: http://www.jianshu.com
import requests
url = 'http://www.jianshu.com'
h = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}
r = requests.get(url, headers=h, allow_redirects=False)
print(r.status_code)  # view the response status code
# view the response headers
print(r.headers)
# view the URL
print(r.url)
# check whether the page was redirected
print(r.history)
# allow_redirects=False disables redirects (the default is True)
# an empty history means no redirect happened
2. Status code reference
200: request succeeded
301, 302: the request was redirected
404: page not found
500, 502, 503: server-side error
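Rather than comparing codes by hand, requests offers raise_for_status(), which raises an HTTPError for any 4xx/5xx response. A minimal sketch:
import requests

r = requests.get('http://httpbin.org/status/404')
try:
    r.raise_for_status()  # raises requests.exceptions.HTTPError for 4xx/5xx
except requests.exceptions.HTTPError as e:
    print('request failed:', e)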
3. Advanced Operations
Session persistence
HTTP/HTTPS is a stateless protocol: it keeps no memory of past transactions, so every request is handled as an independent event.
The purpose of session persistence: carry certain parameters across requests.
Cookies and sessions exist precisely to work around this statelessness.
(1) Maintaining a session with cookies
# A cookie only preserves a state (the server can tell whether you are logged in or not); the actual user credentials are the account + password
'''
Use case:
When a site requires login before it can be accessed, the crawler must carry the post-login information (the cookie).
Pros: you can request pages that require login.
Cons: it greatly increases the chance of triggering anti-crawler measures (rotate accounts to mitigate this).
'''
import requests
# when building the spoofed identity, the dict may hold multiple entries
head = {
'cookie': 'UM_distinctid=1837e7ffbf6dda-0360993970b77c-78565470-1fa400-1837e7ffbf7fa1; CNZZDATA1279807957=36991445-1664272379-null%7C1664272379; _uab_collina=166427466049560581446295; ssxmod_itna=eqUxBDuDnGeYw4Dq0dc7tD9mat+8qY5mQwoAQQD/KDfO4iNDnD8x7YDvIAaQAC+BAaeqeW=exIiDOrQ4Liq4EbmfvWUDB3DEx0=etI74iiBDCeDIDWeDiDG4GmS4GtDpxG=Dj0FUZMUxi3Dbh=Df4DmDGY9QqDgDYQDGTI6D7QDIw6QiY49bCLKLLRgiY2xKG=DjMbD/fPFMQRFs=+ZbjnTPWPuDB=zxBjMfwXUfeDHzLNlelePWx+o/2mF7xhxQYeoYrh3tPQ4BY4jiAp=RGDE04wcwDDAKY+d0bD==; ssxmod_itna2=eqUxBDuDnGeYw4Dq0dc7tD9mat+8qY5mQwoAQG9b=QDBk7De7P5vA8vx5G8CqvxAP5uiBRGi+tGhYO=HOetAYQP4kUmGujkh3ZoXUhqt1g7pUjb/E8nwv8QSi91kU9A9G71OXCSLWO58e=yb4K7HT6INeUv1MEAse+FmNhu8d=oaa8AiF6npbSmrCS4hs8SHOt3sDUWEN/nhX/p45oWW1F7fMKLU/aOCcg8Lyajs6oLfPTmrZryUvx=8gcm0SZerB7WRdxWihgCHghf8Mc1LBjmCAt9zjVegGgOl4P9bA5wiD6ebGTc7jRDCrzRcNjyTlvxcHiBH29wMzbqyirR0WBhqQ0bFuiWvd4+8jGevm5gBtm7m3Te/Y4lyD4xbQ49Ywc/A8nyKMwxvqO3ohBH3D07LeDLxD2+GDD==; __bid_n=185a0f06f0b9f33b234207; read_mode=day; default_font=font2; locale=zh-CN; Hm_lvt_0c0e9d9b1e7d617b3e6842e85b9fb068=1677674664; BAIDU_SSP_lcr=http://localhost:8888/; FPTOKEN=9aR7Q74Pjtiby0hWs+8OC+QuVOuULdxF5RiFXsZCxtZguI4t4pXZPWszXvIKiK5qyyCISfrCGkkXxCA/TmGCsKuxXIGsUt0roo1Hk3FWuN0e4on1QJ9i7u/gT4EFkPuKLJrifNufedeW1Iws1qij1n9hh3ZLO4OdJ7VWD0yWRzJ9qRkG+yCCD6pP1rDG25+NbR5ccJZZ+sQ6hi1BUUyXc8437EaVk2A/qQxIfUCmWaRr9OHUoRQgqFweCl88gTeUVR447ln1BlWFqssikSTtZNP+JSaCXMgU9Va9DgCHUmf++p+Yq5Vs0/hrUuuYeUhTGOyZchV0pgatqZusu3U60dl/OHN3UxkkUk83paSy0Ietzu0yaWA1CLfmZLRDnnL/5ZctdbYY+g3Sk9CYNtPH0w==|hYfAlXpGQjGm4kMaq/PWT6E+PblQyaQrBzFkXalOeM4=|10|0cd8f2c4502ad7265689616b0e83b86a; _ga=GA1.2.1296484011.1677675227; _gid=GA1.2.232229311.1677675227; remember_user_token=W1syNjQ2MDkwMF0sIiQyYSQxMSRJUi9IRks4T1BUWFo5bW14Q3BTbTIuIiwiMTY3NzY3NTM2Mi44ODgwNzU0Il0%3D--316f3042074c36712cf27541f93abdf363f05116; web_login_version=MTY3NzY3NTM2Mg%3D%3D--a8c5d5e8df0d0cd0ecb44d8732cfcd62b5c2646a; _m7e_session_core=04577b356d69a86184b1b26468ff8726; _gat=1; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2226460900%22%2C%22first_id%22%3A%221837e7ffb9ee32-078ff4508ee4b4-78565470-2073600-1837e7ffb9f483%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%A4%BE%E4%BA%A4%E7%BD%91%E7%AB%99%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC%22%2C%22%24latest_referrer%22%3A%22https%3A%2F%2Fopen.weixin.qq.com%2F%22%7D%2C%22%24device_id%22%3A%221837e7ffb9ee32-078ff4508ee4b4-78565470-2073600-1837e7ffb9f483%22%7D; Hm_lpvt_0c0e9d9b1e7d617b3e6842e85b9fb068=1677675376',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36',
}
response = requests.get("https://www.jianshu.com/",headers = head)
print(response.text)
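As an alternative to pasting the whole cookie string into headers, requests also accepts a cookies keyword that takes a dict. A hedged sketch with a placeholder cookie name and value:
# 'session_id' and its value are placeholders; substitute the real cookies copied from your browser
cookies = {'session_id': 'xxxxxx'}
response = requests.get('https://www.jianshu.com/', headers=head, cookies=cookies)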
(2) Maintaining a session with a Session object
# create a session object
# requests sent through the session go from stateless to stateful
import requests
s = requests.Session()  # requests.session() is an older alias for the same class
# s.get('https://www.baidu.com/')
res = s.get('https://www.baidu.com/s?wd=python&base_query=python&pn=90&oq=python&ie=utf-8&usm=4&rsv_pq=f41c8f1d000399cb&rsv_t=ed15VorzvvhgKbeJyyNkbIwWFsEJ3dAclQ5pyPrU0ZBWA2UfX%2FBrXv%2BFsnE')
print(res.text)
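The effect is easiest to see with httpbin, which can set a cookie on one request and report it on the next; a session carries the cookie across both requests automatically:
import requests

s = requests.Session()
s.get('http://httpbin.org/cookies/set/token/123')  # the server sets a cookie on this session
r = s.get('http://httpbin.org/cookies')  # the session sends the stored cookie back
print(r.text)  # {"cookies": {"token": "123"}}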
Proxy settings
# target site: https://www.baidu.com
import requests
url= 'https://www.baidu.com'
# build the proxy information (note the scheme prefix, required by recent requests versions)
p = {
    'http': 'http://114.231.46.240:8888',
    'https': 'http://114.231.46.240:8888',
}
r = requests.get(url, proxies=p)  # proxies: carries the proxy IP information
print(r.status_code)
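If the proxy requires authentication, the credentials go into the proxy URL. A hedged sketch with placeholder credentials:
# 'user' and 'password' are placeholders for your own proxy credentials
p = {
    'http': 'http://user:password@114.231.46.240:8888',
    'https': 'http://user:password@114.231.46.240:8888',
}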
Timeout settings
# target site: https://www.baidu.com
import requests
url= 'https://www.baidu.com'
r = requests.get(url, timeout=2)  # the timeout keyword sets how many seconds to wait for a response
print(r.status_code)
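timeout can also be a (connect, read) tuple to limit the two phases separately:
r = requests.get(url, timeout=(3, 10))  # 3 s to establish the connection, 10 s to read the response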
Exception handling
import requests

url = 'https://www.baidu.com'
try:  # code that may raise an exception
    r = requests.get(url, timeout=0.0000000001)
    print(r.status_code)
except requests.exceptions.Timeout:  # handle the failure
    print('timeout!')
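All requests errors inherit from requests.exceptions.RequestException, so a broad network-error handler can be written without a bare except. A minimal sketch:
try:
    r = requests.get(url, timeout=2)
except requests.exceptions.RequestException as e:  # base class: timeouts, connection errors, HTTP errors, ...
    print('request failed:', e)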