Chapter 5: Computer Vision - Hands-On Image Classification Project
Part 1: Classic Convolutional Neural Network Backbone Models and Images
Section 4: The Architecture of the Classic CNN ResNet
1. Background and Motivation
After VGGNet, convolutional networks kept getting deeper (19 layers and beyond), but simply stacking more layers ran into vanishing and exploding gradients, making training difficult; past a certain depth, accuracy even degraded rather than improved.
Microsoft Research proposed ResNet (Residual Network) in 2015. Its core contribution is the residual learning mechanism (Residual Learning), which greatly alleviates the gradient problems of training very deep networks.
ResNet achieved breakthrough results in the ImageNet competition: the 152-layer ResNet-152 won the ILSVRC 2015 classification task, bringing the top-5 error rate down to 3.57% for the first time.
2. Core Idea: Residual Learning
The key to ResNet is the residual block (Residual Block), which uses a shortcut connection to add the input directly to the output:
H(x) = F(x) + x
where:
- x: the input
- F(x): the residual mapping learned by the convolutional and activation layers
- H(x): the output
With this design the network learns the "residual", i.e., the difference between the desired output and the input, rather than the full mapping directly, which makes optimization easier.
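A minimal sketch of this idea in PyTorch (the single 3×3 convolution standing in for F(x) is a hypothetical placeholder, not the actual block ResNet uses):

```python
import torch
import torch.nn as nn

# F(x): a stand-in residual branch; in a real ResNet this is a stack of
# conv/BN/ReLU layers (see Section 3), here just one 3x3 convolution.
residual_branch = nn.Conv2d(64, 64, kernel_size=3, padding=1)

x = torch.randn(1, 64, 32, 32)   # dummy input
h = residual_branch(x) + x       # H(x) = F(x) + x: the shortcut connection
print(h.shape)                   # torch.Size([1, 64, 32, 32])
```

Note that the addition requires F(x) and x to have the same shape; when they differ, ResNet applies a 1×1 projection on the shortcut (the downsample module in Section 6).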
3. Residual Block Structure
A typical residual block contains:
- Two convolutional layers (usually 3×3 convolutions)
- Batch Normalization after each convolution; ReLU after the first BN, and again after the addition
- A shortcut connection that adds the input x directly to the block's output
Schematic:
Input x ─────────────────────┐
   │                         │
   ▼                         │
Conv → BN → ReLU             │ Identity (x)
   │                         │
   ▼                         │
Conv → BN                    │
   │                         │
   ▼                         │
 Add (+) ◄───────────────────┘
   │
   ▼
 ReLU
4. The Main ResNet Architectures
Depending on depth, ResNet comes in several versions: ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152.
Specifically:
- ResNet-18 / ResNet-34 use the basic residual block (Basic Block)
- ResNet-50 / ResNet-101 / ResNet-152 use the bottleneck residual block (Bottleneck Block) to reduce computation
Bottleneck Block structure:
- 1×1 convolution (reduce the channel dimension)
- 3×3 convolution (feature extraction)
- 1×1 convolution (expand the channel dimension back)
This preserves representational power while significantly reducing parameters and computation; a code sketch follows below.
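A minimal sketch of a Bottleneck block, written to be compatible with the ResNet class in Section 6 (the expansion = 4 factor follows the common convention where the block's output has four times the base channel count; treat this as an illustration, not the canonical implementation):

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    expansion = 4  # output channels = out_channels * 4, by convention

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super().__init__()
        # 1x1 conv: reduce channels
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        # 3x3 conv: feature extraction (stride applied here)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # 1x1 conv: expand channels back up
        self.conv3 = nn.Conv2d(out_channels, out_channels * self.expansion,
                               kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample  # projection shortcut when shapes differ

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        if self.downsample is not None:
            identity = self.downsample(x)
        return self.relu(out + identity)  # add the shortcut, then ReLU
```

With the ResNet class defined in Section 6, ResNet(Bottleneck, [3, 4, 6, 3]) would then build a ResNet-50-style network.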
5. ResNet's Innovations and Advantages
- Solves the degradation problem: accuracy no longer drops as depth increases
- Makes extremely deep networks trainable: over 100 layers, and even 1000+ layers in the original paper's CIFAR experiments
- A foundation for transfer learning: ResNet became the backbone of many later computer-vision models (e.g., Faster R-CNN and Mask R-CNN), and its residual design carried over into architectures such as YOLO's backbones and the Vision Transformer
- Excellent performance: strong results in image classification, object detection, semantic segmentation, and other tasks
6. PyTorch Implementation Example
```python
import torch
import torch.nn as nn


# Basic residual block (BasicBlock), used in ResNet-18 / ResNet-34
class BasicBlock(nn.Module):
    expansion = 1  # BasicBlock keeps the channel count unchanged

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample  # projection shortcut when shapes differ

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            identity = self.downsample(x)
        out += identity       # shortcut connection: add the input
        out = self.relu(out)  # ReLU after the addition
        return out


# Full ResNet; with BasicBlock and layers=[2, 2, 2, 2] this is ResNet-18
class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=1000):
        super(ResNet, self).__init__()
        self.in_channels = 64
        # Stem: 7x7 conv (stride 2) followed by 3x3 max pooling (stride 2)
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # Four stages of residual blocks; stages 2-4 halve the spatial size
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, out_channels, blocks, stride=1):
        # Use a 1x1 projection on the shortcut when stride or channels change
        downsample = None
        if stride != 1 or self.in_channels != out_channels * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channels, out_channels * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels * block.expansion),
            )
        layers = []
        layers.append(block(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.in_channels, out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x


def ResNet18(num_classes=1000):
    # Two BasicBlocks per stage -> 18 weighted layers in total
    return ResNet(BasicBlock, [2, 2, 2, 2], num_classes)


# Quick test
model = ResNet18(num_classes=10)
print(model)
```
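As a quick sanity check of the forward pass (this assumes the block above has been run; 224×224 is the standard ImageNet input size):

```python
x = torch.randn(2, 3, 224, 224)  # a batch of 2 dummy RGB images
out = model(x)
print(out.shape)                 # torch.Size([2, 10]) - one logit per class
```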
7. Summary
- ResNet's residual learning solves the degradation problem of deep networks
- Residual blocks make gradients easier to propagate via shortcut connections
- ResNet has become a classic backbone for computer-vision tasks
- PyTorch ships pretrained models as torchvision.models.resnet18/34/50/101/152, which can be used directly; see the sketch below
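A minimal sketch of loading a pretrained ResNet-18 from torchvision and adapting it for a hypothetical 10-class task (the weights enum is the torchvision >= 0.13 API; older versions use pretrained=True instead):

```python
import torch
from torchvision import models

# Load ResNet-18 with ImageNet-pretrained weights
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Transfer learning: replace the final fully connected layer
# with a new head for a 10-class dataset
model.fc = torch.nn.Linear(model.fc.in_features, 10)
model.eval()
```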