The Idea Behind Residual Networks
As network depth increases, the amount of information the network can capture grows and the extracted features become richer. However, before the residual structure was proposed, experiments showed that as the number of layers grows, model accuracy first keeps improving until it saturates; as depth increases further, accuracy stops improving and may even drop noticeably. This is known as the "degradation problem", and it is often attributed to the vanishing- and exploding-gradient problems encountered when training deep neural networks.
In a traditional deep neural network, as the number of layers increases, the gradients propagated backward shrink or blow up layer by layer, making the network hard to converge or unstable. One explanation of this phenomenon is that, once the network becomes very deep, tiny changes in lower-layer parameters cause drastic changes in higher layers, so the optimizer struggles to find a good solution. The residual network (ResNet), proposed by Kaiming He et al. in 2015, was designed to address this problem. By introducing residual modules, ResNet lets gradients propagate through connections that skip a number of layers, so that even very deep networks remain easy to train.
Suppose the input to a sub-network is $x$ and the desired output is $H(x)$. Instead of learning $H(x)$ directly, we shift perspective and let the network learn the residual between the desired output $H(x)$ and the input $x$, i.e. $F(x) = H(x) - x$. When the residual is close to 0, the layer effectively performs an identity mapping, so adding it cannot make the network worse.
The residual module is shown in the figure. $X_l$ is the input to the residual learning module and is also passed to the output through a skip connection. If the information increment learned by the two convolutional layers is $F(X_l)$, the final output of the residual learning module is $F(X_l) + X_l$. The residual module therefore focuses on learning the residual $F(X_l)$, i.e. the increment of the output over the input. Its core idea can be written as:
$$X_{l+1} = X_l + F(X_l).$$
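Unrolling this recurrence across several stacked blocks makes the benefit for gradient propagation explicit (a brief standard derivation in the spirit of He et al.'s identity-mapping analysis, not taken from the figure above):
$$X_L = X_l + \sum_{i=l}^{L-1} F(X_i), \qquad \frac{\partial \mathcal{L}}{\partial X_l} = \frac{\partial \mathcal{L}}{\partial X_L}\left(1 + \frac{\partial}{\partial X_l}\sum_{i=l}^{L-1} F(X_i)\right),$$
where $\mathcal{L}$ is the loss. The additive term $1$ means that part of the gradient reaches layer $l$ directly through the skip connections, no matter how small the gradients through the convolutional branches become.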
Two residual structures are commonly used, as shown in the figure. Structure (a) is used when the network is relatively shallow (18/34 layers), while (b) is used for deeper networks (50/101/152 layers); in (b), the 1×1 convolutions first reduce and then restore the channel dimension, which greatly reduces the number of parameters.
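As a rough worked example with illustrative channel counts (256 input channels and a 64-channel bottleneck, as in a typical ResNet-50 stage): two plain 3×3 convolutions on 256 channels need about $2 \times 3 \times 3 \times 256 \times 256 \approx 1.18$M weights, whereas the bottleneck of a 1×1 (256→64), a 3×3 (64→64) and a 1×1 (64→256) convolution needs about $256 \cdot 64 + 3 \cdot 3 \cdot 64 \cdot 64 + 64 \cdot 256 \approx 0.07$M weights, roughly a 17× reduction.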
The residual module (Residual Block) is the basic building unit of deep convolutional networks such as ResNet. By introducing "skip connections", it alleviates the vanishing-gradient problem in deep networks, so that much deeper networks can be trained effectively. Its core idea is to learn the residual function between input and output rather than the input-to-output mapping itself, which makes the network easier to optimize.
Basic Structure of a Residual Module
A typical residual module contains two paths:
- Main path (main branch): extracts features through several convolutional layers, batch normalization layers, and non-linear activations.
- Shortcut connection (residual branch): connects the input directly to the output, which is what enables residual learning.

The outputs of the main path and the shortcut connection are added together and then passed through an activation function (usually ReLU) to produce the final output of the module.
The Standard Residual Module
A standard residual module has the following structure:
```python
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channels, out_channels, stride=1):
        super(BasicBlock, self).__init__()
        # Main path: two 3x3 convolutions, each followed by batch normalization
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # Shortcut: a 1x1 projection is only needed when the spatial size
        # or the channel count changes
        self.downsample = None
        if stride != 1 or in_channels != out_channels:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)
        return out
```
In this BasicBlock:
- Main path: two 3×3 convolutional layers, each followed by batch normalization; ReLU is applied after the first convolution and again after the residual addition. The second convolution uses stride 1, so the output feature map keeps the same size as its input (unless downsampling happens in the first convolution).
- Shortcut connection: the input is added directly to the output of the main path. If the channel counts differ or downsampling is applied, the `downsample` layer is used to match the shapes.
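As a quick, illustrative sanity check (not part of the original text, and assuming the BasicBlock class defined above is in scope), a random feature map can be run through a downsampling block to verify the output shape:

```python
import torch

# Downsampling block: stride 2 halves the spatial size, channels go 64 -> 128,
# so the shortcut uses the 1x1 `downsample` projection defined above.
block = BasicBlock(in_channels=64, out_channels=128, stride=2)
x = torch.randn(1, 64, 56, 56)   # (batch, channels, height, width)
y = block(x)
print(y.shape)                   # expected: torch.Size([1, 128, 28, 28])
```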
The Bottleneck Residual Module
Deeper ResNet variants (such as ResNet-50, ResNet-101, and ResNet-152) use the Bottleneck residual module, which reduces the channel count in the middle layer to cut the amount of computation while preserving the network's expressive power.
```python
import torch
import torch.nn as nn


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1):
        super().__init__()
        # 1x1 convolution reduces the channel count
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        # 3x3 convolution extracts features at the reduced width
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        # 1x1 convolution restores (expands) the channel count
        self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = None
        if stride != 1 or inplanes != planes * self.expansion:
            self.downsample = nn.Sequential(
                nn.Conv2d(inplanes, planes * self.expansion, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(planes * self.expansion),
            )

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)
        return out
```
In this Bottleneck module:
- Main path: three convolutional layers. The first is a 1×1 convolution that reduces the channel count, the second is a 3×3 convolution that extracts features, and the third is a 1×1 convolution that restores (expands) the channel count. Each convolution is followed by batch normalization; ReLU is applied after the first two convolutions and after the residual addition.
- Shortcut connection: the input is added directly to the output of the main path. If the channel counts differ or downsampling is applied, the `downsample` layer is used to match the shapes.
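Similarly, a small illustrative check for the Bottleneck above (channel values here are just an example matching a typical ResNet-50 stage) shows that the output has planes * expansion channels:

```python
import torch

# 256 input channels are squeezed to 64, processed with a 3x3 convolution,
# then expanded back to 64 * 4 = 256, so no downsample projection is needed.
block = Bottleneck(inplanes=256, planes=64, stride=1)
x = torch.randn(1, 256, 56, 56)
y = block(x)
print(y.shape)                   # expected: torch.Size([1, 256, 56, 56])
```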
Advantages of the Residual Module
- Mitigates vanishing gradients: the direct shortcut connections let gradients propagate back to shallower layers more effectively (a small illustrative sketch follows this list).
- Deeper architectures: with residual modules, very deep networks (such as ResNet-152) can be built without suffering from severe degradation.
- Better performance: residual modules achieve strong results on many computer-vision tasks, such as image classification, object detection, and semantic segmentation.
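The following toy sketch (purely illustrative; the layer width, depth, and MLP layers are arbitrary choices, not from the text above) compares the gradient reaching the first layer of a deep plain stack with that of a residual stack of the same depth, which is one way to see the effect of the shortcut connections on backpropagation:

```python
import torch
import torch.nn as nn

depth, dim = 50, 64

# Plain stack: 50 Linear+ReLU layers with no skip connections.
plain = nn.Sequential(*[nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                        for _ in range(depth)])

class ResLayer(nn.Module):
    """One residual layer: output = input + F(input)."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x):
        return x + self.f(x)

# Residual stack of the same depth.
residual = nn.Sequential(*[ResLayer(dim) for _ in range(depth)])

x = torch.randn(8, dim)
for name, net in [("plain", plain), ("residual", residual)]:
    net.zero_grad()
    net(x).sum().backward()
    first_linear = net[0][0] if name == "plain" else net[0].f[0]
    # Gradient norm at the very first layer; the residual stack typically
    # receives a much healthier gradient here.
    print(name, first_linear.weight.grad.norm().item())
```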
Summary
By introducing shortcut connections, the residual module allows deep neural networks to be trained effectively. A standard residual module contains two paths: the main path and the shortcut connection. The Bottleneck residual module further reduces computation by introducing 1×1 convolutional layers while preserving the network's expressive power. With these designs, residual networks have achieved remarkable success in computer-vision tasks.
Complete Code
```python
"""resnet in pytorch

[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
    Deep Residual Learning for Image Recognition
    https://arxiv.org/abs/1512.03385v1
"""
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """Basic block for ResNet-18 and ResNet-34."""

    # BasicBlock and BottleNeck blocks have different output sizes;
    # the class attribute `expansion` is used to distinguish them.
    expansion = 1

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()

        # residual function
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels * BasicBlock.expansion, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels * BasicBlock.expansion)
        )

        # shortcut
        self.shortcut = nn.Sequential()

        # if the shortcut output dimension does not match the residual function,
        # use a 1x1 convolution to match the dimension
        if stride != 1 or in_channels != BasicBlock.expansion * out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * BasicBlock.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels * BasicBlock.expansion)
            )

    def forward(self, x):
        return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))
class BottleNeck(nn.Module):
    """Residual block for ResNets with more than 50 layers."""

    expansion = 4

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, stride=stride, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels * BottleNeck.expansion, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels * BottleNeck.expansion),
        )

        self.shortcut = nn.Sequential()

        if stride != 1 or in_channels != out_channels * BottleNeck.expansion:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * BottleNeck.expansion, stride=stride, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels * BottleNeck.expansion)
            )

    def forward(self, x):
        return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))
class ResNet(nn.Module):

    def __init__(self, block, num_block, num_classes=100):
        super().__init__()
        self.in_channels = 64

        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True))
        # we use a different input size than the original paper,
        # so conv2_x's stride is 1
        self.conv2_x = self._make_layer(block, 64, num_block[0], 1)
        self.conv3_x = self._make_layer(block, 128, num_block[1], 2)
        self.conv4_x = self._make_layer(block, 256, num_block[2], 2)
        self.conv5_x = self._make_layer(block, 512, num_block[3], 2)
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, out_channels, num_blocks, stride):
        """Build one ResNet stage ("layer" here does not mean a single network
        layer such as a conv layer); one stage may contain more than one
        residual block.

        Args:
            block: block type, BasicBlock or BottleNeck
            out_channels: output channel depth of this stage
            num_blocks: how many blocks per stage
            stride: the stride of the first block of this stage
        Return:
            a ResNet stage as an nn.Sequential
        """
        # the stride of the first block of a stage may be 1 or 2,
        # all other blocks always use stride 1
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels * block.expansion

        return nn.Sequential(*layers)

    def forward(self, x):
        output = self.conv1(x)
        output = self.conv2_x(output)
        output = self.conv3_x(output)
        output = self.conv4_x(output)
        output = self.conv5_x(output)
        output = self.avg_pool(output)
        output = output.view(output.size(0), -1)
        output = self.fc(output)
        return output
def resnet18():
    """Return a ResNet-18 object."""
    return ResNet(BasicBlock, [2, 2, 2, 2])


def resnet34():
    """Return a ResNet-34 object."""
    return ResNet(BasicBlock, [3, 4, 6, 3])


def resnet50():
    """Return a ResNet-50 object."""
    return ResNet(BottleNeck, [3, 4, 6, 3])


def resnet101():
    """Return a ResNet-101 object."""
    return ResNet(BottleNeck, [3, 4, 23, 3])


def resnet152():
    """Return a ResNet-152 object."""
    return ResNet(BottleNeck, [3, 8, 36, 3])
```
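Finally, a short usage sketch (not from the original text; it assumes 32×32 CIFAR-style inputs, which is consistent with the 3×3 conv1, the stride-1 conv2_x, and the default num_classes=100 above, and that the definitions above are in scope):

```python
if __name__ == "__main__":
    model = resnet18()
    x = torch.randn(4, 3, 32, 32)                        # batch of 4 RGB 32x32 images
    logits = model(x)
    print(logits.shape)                                  # torch.Size([4, 100])
    print(sum(p.numel() for p in model.parameters()))    # total parameter count
```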