paper:
Deep High-Resolution Representation Learning for Human Pose Estimation (CVPR 2019)
High-Resolution Representations for Labeling Pixels and Regions
Deep High-Resolution Representation Learning for Visual Recognition (TPAMI)
official implementation: https://github.com/HRNet
third-party implementation: https://github.com/open-mmlab/mmsegmentation/tree/main/configs/hrnet
Background
In classification, the deep low-resolution representations carry rich semantic information, which is sufficient for classifying a whole image. Detection, segmentation, and pose estimation, however, need spatial detail in addition to semantics, and most of that detail resides in the high-resolution representations, so high-resolution representations are crucial for vision tasks such as pose estimation and semantic segmentation.
Previous methods usually compute a low-resolution feature map first and then recover high resolution by upsampling with interpolation or transposed convolutions, e.g. U-Net and other encoder-decoder structures, or they keep the resolution by replacing ordinary convolutions with dilated convolutions, as in the DeepLab series. Work on maintaining a high-resolution representation throughout the network is comparatively rare; examples are GridNet, interlinked CNNs, and the HRNet proposed in this paper. The former two lack a careful study of when to introduce the parallel lower-resolution branches and how to fuse the information of the parallel branches, and they use neither batch normalization nor residual connections, so their performance is weaker.
Contributions
This paper designs a new network architecture, HRNet. It starts from a high-resolution subnetwork, gradually adds subnetworks from high to low resolution, connects the multi-resolution subnetworks in parallel, and repeatedly fuses information across resolutions. This yields rich high-resolution representations, and a high-resolution representation is maintained throughout the entire network.
HRNet was first proposed for pose estimation as HRNetV1. The authors later made some improvements and transferred it to semantic segmentation and object detection, giving HRNetV2 and HRNetV2p. The three differ in how the multi-resolution outputs are used: HRNetV1 keeps only the highest-resolution representation, HRNetV2 upsamples all branches to the highest resolution and concatenates them, and HRNetV2p additionally builds a multi-level feature pyramid from the HRNetV2 representation, as sketched below.
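The following is a hedged sketch of the three output heads (based on the description in the TPAMI paper, not code from the official repositories); y_list holds the branch outputs, highest resolution first:
python
import torch
import torch.nn.functional as F

def hrnet_v1_head(y_list):
    # HRNetV1 (pose estimation): keep only the highest-resolution representation.
    return y_list[0]

def hrnet_v2_head(y_list):
    # HRNetV2 (semantic segmentation): bilinearly upsample the lower-resolution
    # branches to the highest resolution and concatenate along the channel dimension.
    size = y_list[0].shape[2:]
    ups = [y_list[0]] + [
        F.interpolate(y, size=size, mode='bilinear', align_corners=False)
        for y in y_list[1:]
    ]
    return torch.cat(ups, dim=1)

def hrnet_v2p_head(y_list):
    # HRNetV2p (object detection): downsample the HRNetV2 representation into a
    # multi-level feature pyramid by average pooling.
    feat = hrnet_v2_head(y_list)
    return [feat if k == 0 else F.avg_pool2d(feat, kernel_size=2**k, stride=2**k)
            for k in range(4)]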
Code Analysis
The structure of HRNet is shown in Figure 1 and is largely self-explanatory, so instead of describing it at length, we go through the code implementation to understand the details of the architecture.
Here we take the implementation in mmsegmentation as an example; the code lives in mmsegmentation/mmseg/models/backbones/hrnet.py. The network structure is defined in the config file (a sketch of the config is given below). The four stages correspond to Figure 1. num_modules is the number of HRModules in a stage; Figure 1 draws only one module per stage. num_branches is the number of parallel multi-resolution branches, i.e. the number of rows of feature maps within a stage in Figure 1. block is the type of basic block used in each module, corresponding to each channel map in Figure 1; it is a bottleneck in the first stage and a basic block in all the others (see https://blog.csdn.net/ooooocj/article/details/122226401 for the difference between the two). num_blocks is the number of blocks on each branch of a module; the length of the num_blocks tuple equals the number of branches num_branches, and the same holds for num_channels. num_channels is the number of output channels of each branch in that stage.
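For reference, the backbone part of the config should look roughly like the following sketch (based on the standard HRNet-W18 settings shipped with mmsegmentation, e.g. configs/_base_/models/fcn_hr18.py; surrounding fields are omitted and defaults may differ):
python
backbone = dict(
    type='HRNet',
    norm_cfg=dict(type='SyncBN', requires_grad=True),
    extra=dict(
        stage1=dict(
            num_modules=1,
            num_branches=1,
            block='BOTTLENECK',
            num_blocks=(4, ),
            num_channels=(64, )),
        stage2=dict(
            num_modules=1,
            num_branches=2,
            block='BASIC',
            num_blocks=(4, 4),
            num_channels=(18, 36)),
        stage3=dict(
            num_modules=4,
            num_branches=3,
            block='BASIC',
            num_blocks=(4, 4, 4),
            num_channels=(18, 36, 72)),
        stage4=dict(
            num_modules=3,
            num_branches=4,
            block='BASIC',
            num_blocks=(4, 4, 4, 4),
            num_channels=(18, 36, 72, 144))))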
With an input of shape (16, 3, 480, 480), the code of the first stage is shown below. The input first passes through self.conv1 and self.conv2, two 3x3 convolutions with stride=2; this stem is not drawn in Figure 1.
python
x = self.conv1(x)  # (16,64,240,240)
x = self.norm1(x)
x = self.relu(x)
x = self.conv2(x)  # (16,64,120,120)
x = self.norm2(x)
x = self.relu(x)
x = self.layer1(x)  # (16,256,120,120)

x_list = []
for i in range(self.stage2_cfg['num_branches']):
    if self.transition1[i] is not None:
        x_list.append(self.transition1[i](x))
    else:
        x_list.append(x)  # [(16,18,120,120), (16,36,60,60)]
self.layer1 corresponds to the first 4 channel maps of the first stage in Figure 1 and consists of 4 Bottleneck blocks; a minimal sketch of the block is given below.
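A minimal sketch of the Bottleneck block (the standard ResNet bottleneck with expansion 4, not the mmseg class itself). self.layer1 stacks 4 of these: the first projects 64 -> 256 channels with a 1x1 conv on the shortcut, the remaining three take 256 channels and use an identity shortcut, so the spatial size stays at 120x120.
python
import torch.nn as nn

class Bottleneck(nn.Module):
    # 1x1 reduce -> 3x3 -> 1x1 expand (x4), plus a residual shortcut.
    expansion = 4

    def __init__(self, in_channels, mid_channels, downsample=None):
        super().__init__()
        out_channels = mid_channels * self.expansion
        self.conv1 = nn.Conv2d(in_channels, mid_channels, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid_channels)
        self.conv2 = nn.Conv2d(mid_channels, mid_channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(mid_channels)
        self.conv3 = nn.Conv2d(mid_channels, out_channels, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample  # 1x1 conv + BN in the first block, None otherwise

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + identity)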
self.transition1 corresponds to the two channel maps at the end of the first stage in Figure 1 where the network splits into 2 rows: the first row keeps the resolution unchanged, while the second row is downsampled to half the resolution and doubles the number of channels relative to the first branch. A sketch of the modules it builds is given below.
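A minimal sketch of what self.transition1 builds, assuming the behavior of _make_transition_layer in mmseg (the real implementation wraps the downsampling branch in an extra nested Sequential):
python
import torch.nn as nn

transition1 = nn.ModuleList([
    # branch 0: keep the 120x120 resolution, reduce channels 256 -> 18
    nn.Sequential(
        nn.Conv2d(256, 18, kernel_size=3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(18),
        nn.ReLU(inplace=True)),
    # branch 1: halve the resolution with stride=2, 256 -> 36 channels
    nn.Sequential(
        nn.Conv2d(256, 36, kernel_size=3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(36),
        nn.ReLU(inplace=True)),
])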
Stages 2, 3 and 4 follow the same procedure; compared with stage 1, each of them adds a multi-resolution fusion step. Stage 3 is analyzed below as an example.
python
y_list = self.stage2(x_list)  # [(16,18,120,120), (16,36,60,60)]

x_list = []
for i in range(self.stage3_cfg['num_branches']):
    if self.transition2[i] is not None:
        x_list.append(self.transition2[i](y_list[-1]))
    else:
        x_list.append(y_list[i])  # [(16,18,120,120), (16,36,60,60), (16,72,30,30)]
y_list = self.stage3(x_list)  # [(16,18,120,120), (16,36,60,60), (16,72,30,30)]

x_list = []
for i in range(self.stage4_cfg['num_branches']):
    if self.transition3[i] is not None:
        x_list.append(self.transition3[i](y_list[-1]))
    else:
        x_list.append(y_list[i])  # [(16,18,120,120), (16,36,60,60), (16,72,30,30), (16,144,15,15)]
y_list = self.stage4(x_list)  # [(16,18,120,120), (16,36,60,60), (16,72,30,30), (16,144,15,15)]
As shown in Figure 1, the input of stage3 is the output of the 3 branches produced by transition2. From the config we can see that stage3 has num_modules=4, but every module performs the same operations: each of the 3 branches goes through 4 basic blocks, and the 3 branches are then fused by fuse_layers. The detailed structure of one module is shown below (the fuse_layers part of the printout is discussed separately afterwards).
python
(0): HRModule(
  (branches): ModuleList(
    (0): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(18, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(18, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (1): BasicBlock(
        (conv1): Conv2d(18, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(18, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (2): BasicBlock(
        (conv1): Conv2d(18, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(18, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (3): BasicBlock(
        (conv1): Conv2d(18, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(18, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
    (1): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(36, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(36, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (1): BasicBlock(
        (conv1): Conv2d(36, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(36, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (2): BasicBlock(
        (conv1): Conv2d(36, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(36, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (3): BasicBlock(
        (conv1): Conv2d(36, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(36, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
    (2): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (1): BasicBlock(
        (conv1): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (2): BasicBlock(
        (conv1): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (3): BasicBlock(
        (conv1): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): _BatchNormXd(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): _BatchNormXd(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
  )
)
fuse_layers carries out the multi-resolution fusion at the end of each HRModule. Its implementation is as follows.
python
def _make_fuse_layers(self):
    """Build fuse layer."""
    if self.num_branches == 1:
        return None

    num_branches = self.num_branches
    in_channels = self.in_channels
    fuse_layers = []
    num_out_branches = num_branches if self.multiscale_output else 1
    for i in range(num_out_branches):
        fuse_layer = []
        for j in range(num_branches):
            if j > i:
                fuse_layer.append(
                    nn.Sequential(
                        build_conv_layer(
                            self.conv_cfg,
                            in_channels[j],
                            in_channels[i],
                            kernel_size=1,
                            stride=1,
                            padding=0,
                            bias=False),
                        build_norm_layer(self.norm_cfg, in_channels[i])[1],
                        # we set align_corners=False for HRNet
                        Upsample(
                            scale_factor=2**(j - i),
                            mode='bilinear',
                            align_corners=False)))
            elif j == i:
                fuse_layer.append(None)
            else:
                conv_downsamples = []
                for k in range(i - j):
                    if k == i - j - 1:
                        conv_downsamples.append(
                            nn.Sequential(
                                build_conv_layer(
                                    self.conv_cfg,
                                    in_channels[j],
                                    in_channels[i],
                                    kernel_size=3,
                                    stride=2,
                                    padding=1,
                                    bias=False),
                                build_norm_layer(self.norm_cfg,
                                                 in_channels[i])[1]))
                    else:
                        conv_downsamples.append(
                            nn.Sequential(
                                build_conv_layer(
                                    self.conv_cfg,
                                    in_channels[j],
                                    in_channels[j],
                                    kernel_size=3,
                                    stride=2,
                                    padding=1,
                                    bias=False),
                                build_norm_layer(self.norm_cfg,
                                                 in_channels[j])[1],
                                nn.ReLU(inplace=False)))
                fuse_layer.append(nn.Sequential(*conv_downsamples))
        fuse_layers.append(nn.ModuleList(fuse_layer))

    return nn.ModuleList(fuse_layers)
# In HRModule.forward(), after each branch has gone through its blocks,
# the branch outputs x are fused by the fuse layers built above:
x_fuse = []
for i in range(len(self.fuse_layers)):
    y = 0
    for j in range(self.num_branches):
        if i == j:
            y += x[j]
        elif j > i:
            y = y + resize(
                self.fuse_layers[i][j](x[j]),
                size=x[i].shape[2:],
                mode='bilinear',
                align_corners=False)
        else:
            y += self.fuse_layers[i][j](x[j])
    x_fuse.append(self.relu(y))
From the implementation and the printed structure we can see that in the fuse layer, the output of every branch fuses the inputs of all branches. When input and output belong to the same branch, i.e. the resolution is the same, the operation is None and the input is added directly. When the input branch has a lower resolution than the output, no matter whether the gap is 2x, 4x or more, the input first goes through a 1x1 convolution that aligns the channel number and is then upsampled to the output size in a single bilinear-interpolation step. When the input resolution is higher than the output, the resolution is aligned with stride=2 3x3 convolutions: one convolution when the gap is 2x, two when it is 4x, and so on, and only the last convolution changes the number of channels; all the preceding downsampling convolutions keep the channel count unchanged. A worked example is sketched below.
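To make these rules concrete, here is a minimal standalone sketch (plain PyTorch, not the mmseg module itself) for the 3-branch case with the shapes printed above: output branch 0 (18 channels, 120x120) adds its own input directly and receives branches 1 and 2 through a 1x1 conv plus bilinear upsampling, while output branch 2 (72 channels, 30x30) receives branch 0 through two stride-2 3x3 convolutions, only the last of which changes the channel count.
python
import torch
import torch.nn as nn
import torch.nn.functional as F

x0 = torch.randn(16, 18, 120, 120)  # branch 0
x1 = torch.randn(16, 36, 60, 60)    # branch 1
x2 = torch.randn(16, 72, 30, 30)    # branch 2

# Output branch i=0: identity for j=0, 1x1 conv + BN + bilinear upsample for j=1,2.
up1 = nn.Sequential(nn.Conv2d(36, 18, 1, bias=False), nn.BatchNorm2d(18))
up2 = nn.Sequential(nn.Conv2d(72, 18, 1, bias=False), nn.BatchNorm2d(18))
y0 = (x0
      + F.interpolate(up1(x1), size=x0.shape[2:], mode='bilinear', align_corners=False)
      + F.interpolate(up2(x2), size=x0.shape[2:], mode='bilinear', align_corners=False))
print(y0.shape)  # torch.Size([16, 18, 120, 120])

# Output branch i=2: branch 0 is 4x larger, so two stride-2 3x3 convs are used and
# only the last one changes the channels (18 -> 72); the 2x-larger branch 1 needs one.
down_0_to_2 = nn.Sequential(
    nn.Conv2d(18, 18, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(18), nn.ReLU(),
    nn.Conv2d(18, 72, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(72))
down_1_to_2 = nn.Sequential(
    nn.Conv2d(36, 72, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(72))
y2 = x2 + down_0_to_2(x0) + down_1_to_2(x1)
print(y2.shape)  # torch.Size([16, 72, 30, 30])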
After all the modules of a stage have finished, a transition is applied before the next stage: the lowest-resolution output is downsampled once more with a stride=2 convolution to create a new, lower-resolution branch. A sketch of self.transition3 is given below.
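A hedged sketch of what self.transition3 builds (assuming the behavior of _make_transition_layer; the exact mmseg printout differs slightly, e.g. the new branch is nested in an extra Sequential): the three existing branches pass through unchanged (the None entries, matching the `if self.transition3[i] is not None` check in the forward code above), and a new fourth branch is created from the 72-channel 30x30 branch by a stride-2 3x3 convolution mapping 72 -> 144 channels, giving a 15x15 output.
python
import torch.nn as nn

transition3 = nn.ModuleList([
    None,  # branch 0: 18 @ 120x120, unchanged
    None,  # branch 1: 36 @ 60x60, unchanged
    None,  # branch 2: 72 @ 30x30, unchanged
    # new branch 3: 72 @ 30x30 -> 144 @ 15x15
    nn.Sequential(
        nn.Conv2d(72, 144, kernel_size=3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(144),
        nn.ReLU(inplace=True)),
])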