1. The SAConv Convolution
1.1 Introduction to SAConv
The figure below shows the structure of SAConv (Switchable Atrous Convolution, from DetectoRS). Let us briefly walk through its processing pipeline and its advantages.
Processing pipeline:
- Pre-Global Context:
  - The input feature map first goes through global average pooling (Global Average Pooling) followed by a 1×1 convolution. This step compresses and re-weights information across channels, extracting global context.
  - The extracted global context is broadcast back to the input resolution and added element-wise to the input features, so global information is fused in before the main convolution.
- Switchable Atrous Convolution:
  - This is the core of the module. It contains two parallel 3×3 atrous (dilated) convolution branches, one with dilation rate 1 and one with dilation rate 3. The two branches share the same standardized weights, with the rate-3 branch adding a learnable weight offset. Atrous convolution enlarges the receptive field without adding parameters, which helps capture richer multi-scale information.
  - Switch mechanism: a pixel-wise switch map S is predicted from the input by a 5×5 average pooling followed by a 1×1 convolution. The two branch outputs are then fused as the weighted sum S * out_rate1 + (1 - S) * out_rate3, so the module can adaptively shift attention between local detail and longer-range context at every spatial location.
- Post-Global Context:
  - After the switchable atrous convolution, the feature map is enhanced with global context once more: global average pooling followed by a 1×1 convolution, whose output is added back to the features.
  - This step keeps the representation balanced between fine detail and global structure.
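The dilation arithmetic in the pipeline above can be checked with a small calculation: a k×k convolution with dilation d covers an effective kernel extent of k + (k - 1)(d - 1), and "same" padding for stride 1 is d(k - 1)/2. A plain-Python sketch (the formula is standard; the function names are ours):

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective kernel extent of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

def same_padding(k: int, d: int) -> int:
    """Padding that preserves spatial size at stride 1 (odd k)."""
    return d * (k - 1) // 2

# The two SAC branches: 3x3 with dilation 1 and dilation 3.
print(effective_kernel(3, 1), same_padding(3, 1))  # 3 1
print(effective_kernel(3, 3), same_padding(3, 3))  # 7 3
```

The second line also explains why, in the code below, tripling the dilation goes hand in hand with tripling the padding.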
Advantages:
- Multi-scale feature extraction:
  - Through atrous convolution, the module captures features at different scales, handling fine detail while also covering long-range context. Running two dilation rates within the same layer lets it pick up both local and distant cues, improving perception of complex scenes.
- Adaptive adjustment:
  - The switch map S lets the module blend the two dilation rates dynamically, per spatial location, adapting to the needs of different tasks and images. This makes the network more flexible when handling heterogeneous features and strengthens feature learning.
- Global context enhancement:
  - Injecting global context both before and after the convolution ensures the model keeps hold of global structure while extracting local detail, so it stays sensitive to local features without losing the overall layout of complex scenes.
- Computational efficiency:
  - The multi-scale information comes almost for free: both branches reuse the same 3×3 weights, and atrous convolution enlarges the receptive field without increasing the parameter count, balancing efficiency and accuracy.
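The efficiency claim can be made concrete: a convolution's parameter count does not depend on its dilation rate, so the dilation-3 branch buys a larger receptive field without the cost of a dense kernel of the same extent. A quick sanity check in plain Python (the helper name is ours):

```python
def conv_params(c_in: int, c_out: int, k: int, groups: int = 1, bias: bool = False) -> int:
    """Parameter count of a k x k convolution; note dilation does not appear."""
    n = c_out * (c_in // groups) * k * k
    return n + (c_out if bias else 0)

# A 3x3 conv has the same parameter count at any dilation rate:
print(conv_params(64, 64, 3))  # 36864
# versus a dense conv covering the same 7x7 receptive field as dilation 3:
print(conv_params(64, 64, 7))  # 200704
```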
1.2 SAConv Core Code
```python
import torch
import torch.nn as nn

from ultralytics.nn.modules.conv import autopad, Conv

__all__ = ['SAConv2d']


class ConvAWS2d(nn.Conv2d):
    """Conv2d with adaptive weight standardization (AWS).

    The raw weight is standardized to zero mean and unit variance per output
    channel, then rescaled by the buffers `weight_gamma` and `weight_beta`,
    which are calibrated from a pretrained checkpoint in `_load_from_state_dict`.
    """

    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True):
        super().__init__(in_channels, out_channels, kernel_size, stride=stride,
                         padding=padding, dilation=dilation, groups=groups, bias=bias)
        self.register_buffer('weight_gamma', torch.ones(self.out_channels, 1, 1, 1))
        self.register_buffer('weight_beta', torch.zeros(self.out_channels, 1, 1, 1))

    def _get_weight(self, weight):
        # Standardize each output-channel filter to zero mean / unit std,
        # then apply the affine rescaling stored in the buffers.
        weight_mean = weight.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        weight = weight - weight_mean
        std = torch.sqrt(weight.view(weight.size(0), -1).var(dim=1) + 1e-5).view(-1, 1, 1, 1)
        weight = weight / std
        weight = self.weight_gamma * weight + self.weight_beta
        return weight

    def forward(self, x):
        weight = self._get_weight(self.weight)
        return super()._conv_forward(x, weight, None)

    def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
                              missing_keys, unexpected_keys, error_msgs):
        # Sentinel: if the checkpoint already contains weight_gamma it overwrites
        # the -1; otherwise calibrate gamma/beta from the loaded (non-AWS) weights.
        self.weight_gamma.data.fill_(-1)
        super()._load_from_state_dict(state_dict, prefix, local_metadata, strict,
                                      missing_keys, unexpected_keys, error_msgs)
        if self.weight_gamma.data.mean() > 0:
            return
        weight = self.weight.data
        weight_mean = weight.data.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        self.weight_beta.data.copy_(weight_mean)
        std = torch.sqrt(weight.view(weight.size(0), -1).var(dim=1) + 1e-5).view(-1, 1, 1, 1)
        self.weight_gamma.data.copy_(std)


class SAConv2d(ConvAWS2d):
    """Switchable Atrous Convolution (SAC) from DetectoRS, with BN and activation."""

    def __init__(self, in_channels, out_channels, kernel_size, s=1, p=None,
                 g=1, d=1, act=True, bias=True):
        super().__init__(in_channels, out_channels, kernel_size, stride=s,
                         padding=autopad(kernel_size, p, d), dilation=d,
                         groups=g, bias=bias)
        # Pixel-wise switch S: initialized (weight 0, bias 1) to output 1,
        # i.e. the module starts out using only the small-dilation branch.
        self.switch = torch.nn.Conv2d(self.in_channels, 1, kernel_size=1, stride=s, bias=True)
        self.switch.weight.data.fill_(0)
        self.switch.bias.data.fill_(1)
        # Learnable weight offset for the large-dilation branch (starts at 0,
        # so both branches initially share the same standardized weight).
        self.weight_diff = torch.nn.Parameter(torch.Tensor(self.weight.size()))
        self.weight_diff.data.zero_()
        # Pre / post global context: 1x1 convs applied to globally pooled features.
        self.pre_context = torch.nn.Conv2d(self.in_channels, self.in_channels, kernel_size=1, bias=True)
        self.pre_context.weight.data.fill_(0)
        self.pre_context.bias.data.fill_(0)
        self.post_context = torch.nn.Conv2d(self.out_channels, self.out_channels, kernel_size=1, bias=True)
        self.post_context.weight.data.fill_(0)
        self.post_context.bias.data.fill_(0)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = Conv.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        # pre-context: global average pool -> 1x1 conv -> broadcast add
        avg_x = torch.nn.functional.adaptive_avg_pool2d(x, output_size=1)
        avg_x = self.pre_context(avg_x)
        avg_x = avg_x.expand_as(x)
        x = x + avg_x
        # switch: 5x5 average pooling (reflect-padded) -> 1x1 conv -> S
        avg_x = torch.nn.functional.pad(x, pad=(2, 2, 2, 2), mode="reflect")
        avg_x = torch.nn.functional.avg_pool2d(avg_x, kernel_size=5, stride=1, padding=0)
        switch = self.switch(avg_x)
        # sac: small-dilation branch
        weight = self._get_weight(self.weight)
        out_s = super()._conv_forward(x, weight, None)
        ori_p = self.padding
        ori_d = self.dilation
        # triple dilation (and padding) for the large-dilation branch
        self.padding = tuple(3 * p for p in self.padding)
        self.dilation = tuple(3 * d for d in self.dilation)
        weight = weight + self.weight_diff
        out_l = super()._conv_forward(x, weight, None)
        # pixel-wise weighted combination of the two branches
        out = switch * out_s + (1 - switch) * out_l
        self.padding = ori_p
        self.dilation = ori_d
        # post-context: same global-context pattern applied to the output
        avg_x = torch.nn.functional.adaptive_avg_pool2d(out, output_size=1)
        avg_x = self.post_context(avg_x)
        avg_x = avg_x.expand_as(out)
        out = out + avg_x
        return self.act(self.bn(out))
```
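The `_get_weight` step above is the adaptive weight standardization: each output-channel filter is shifted to zero mean and scaled to unit variance before the affine gamma/beta rescaling. A dependency-free sketch of the per-filter core (the function name is ours; the variance is unbiased, matching `torch.var`'s default):

```python
import math

def standardize_filter(w, eps=1e-5):
    """Zero-mean, unit-variance standardization of one flattened filter,
    mirroring the core of ConvAWS2d._get_weight for a single output channel."""
    mean = sum(w) / len(w)
    centered = [v - mean for v in w]
    var = sum(v * v for v in centered) / (len(centered) - 1)  # unbiased, like torch.var
    std = math.sqrt(var + eps)
    return [v / std for v in centered]

# A toy 3x3 filter, flattened:
ws = standardize_filter([0.2, -0.1, 0.4, 0.3, 0.0, -0.2, 0.1, 0.5, -0.3])
print(round(sum(ws) / len(ws), 6))  # close to 0
```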
2. Adding the SAConv Convolution
2.1 STEP 1
First, create a new Python package under the ultralytics/nn directory (make sure it is a real package, i.e. a folder containing an `__init__.py`; most IDEs generate that file automatically). Note that the folder name must be a valid Python identifier, so use an underscore (e.g. `Add_module`) rather than a hyphen, since a package named with a hyphen cannot be imported. If you have already created this package while following an earlier article in this series, you can skip this step. Then create a new file `SAConv.py` inside the package and paste the complete module code from Section 1.2 into it, as shown in the figure below.
2.2 STEP 2
In the `__init__.py` created in STEP 1, import the code of the newly added module, as shown in the figure below.
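As a textual stand-in for the screenshot, the `__init__.py` line typically looks like the following (the package and file names follow the ones used in this series and may differ in your project):

```python
# ultralytics/nn/Add_module/__init__.py
from .SAConv import *
```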
2.3 STEP 3
Open `tasks.py` in the ultralytics/nn folder and add the import there, as shown in the figure below.
2.4 STEP 4
In the same `tasks.py`, locate the function `def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)` and add the code shown in the figure (if it is hard to find, use Ctrl+F to search for the function name).
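Since the screenshot is not reproduced here, this is roughly what the addition looks like. The exact membership tuple in `parse_model` varies between ultralytics versions, so treat this as a sketch; the point is that `SAConv2d` must be handled like `Conv`, so that the first YAML argument is resolved as the output-channel count:

```python
# Inside parse_model in ultralytics/nn/tasks.py (version-dependent sketch):
if m in (Conv, SAConv2d):      # add SAConv2d wherever Conv is handled
    c1, c2 = ch[f], args[0]    # infer input channels, read output channels
    args = [c1, c2, *args[1:]]
```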
3. YAML File and Running
3.1 YAML file
Below is a YAML file that inserts the SAConv convolution into the Backbone, replacing the stride-2 downsampling Conv layers. Feel free to comment layers in or out and tune the placement yourself; judge the effect by the results on your own dataset.
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, SAConv2d, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, SAConv2d, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, SAConv2d, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, SAConv2d, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
The placement above is only a reference; the best insertion point, and how much the module helps, should be judged by the results on your own dataset.
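With the YAML saved (for example as `yolo11-SAConv.yaml`, a filename of our choosing), training can be launched through the usual ultralytics API; the dataset path and hyperparameters below are placeholders:

```python
from ultralytics import YOLO

# Build the modified model from the YAML above (filename is ours).
model = YOLO("yolo11-SAConv.yaml")

# Train on your dataset; data/epochs/imgsz are placeholders.
model.train(data="your_dataset.yaml", epochs=100, imgsz=640)
```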
3.2 Screenshot of a successful run
OK, that is the complete process for adding the SAConv convolution. More improvements will be posted in follow-up articles, so stay tuned.