Table of Contents

- Benefits of Adding the CBAM Attention Mechanism to YOLOv8
  - Feature Enhancement and Selection
    - Channel Attention
    - Spatial Attention
  - Improved Model Performance
  - Computational Efficiency
- Steps to Add CBAM to YOLOv8
  - CBAM Code
    - (1) Add 'CBAM' to __all__ in __init__.py and conv.py
    - (2) Paste the CBAM code into conv.py
    - (3) Modify tasks.py
  - Adding CBAM to the yolov8.yaml File
    - yolov8.yaml
    - yolov8.yaml with CBAM
Benefits of Adding the CBAM Attention Mechanism to YOLOv8
Feature Enhancement and Selection
Channel Attention
Highlighting important feature channels: CBAM helps the model automatically learn importance weights for the different feature channels. In object detection, certain channels carry key information about the target, such as color or texture. The channel-attention module strengthens the representation of these channels so the model concentrates on the features that matter for detection, improving accuracy. For example, when detecting vehicles, channels carrying a vehicle's distinctive color are up-weighted, helping the model recognize it.
Suppressing irrelevant channels: it also down-weights channels that contribute little to the current detection task, reducing the influence of noise and distracting information. This matters most in cluttered scenes, where it keeps the model from being misled by background or other irrelevant content and improves robustness.
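Concretely, in the lightweight Ultralytics implementation shown later in this post, channel attention gates each channel with a sigmoid weight computed from globally average-pooled features:

$$
M_c(F) = \sigma\!\left(f^{1\times 1}\big(\mathrm{AvgPool}(F)\big)\right), \qquad F' = M_c(F) \otimes F
$$

where $\sigma$ is the sigmoid function, $f^{1\times 1}$ is a 1×1 convolution, and $\otimes$ denotes channel-wise multiplication. (The original CBAM paper additionally feeds max-pooled features through a shared MLP; the Ultralytics variant keeps only average pooling.)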
Spatial Attention
Focusing on target locations: the spatial-attention module lets the model weigh the importance of different positions in the feature map. In object detection, it highlights the regions where targets sit, so the model localizes them more accurately. For example, when detecting a specific individual in a crowd, spatial attention concentrates on that individual's region and suppresses interference from the surrounding crowd.
Adapting to target shape and size: for targets of varying shapes and sizes, spatial attention adaptively adjusts the attended region. Whether the target is small or large, it raises the model's attention on the target and helps detection precision.
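Correspondingly, spatial attention (as implemented below) pools across the channel dimension with both mean and max, stacks the two maps, and gates every spatial position:

$$
M_s(F') = \sigma\!\left(f^{7\times 7}\big(\big[\mathrm{AvgPool}_c(F');\ \mathrm{MaxPool}_c(F')\big]\big)\right), \qquad F'' = M_s(F') \otimes F'
$$

where $f^{7\times 7}$ is a 7×7 convolution and $F'$ is the channel-refined feature map from the previous step.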
Improved Model Performance
Accuracy gains: by emphasizing important feature information, CBAM helps YOLOv8 recognize and localize targets more accurately. Experiments and practical deployments have reported accuracy improvements after adding CBAM, though the size of the gain depends on the dataset and task.
Better generalization: the model learns the key features in the data rather than over-fitting a particular data distribution, so it tends to hold up better on new, previously unseen scenes or data.
Computational Efficiency
Feature filtering concentrates computation on what matters: while enhancing useful features, CBAM also acts as a soft feature filter, so the model spends its capacity on important information rather than irrelevant detail. Strictly speaking, the attention blocks add a small amount of computation rather than removing any; the gain is in how effectively the existing computation is used, which is especially relevant for large-scale images and real-time detection.
Complementary to the YOLOv8 architecture: CBAM is structurally simple and lightweight, and fits naturally into YOLOv8's network design. Adding it imposes little extra computational burden, so performance can improve without a significant increase in model complexity, as the quick parameter count below illustrates.
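As a rough sanity check on the "lightweight" claim, you can count the parameters of a single CBAM block (using the CBAM class defined in the next section). For 256 input channels it comes to about 66K parameters, negligible next to the millions of parameters in any YOLOv8 variant:

```python
# Parameter count for one CBAM block (CBAM as defined in the code below).
cbam = CBAM(256)
print(sum(p.numel() for p in cbam.parameters()))
# 65890 = 256*256 + 256 (1x1 channel fc with bias) + 2*7*7 (7x7 spatial conv)
```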
Steps to Add CBAM to YOLOv8
CBAM Code
(1) Add 'CBAM' to __all__ in both __init__.py and conv.py
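For reference, in ultralytics/nn/modules/conv.py the export tuple looks roughly like this (abridged here; keep all existing entries and just append "CBAM"), and the same name must also be listed in ultralytics/nn/modules/__init__.py:

```python
# ultralytics/nn/modules/conv.py -- abridged __all__; append "CBAM" to the existing tuple
__all__ = ("Conv", "LightConv", "DWConv", "ChannelAttention", "SpatialAttention", "CBAM")
```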
(2) Paste the CBAM code into conv.py (ultralytics/nn/modules/conv.py)
```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel-attention module https://github.com/open-mmlab/mmdetection/tree/v3.0.0rc1/configs/rtmdet."""

    def __init__(self, channels: int) -> None:
        """Initialize the channel-attention layers for the given number of channels."""
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling to a 1x1 map per channel
        self.fc = nn.Conv2d(channels, channels, 1, 1, 0, bias=True)  # 1x1 conv acts as a per-channel fc
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Scale each channel of x by its sigmoid-activated attention weight."""
        return x * self.act(self.fc(self.pool(x)))


class SpatialAttention(nn.Module):
    """Spatial-attention module."""

    def __init__(self, kernel_size=7):
        """Initialize spatial-attention module with kernel size argument."""
        super().__init__()
        assert kernel_size in (3, 7), "kernel size must be 3 or 7"
        padding = 3 if kernel_size == 7 else 1
        self.cv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.act = nn.Sigmoid()

    def forward(self, x):
        """Scale each spatial position of x by attention computed from the channel-wise mean and max."""
        return x * self.act(self.cv1(torch.cat([torch.mean(x, 1, keepdim=True), torch.max(x, 1, keepdim=True)[0]], 1)))


class CBAM(nn.Module):
    """Convolutional Block Attention Module."""

    def __init__(self, c1, kernel_size=7):
        """Initialize CBAM with given input channels (c1) and spatial kernel size."""
        super().__init__()
        self.channel_attention = ChannelAttention(c1)
        self.spatial_attention = SpatialAttention(kernel_size)

    def forward(self, x):
        """Apply channel attention, then spatial attention, to the input."""
        return self.spatial_attention(self.channel_attention(x))
```
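A quick way to verify the module is wired up correctly is to push a dummy tensor through it; CBAM preserves the input shape:

```python
# Sanity check: CBAM is shape-preserving.
x = torch.randn(1, 64, 32, 32)
print(CBAM(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```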
(3) Modify the tasks.py file (ultralytics/nn/tasks.py)
First, import the newly added CBAM module at the top of tasks.py, as sketched below:
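The exact import list in tasks.py differs between Ultralytics versions; the point is simply that CBAM must appear among the names imported from ultralytics.nn.modules, roughly:

```python
# ultralytics/nn/tasks.py -- abridged; add CBAM to the existing modules import
from ultralytics.nn.modules import (C2f, Concat, Conv, Detect, SPPF, CBAM)
```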
Then, add a branch to the parse_model function so that the channel arguments for CBAM are resolved correctly:
```python
elif m is CBAM:
    c1, c2 = ch[f], args[0]  # input channels from the previous layer, nominal output channels from yaml
    if c2 != nc:  # scale the nominal channel count by the width multiple, as for other modules
        c2 = make_divisible(min(c2, max_channels) * width, 8)
    args = [c1, *args[1:]]  # CBAM(c1, kernel_size); CBAM preserves channels, so c2 equals c1 here
```
Adding CBAM to the yolov8.yaml File
yolov8.yaml
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 21 (P5/32-large)
  - [[15, 18, 21], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
yolov8.yaml with CBAM
Adding CBAM to yolov8.yaml is straightforward: insert a CBAM entry after each of the three output C2f blocks in the head:
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 15 (P3/8-small)
  - [-1, 1, CBAM, [256]] # 16
  - [-1, 1, Conv, [256, 3, 2]] # 17
  - [[-1, 12], 1, Concat, [1]] # 18 cat head P4
  - [-1, 3, C2f, [512]] # 19 (P4/16-medium)
  - [-1, 1, CBAM, [512]] # 20
  - [-1, 1, Conv, [512, 3, 2]] # 21
  - [[-1, 9], 1, Concat, [1]] # 22 cat head P5
  - [-1, 3, C2f, [1024]] # 23 (P5/32-large)
  - [-1, 1, CBAM, [1024]] # 24
  - [[16, 20, 24], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
Comparing the two yaml files: before the change, the C2f modules in the Neck are not followed by any attention; after the change, a CBAM block is inserted after each of the last three C2f modules. Every insertion shifts the indices of all subsequent layers, so the inputs to the final Detect layer must be updated from [15, 18, 21] to [16, 20, 24]: one, two, and three CBAM layers are inserted before the P3, P4, and P5 outputs respectively, and Detect should consume the attention-refined CBAM outputs.
Sample run: building the model from the modified yaml should print a layer summary in which the three CBAM blocks appear in the head. A minimal check, assuming the modified config is saved as yolov8-CBAM.yaml:
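```python
from ultralytics import YOLO

# "yolov8-CBAM.yaml" is an assumed filename for the modified config above.
model = YOLO("yolov8-CBAM.yaml")
model.info()  # prints the layer table; CBAM should appear at indices 16, 20 and 24
```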