1. 【FLA】Attention Mechanism
1.1 【FLA】Attention Overview
The figure below shows the structure of 【FLA】. Let's briefly walk through how it works, where its advantage comes from, and how it compares with Softmax Attention.

- Softmax Attention (left)
  - Processing flow:
    - Input matrices: the query matrix Q is N×d, the key matrix Kᵀ is d×N, and the value matrix V is N×d, where N is the sequence length and d is the feature dimension.
    - Compute attention scores: the matrix product QKᵀ produces an N×N attention weight matrix.
    - Softmax normalization: the attention weights are normalized with the Softmax function.
    - Apply to the value matrix V: the normalized weights are multiplied by V, giving an N×d output.
  - Complexity: the computation costs O(N²d). The dominant cost is the QKᵀ product, which materializes an N×N attention matrix, so the cost grows rapidly with the sequence length N and long sequences become expensive.
- Linear Attention (right)
  - Processing flow:
    - Input matrices: the query matrix Q is N×d, the key matrix Kᵀ is d×N, and the value matrix V is N×d.
    - Compute KᵀV first: unlike Softmax Attention, Linear Attention first multiplies the key matrix Kᵀ by the value matrix V, producing a d×d matrix.
    - Then compute Q(KᵀV): the query matrix Q is multiplied by this d×d matrix, giving the final N×d output.
  - Complexity: Linear Attention costs O(Nd²), which removes a factor of N compared with Softmax Attention's O(N²d). The cost is no longer quadratic in the sequence length N, which makes this structure well suited to long-sequence tasks.
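To make the two multiplication orders concrete, here is a minimal sketch (not part of the original figure; the shapes and tensor names are illustrative) contrasting the N×N intermediate of Softmax Attention with the d×d intermediate of Linear Attention:

```python
import torch

N, d = 4096, 64                  # sequence length, feature dimension
Q = torch.randn(N, d)
K = torch.randn(N, d)
V = torch.randn(N, d)

# Softmax Attention order: (Q Kᵀ) V -> builds an N×N matrix, O(N²d)
attn = torch.softmax(Q @ K.T / d ** 0.5, dim=-1)   # N×N
out_softmax = attn @ V                             # N×d

# Linear Attention order: Q (Kᵀ V) -> only a d×d intermediate, O(Nd²)
kv = K.T @ V                                       # d×d
out_linear = Q @ kv                                # N×d (no softmax, so the two outputs differ)
```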
1.2 【FLA】Core Code
```python
import torch.nn as nn
import torch
from einops import rearrange


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    # Pad to 'same' shape outputs
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)
    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        return self.act(self.conv(x))


class FocusedLinearAttention(nn.Module):
    def __init__(self, dim, num_patches=64, num_heads=8, qkv_bias=True, qk_scale=None, attn_drop=0.0,
                 proj_drop=0.0, sr_ratio=1, focusing_factor=3.0, kernel_size=5):
        super().__init__()
        assert dim % num_heads == 0, f"dim {dim} should be divided by num_heads {num_heads}."
        self.dim = dim
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.q = nn.Linear(dim, dim, bias=qkv_bias)
        self.kv = nn.Linear(dim, dim * 2, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)
        self.sr_ratio = sr_ratio
        if sr_ratio > 1:
            self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
            self.norm = nn.LayerNorm(dim)
        self.focusing_factor = focusing_factor
        self.dwc = nn.Conv2d(in_channels=head_dim, out_channels=head_dim, kernel_size=kernel_size,
                             groups=head_dim, padding=kernel_size // 2)
        self.scale = nn.Parameter(torch.zeros(size=(1, 1, dim)))
        # self.positional_encoding = nn.Parameter(torch.zeros(size=(1, num_patches // (sr_ratio * sr_ratio), dim)))

    def forward(self, x):
        B, C, H, W = x.shape  # 4-D input: [batch size, channels, height, width]
        dtype, device = x.dtype, x.device
        # Reshape the input into the token layout expected by the original module
        x = rearrange(x, 'b c h w -> b (h w) c')
        q = self.q(x)
        if self.sr_ratio > 1:
            x_ = x.permute(0, 2, 1).reshape(B, C, H, W)
            x_ = self.sr(x_).reshape(B, C, -1).permute(0, 2, 1)
            x_ = self.norm(x_)
            kv = self.kv(x_).reshape(B, -1, 2, C).permute(2, 0, 1, 3)
        else:
            kv = self.kv(x).reshape(B, -1, 2, C).permute(2, 0, 1, 3)
        k, v = kv[0], kv[1]
        N = H * W  # sequence length
        # Regenerate the positional encoding for the current resolution
        positional_encoding = nn.Parameter(torch.zeros(size=(1, N, self.dim), device=device))
        k = k + positional_encoding
        focusing_factor = self.focusing_factor
        kernel_function = nn.ReLU()
        scale = nn.Softplus()(self.scale)
        q = kernel_function(q) + 1e-6
        k = kernel_function(k) + 1e-6
        q = q / scale
        k = k / scale
        q_norm = q.norm(dim=-1, keepdim=True)
        k_norm = k.norm(dim=-1, keepdim=True)
        q = q ** focusing_factor
        k = k ** focusing_factor
        q = (q / q.norm(dim=-1, keepdim=True)) * q_norm
        k = (k / k.norm(dim=-1, keepdim=True)) * k_norm
        fp16 = False
        if dtype == torch.float16:
            q = q.float()
            k = k.float()
            v = v.float()
            fp16 = True
        q, k, v = (rearrange(t, "b n (h c) -> (b h) n c", h=self.num_heads) for t in [q, k, v])
        i, j, c, d = q.shape[-2], k.shape[-2], k.shape[-1], v.shape[-1]
        z = 1 / (torch.einsum("b i c, b c -> b i", q, k.sum(dim=1)) + 1e-6)
        # Pick whichever multiplication order is cheaper for the current shapes
        if i * j * (c + d) > c * d * (i + j):
            kv = torch.einsum("b j c, b j d -> b c d", k, v)
            x = torch.einsum("b i c, b c d, b i -> b i d", q, kv, z)
        else:
            qk = torch.einsum("b i c, b j c -> b i j", q, k)
            x = torch.einsum("b i j, b j d, b i -> b i d", qk, v, z)
        if self.sr_ratio > 1:
            v = nn.functional.interpolate(v.permute(0, 2, 1), size=x.shape[1], mode='linear').permute(0, 2, 1)
        if fp16:
            v = v.to(torch.float16)
            x = x.to(torch.float16)
        num = int(v.shape[1] ** 0.5)
        feature_map = rearrange(v, "b (w h) c -> b c w h", w=num, h=num)
        feature_map = rearrange(self.dwc(feature_map), "b c w h -> b (w h) c")
        x = x + feature_map
        x = rearrange(x, "(b h) n c -> b n (h c)", h=self.num_heads)
        x = self.proj(x)
        x = self.proj_drop(x)
        x = rearrange(x, 'b (h w) c -> b c h w', h=H, w=W)
        return x
```
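After pasting the code in, a quick standalone forward pass can confirm that the module preserves the feature-map shape. This is only a sanity-check sketch appended at the bottom of FLA.py; the channel count and spatial size are arbitrary examples:

```python
if __name__ == "__main__":
    # Dummy P3-level feature map: batch 1, 256 channels, 32×32 spatial size
    x = torch.randn(1, 256, 32, 32)
    attn = FocusedLinearAttention(dim=256, num_heads=8)
    y = attn(x)
    print(y.shape)  # expected: torch.Size([1, 256, 32, 32])
```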
2. Adding the 【FLA】Attention Mechanism
2.1 STEP1
First, go to the ultralytics/nn directory and create a new Python package named Add-module (make sure it is a Python package, so that __init__.py is generated automatically when it is created). If you have already done this while following one of my earlier tutorials, you can skip this step. Then create a new file FLA.py inside the package and paste all of the attention-mechanism code above into it, as shown in the figure below.
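For reference, the resulting layout should look roughly like this (the package name follows the text above; adjust it to whatever you actually use):

```
ultralytics/
└── nn/
    └── Add-module/
        ├── __init__.py
        └── FLA.py
```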
2.2 STEP2
In the __init__.py file of the package created in STEP1, import the newly added module as shown in the figure below.
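Since the figure is not reproduced here, a minimal sketch of what that import line typically looks like, assuming the file and class names used above (note that a folder name containing a hyphen cannot be imported with normal Python import syntax, so in practice an import-safe name such as Add_module is usually chosen):

```python
# ultralytics/nn/Add-module/__init__.py
from .FLA import FocusedLinearAttention
```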
2.3 STEP3
Open the task.py file in the ultralytics/nn folder and add the import shown in the figure below.
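A sketch of the corresponding import; the module path assumes the import-safe package name mentioned above, and in recent Ultralytics versions the file is named tasks.py:

```python
# near the other module imports at the top of ultralytics/nn/tasks.py
from ultralytics.nn.Add_module import FocusedLinearAttention
```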
2.4 STEP4
In the same task.py file, locate the function def parse_model(d, ch, verbose=True): # model_dict, input_channels(3) and add the code shown in the figure (if it is hard to find, use Ctrl+F to search for it).
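The figure is not reproduced here; below is a commonly used sketch of the branch to merge into the existing if/elif chain inside parse_model. It assumes FocusedLinearAttention keeps the channel count unchanged and takes the input channel number as its first argument (dim):

```python
        # inside parse_model(), alongside the other module-specific branches
        elif m is FocusedLinearAttention:
            c2 = ch[f]          # the attention block keeps the channel count
            args = [c2, *args]  # pass the input channels as the `dim` argument
```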
3. YAML File and Running
3.1 YAML File
Below is a yaml file with the 【FLA】attention module added (in this example it is inserted after the P3 output in the head). Feel free to adjust or comment out the placement yourself; judge the effect by the results on your own dataset.
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect
# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
# [depth, width, max_channels]
n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs
# YOLO11n backbone
backbone:
# [from, repeats, module, args]
- [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
- [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
- [-1, 2, C3k2, [256, False, 0.25]]
- [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
- [-1, 2, C3k2, [512, False, 0.25]]
- [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
- [-1, 2, C3k2, [512, True]]
- [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
- [-1, 2, C3k2, [1024, True]]
- [-1, 1, SPPF, [1024, 5]] # 9
- [-1, 2, C2PSA, [1024]] # 10
# YOLO11n head
head:
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 6], 1, Concat, [1]] # cat backbone P4
- [-1, 2, C3k2, [512, False]] # 13
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 4], 1, Concat, [1]] # cat backbone P3
- [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
- [-1, 1, FocusedLinearAttention, []] # 17
- [-1, 1, Conv, [256, 3, 2]]
- [[-1, 13], 1, Concat, [1]] # cat head P4
- [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
- [-1, 1, Conv, [512, 3, 2]]
- [[-1, 10], 1, Concat, [1]] # cat head P5
- [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
- [[17, 20, 23], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
The insertion point above is only a reference; the best position for the module and its actual effect depend on your own dataset.
3.2 Screenshot of a Successful Run
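For reference, a minimal launch sketch; the yaml file name yolo11-FLA.yaml and the training arguments are assumptions, so substitute your own paths and settings:

```python
from ultralytics import YOLO

# Build the modified model from the yaml above (file name is an example)
model = YOLO("yolo11-FLA.yaml")

# Train on your own dataset yaml; the arguments shown are only examples
model.train(data="coco128.yaml", epochs=100, imgsz=640)
```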
OK, that is the full process of adding the 【FLA】attention mechanism. More updates will follow, so stay tuned.