YOLO-World Code Walkthrough

MaxSigmoidAttnBlock

This block processes the image and text features separately, computes the similarity between the two, and takes the maximum similarity over all words in the sentence as an attention weight applied to the image features.

python
import torch
import torch.nn as nn

from ultralytics.nn.modules import Conv  # standard ultralytics Conv block (Conv2d + BN + SiLU)


class MaxSigmoidAttnBlock(nn.Module):
    """Max Sigmoid attention block."""

    def __init__(self, c1, c2, nh=1, ec=128, gc=512, scale=False):
        """Initializes MaxSigmoidAttnBlock with specified arguments."""
        super().__init__()
        self.nh = nh
        self.hc = c2 // nh
        self.ec = Conv(c1, ec, k=1, act=False) if c1 != ec else None
        self.gl = nn.Linear(gc, ec)
        self.bias = nn.Parameter(torch.zeros(nh))
        self.proj_conv = Conv(c1, c2, k=3, s=1, act=False)
        self.scale = nn.Parameter(torch.ones(1, nh, 1, 1)) if scale else 1.0

    def forward(self, x, guide):
        """Forward process."""
        bs, _, h, w = x.shape

        guide = self.gl(guide)
        guide = guide.view(bs, -1, self.nh, self.hc)
        embed = self.ec(x) if self.ec is not None else x
        embed = embed.view(bs, self.nh, self.hc, h, w)

        aw = torch.einsum("bmchw,bnmc->bmhwn", embed, guide)  # cross-attention between image and text features
        aw = aw.max(dim=-1)[0]  # max similarity over all words becomes the attention value applied to the image features
        aw = aw / (self.hc**0.5)
        aw = aw + self.bias[None, :, None, None]
        aw = aw.sigmoid() * self.scale

        x = self.proj_conv(x)
        x = x.view(bs, self.nh, -1, h, w)
        x = x * aw.unsqueeze(2)
        return x.view(bs, -1, h, w)
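
To make the tensor shapes concrete, here is a minimal usage sketch. The channel sizes, grid size, and number of text tokens below are made up for illustration; they are chosen so that ec equals nh * (c2 // nh), which the reshape of guide requires.

python
import torch

# Hypothetical sizes: 64-channel image features on an 80x80 grid,
# and 4 word embeddings of dimension 512 (e.g. from a CLIP text encoder).
attn = MaxSigmoidAttnBlock(c1=64, c2=128, nh=4, ec=128, gc=512)
img_feat = torch.randn(2, 64, 80, 80)  # (batch, c1, h, w)
txt_feat = torch.randn(2, 4, 512)      # (batch, num_words, gc)
out = attn(img_feat, txt_feat)
print(out.shape)                       # torch.Size([2, 128, 80, 80])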

WorldDetect

It is essentially the same as Detect; the main difference is that the classification branch passes the image features through BNContrastiveHead (or ContrastiveHead), which computes the similarity between image features and text features to obtain per-grid confidence scores for every class.

python
import torch
import torch.nn as nn

from ultralytics.nn.modules import BNContrastiveHead, ContrastiveHead, Conv, Detect


class WorldDetect(Detect):
    def __init__(self, nc=80, embed=512, with_bn=False, ch=()):
        """Initialize YOLOv8 detection layer with nc classes and layer channels ch."""
        super().__init__(nc, ch)
        c3 = max(ch[0], min(self.nc, 100))
        self.cv3 = nn.ModuleList(nn.Sequential(Conv(x, c3, 3), Conv(c3, c3, 3), nn.Conv2d(c3, embed, 1)) for x in ch)
        self.cv4 = nn.ModuleList(BNContrastiveHead(embed) if with_bn else ContrastiveHead() for _ in ch)

    def forward(self, x, text):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv4[i](self.cv3[i](x[i]), text)), 1)
            # cv4 produces the per-class scores from the text embeddings, e.g. [512, 80, 80] x [4, 512] -> [4, 80, 80]
            # (the rest of Detect's box decoding is omitted in this excerpt)
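
The similarity computation itself happens inside the contrastive head, which is not reproduced above. Below is a minimal sketch of the idea using a hypothetical SimpleContrastiveHead (not the exact ultralytics ContrastiveHead/BNContrastiveHead implementation): both features are L2-normalized, a per-grid dot product gives cosine similarities, and a learned scale and bias turn them into class logits.

python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleContrastiveHead(nn.Module):
    """Sketch of a region-text contrastive head (illustrative, not the library code)."""

    def __init__(self):
        super().__init__()
        self.bias = nn.Parameter(torch.tensor([-10.0]))   # negative bias keeps initial scores low
        self.logit_scale = nn.Parameter(torch.ones([]))   # learnable temperature

    def forward(self, x, w):
        """x: image features (b, c, h, w); w: text embeddings (b, k, c) -> logits (b, k, h, w)."""
        x = F.normalize(x, dim=1, p=2)
        w = F.normalize(w, dim=-1, p=2)
        sim = torch.einsum("bchw,bkc->bkhw", x, w)  # cosine similarity per grid cell and class
        return sim * self.logit_scale.exp() + self.bias


# Matching the shapes in the comment above: 512-dim embeddings, 4 classes, an 80x80 grid.
head = SimpleContrastiveHead()
logits = head(torch.randn(1, 512, 80, 80), torch.randn(1, 4, 512))
print(logits.shape)  # torch.Size([1, 4, 80, 80])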