Daily Attention Learning 3: Cross-level Feature Fusion

Module Source

[link] [code] [PR 23] Cross-level Feature Aggregation Network for Polyp Segmentation


Module Name

Cross-level Feature Fusion (CFF)


Module Purpose

Fusion of features from two adjacent levels


Module Structure

Module Code
python
import torch
import torch.nn as nn


# Conv + BatchNorm block; self.relu is defined but not applied in forward()
class BasicConv2d(nn.Module):
    def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_planes, out_planes,
                              kernel_size=kernel_size, stride=stride,
                              padding=padding, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(out_planes)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        return x
    

class CFF(nn.Module):
    def __init__(self, in_channel1, in_channel2, out_channel):
        super(CFF, self).__init__()
        act_fn = nn.ReLU(inplace=True)

        # 1x1 convolutions project both inputs to out_channel // 2
        self.layer0 = BasicConv2d(in_channel1, out_channel // 2, 1)
        self.layer1 = BasicConv2d(in_channel2, out_channel // 2, 1)

        # two rounds of cross fusion with 3x3 and 5x5 kernels
        self.layer3_1 = nn.Sequential(
            nn.Conv2d(out_channel, out_channel // 2, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_channel // 2), act_fn)
        self.layer3_2 = nn.Sequential(
            nn.Conv2d(out_channel, out_channel // 2, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_channel // 2), act_fn)

        self.layer5_1 = nn.Sequential(
            nn.Conv2d(out_channel, out_channel // 2, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(out_channel // 2), act_fn)
        self.layer5_2 = nn.Sequential(
            nn.Conv2d(out_channel, out_channel // 2, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(out_channel // 2), act_fn)

        self.layer_out = nn.Sequential(
            nn.Conv2d(out_channel // 2, out_channel, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_channel), act_fn)

    def forward(self, x0, x1):
        # project both inputs to the same channel width
        x0_1 = self.layer0(x0)
        x1_1 = self.layer1(x1)
        # first cross fusion: concatenate in both orders, 3x3 and 5x5 branches
        x_3_1 = self.layer3_1(torch.cat((x0_1, x1_1), dim=1))
        x_5_1 = self.layer5_1(torch.cat((x1_1, x0_1), dim=1))
        # second cross fusion between the 3x3 and 5x5 branches
        x_3_2 = self.layer3_2(torch.cat((x_3_1, x_5_1), dim=1))
        x_5_2 = self.layer5_2(torch.cat((x_5_1, x_3_1), dim=1))
        # element-wise product of the two branches, plus both projected inputs
        out = self.layer_out(x0_1 + x1_1 + torch.mul(x_3_2, x_5_2))
        return out
    
if __name__ == '__main__':
    x1 = torch.randn([1, 256, 16, 16])
    x2 = torch.randn([1, 512, 16, 16])
    cff = CFF(in_channel1=256, in_channel2=512, out_channel=64)
    out = cff(x1, x2)
    print(out.shape)  # 1, 64, 16, 16

Original Description

The feature extraction network produces multi-level features at different resolutions. Effectively integrating these multi-level features is therefore important, as it improves the representation ability of features at different scales. To this end, we propose a CFF module to fuse two adjacent-level features before feeding them into the segmentation network.
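
The demo above feeds CFF two tensors that already share the same spatial size. Below is a minimal wiring sketch for fusing two adjacent backbone levels whose resolutions differ; the channel counts (512/1024), feature shapes, and the bilinear upsampling step are illustrative assumptions rather than the paper's exact configuration, and the sketch reuses the CFF class defined above.

python
import torch
import torch.nn.functional as F

# Hypothetical adjacent-level backbone features (shapes are assumptions)
f_low  = torch.randn(1, 512, 32, 32)    # shallower level, higher resolution
f_high = torch.randn(1, 1024, 16, 16)   # deeper level, lower resolution

# CFF concatenates its two inputs channel-wise, so their H x W must match;
# upsample the deeper feature to the shallower feature's spatial size first.
f_high_up = F.interpolate(f_high, size=f_low.shape[2:], mode='bilinear', align_corners=False)

cff = CFF(in_channel1=512, in_channel2=1024, out_channel=64)
fused = cff(f_low, f_high_up)
print(fused.shape)  # torch.Size([1, 64, 32, 32])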
