Table of Contents
- [Deep Residual Learning for Image Recognition(CVPR2016)](#Deep Residual Learning for Image Recognition(CVPR2016))
- [Densely Connected Convolutional Networks(CVPR2017)](#Densely Connected Convolutional Networks(CVPR2017))
- [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks(ICML2019)](#EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks(ICML2019))
- [Res2Net: A New Multi-scale Backbone Architecture](#Res2Net: A New Multi-scale Backbone Architecture)
- [Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation](#Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation)
- [Contrastive Learning of Medical Visual Representations from Paired Images and Text](#Contrastive Learning of Medical Visual Representations from Paired Images and Text)
- [RegNet: Self-Regulated Network for Image Classification](#RegNet: Self-Regulated Network for Image Classification)
- [Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification(ICCV2021)](#Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification(ICCV2021))
- [Attention Gated Networks:Learning to Leverage Salient Regions in Medical Images](#Attention Gated Networks:Learning to Leverage Salient Regions in Medical Images)
- [Tensor Networks for Medical Image Classification(MIDL2020)](#Tensor Networks for Medical Image Classification(MIDL2020))
- [SKID: Self-Supervised Learning for Knee Injury Diagnosis from MRI Data](#SKID: Self-Supervised Learning for Knee Injury Diagnosis from MRI Data)
- [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](#MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications)
- [MobileNetV2: Inverted Residuals and Linear Bottlenecks(CVPR2018)](#MobileNetV2: Inverted Residuals and Linear Bottlenecks(CVPR2018))
- [VIT:An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale(ICLR2021)](#VIT:An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale(ICLR2021))
- [CSPNet: A New Backbone that can Enhance Learning Capability of CNN](#CSPNet: A New Backbone that can Enhance Learning Capability of CNN)
- [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](#Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization)
- [SIMCLR:A Simple Framework for Contrastive Learning of Visual Representations](#SIMCLR:A Simple Framework for Contrastive Learning of Visual Representations)
- [Going Deeper with Convolutions](#Going Deeper with Convolutions)
- [Squeeze-and-Excitation Networks](#Squeeze-and-Excitation Networks)
Deep Residual Learning for Image Recognition(CVPR2016)
Method
The classic ResNet: residual (identity shortcut) connections let the network grow much deeper while remaining trainable.
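A minimal PyTorch sketch of a basic residual block, assuming equal input/output channels and an identity shortcut; the layer sizes are illustrative, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = F(x) + x, where F is two 3x3 conv layers (illustrative sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut eases optimization of deep nets

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```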
Densely Connected Convolutional Networks(CVPR2017)
Method
Every layer is connected to all the others inside a dense block: each layer takes the channel-wise concatenation of all preceding feature maps as its input.
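A minimal PyTorch sketch of a dense block, assuming BN-ReLU-Conv layers and a fixed growth rate; the channel sizes are illustrative.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One layer of a dense block: sees the concatenation of all previous features."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, 3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # dense connectivity: feed the concatenation of everything so far
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

x = torch.randn(1, 64, 32, 32)
print(DenseBlock(64, growth_rate=32, num_layers=4)(x).shape)  # [1, 192, 32, 32]
```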
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks(ICML2019)
Method
Essentially compound scaling: starting from a small baseline network, it finds the best joint scaling of depth, width, and input resolution under a given parameter/FLOPs budget.
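A small sketch of the compound scaling rule: depth, width and resolution are scaled by α^φ, β^φ, γ^φ with α·β²·γ² ≈ 2, so FLOPs grow roughly by 2^φ. The α, β, γ values below are the ones reported for the EfficientNet baseline; the helper name and rounding are illustrative.

```python
import math

# Compound scaling (sketch): depth ~ alpha**phi, width ~ beta**phi,
# resolution ~ gamma**phi, with alpha * beta**2 * gamma**2 ≈ 2.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # constants reported for the baseline

def scale_config(base_depth, base_width, base_resolution, phi):
    depth = math.ceil(base_depth * ALPHA ** phi)
    width = int(round(base_width * BETA ** phi))
    resolution = int(round(base_resolution * GAMMA ** phi))
    return depth, width, resolution

for phi in range(4):
    print(phi, scale_config(base_depth=3, base_width=32, base_resolution=224, phi=phi))
```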
Res2Net: A New Multi-scale Backbone Architecture
Method
Essentially multi-scale features within a single block: the 3×3 convolution in a bottleneck is replaced by hierarchical convolutions over channel splits, so different splits see different receptive-field sizes.
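A simplified PyTorch sketch of the hierarchical split-and-add structure, omitting the 1×1 convolutions that wrap the full bottleneck; names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class Res2Split(nn.Module):
    """Hierarchical multi-scale convolutions over channel splits (simplified)."""
    def __init__(self, channels, scales=4):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        # one 3x3 conv per split except the first, which is passed through untouched
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, 3, padding=1, bias=False) for _ in range(scales - 1)
        )

    def forward(self, x):
        splits = torch.chunk(x, self.scales, dim=1)
        outs = [splits[0]]
        prev = None
        for i, conv in enumerate(self.convs):
            inp = splits[i + 1] if prev is None else splits[i + 1] + prev
            prev = conv(inp)            # effective receptive field grows per split
            outs.append(prev)
        return torch.cat(outs, dim=1)

x = torch.randn(1, 64, 32, 32)
print(Res2Split(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```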
Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation
Method
If I am not mistaken, the unit keeps feeding the result of earlier steps back in, similar in spirit to an RNN: the same convolution is applied several times with the block input added back at each step, and a residual shortcut wraps the whole unit.
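A rough PyTorch sketch of such a recurrent residual convolutional unit, under the reading above; the number of recurrence steps and the layer layout are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """Apply the same conv t times, re-injecting the block input each step (sketch)."""
    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(out + x)   # recurrence: current state plus original input
        return out

class RRBlock(nn.Module):
    """Recurrent residual unit: two recurrent convs with a shortcut around them."""
    def __init__(self, channels, t=2):
        super().__init__()
        self.body = nn.Sequential(RecurrentConv(channels, t), RecurrentConv(channels, t))

    def forward(self, x):
        return x + self.body(x)

x = torch.randn(1, 32, 64, 64)
print(RRBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```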
Contrastive Learning of Medical Visual Representations from Paired Images and Text
Method of this paper
RegNet: Self-Regulated Network for Image Classification
Method of this paper
A method worth borrowing.
Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification(ICCV2021)
Method
Essentially optimization that directly targets AUC through a surrogate loss rather than per-sample accuracy; I won't walk through the derivation here, since it is not simple.
Code link
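Since the note above only names the idea, here is a toy illustration of what "optimizing AUC" means: a generic pairwise squared-hinge surrogate that rewards ranking positives above negatives. This is not the paper's min-max margin loss; the function name and margin value are made up for the example.

```python
import torch

def pairwise_auc_surrogate(scores, labels, margin=1.0):
    """Generic squared-hinge surrogate for AUC (not the paper's exact loss).

    Penalizes every (positive, negative) pair whose score gap is below `margin`,
    so minimizing it pushes positives to be ranked above negatives.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)      # all positive/negative score gaps
    return torch.clamp(margin - diff, min=0).pow(2).mean()

scores = torch.tensor([2.0, 0.3, -0.5, 1.2])
labels = torch.tensor([1, 0, 0, 1])
print(pairwise_auc_surrogate(scores, labels))
```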
Attention Gated Networks:Learning to Leverage Salient Regions in Medical Images
Method of this paper
Essentially it computes an attention coefficient map from two feature maps (the skip-connection features and a coarser gating signal) and uses that coefficient to re-weight the skip features.
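A simplified PyTorch sketch of an additive attention gate; it assumes the two feature maps already share the same spatial size (the full method resamples the gating signal), and the channel numbers are illustrative.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate over two feature maps (simplified sketch)."""
    def __init__(self, x_channels, g_channels, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(x_channels, inter_channels, 1, bias=False)
        self.phi_g = nn.Conv2d(g_channels, inter_channels, 1, bias=False)
        self.psi = nn.Conv2d(inter_channels, 1, 1)

    def forward(self, x, g):
        # attention coefficients in [0, 1], computed from BOTH feature maps
        alpha = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * alpha   # re-weight the skip features by the attention map

x = torch.randn(1, 64, 32, 32)   # skip-connection features
g = torch.randn(1, 128, 32, 32)  # gating signal (assumed already resized)
print(AttentionGate(64, 128, 32)(x, g).shape)  # torch.Size([1, 64, 32, 32])
```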
Tensor Networks for Medical Image Classification(MIDL2020)
Method
The classifier operates on the image as a tensor, using tensor-network contractions in place of the usual convolutional pipeline.
SKID: Self-Supervised Learning for Knee Injury Diagnosis from MRI Data
Method
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Method
Uses depthwise separable convolutions (a depthwise convolution followed by a 1×1 pointwise convolution) to cut the parameter count and computation.
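A minimal PyTorch sketch of a depthwise separable convolution, plus a quick parameter comparison against a standard 3×3 convolution; the channel sizes are illustrative.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv (one filter per channel) + 1x1 pointwise conv."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, 3, stride=stride,
                                   padding=1, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))

# Parameter comparison against a standard 3x3 convolution:
std = nn.Conv2d(64, 128, 3, padding=1, bias=False)
sep = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))  # the separable version is several times smaller
```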
MobileNetV2: Inverted Residuals and Linear Bottlenecks(CVPR2018)
Method
Compared with v1, the parameter count drops further and residual connections are added: inverted residual blocks that expand the channels, apply a depthwise convolution, and project back through a linear bottleneck.
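A minimal PyTorch sketch of an inverted residual block with a linear bottleneck, assuming stride 1 and equal input/output channels so the shortcut applies; the expansion ratio is illustrative.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Expand with 1x1 -> depthwise 3x3 -> project with a LINEAR 1x1 bottleneck."""
    def __init__(self, channels, expand_ratio=6):
        super().__init__()
        hidden = channels * expand_ratio
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),           # expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),                 # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),           # linear projection
            nn.BatchNorm2d(channels),                             # no activation here
        )

    def forward(self, x):
        return x + self.block(x)   # residual connects the narrow bottlenecks

x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32)(x).shape)  # torch.Size([1, 32, 56, 56])
```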
VIT:An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale(ICLR2021)
Method
Comes from NLP and is not very complicated; once you understand the attention computation you basically understand the model (the image is simply cut into 16×16 patches that serve as tokens).
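A tiny sketch of the scaled dot-product attention at the core of the model; the token count and head dimension below are illustrative (in ViT the tokens are flattened 16×16 patches plus a class token).

```python
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (..., tokens, tokens)
    return torch.softmax(scores, dim=-1) @ v

# Illustrative shapes: 1 image, 197 tokens (196 patches + class token), 64-dim head.
q = k = v = torch.randn(1, 197, 64)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 197, 64])
```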
CSPNet: A New Backbone that can Enhance Learning Capability of CNN
Method
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Method of this paper
Essentially produces an interpretability result from gradients: the gradients of the target class score with respect to a convolutional feature map are pooled into per-channel weights, and the weighted feature map gives a class-discriminative heatmap.
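A compact PyTorch sketch of that computation; it hooks a torchvision ResNet-18 layer purely for illustration and omits the upsampling/overlay step used for visualization.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, feature_layer, image, class_idx):
    """Weight a conv feature map by gradients of the class score (sketch)."""
    feats = {}

    def hook(module, inputs, output):
        feats["act"] = output
        output.register_hook(lambda g: feats.update(grad=g))  # grad w.r.t. activation

    handle = feature_layer.register_forward_hook(hook)
    score = model(image)[0, class_idx]
    model.zero_grad()
    score.backward()
    handle.remove()

    weights = feats["grad"].mean(dim=(2, 3), keepdim=True)     # pooled gradients
    cam = F.relu((weights * feats["act"]).sum(dim=1))          # weighted feature maps
    return cam / (cam.max() + 1e-8)                            # normalized heatmap

model = models.resnet18(weights=None).eval()
image = torch.randn(1, 3, 224, 224)
print(grad_cam(model, model.layer4, image, class_idx=0).shape)  # torch.Size([1, 7, 7])
```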
SIMCLR:A Simple Framework for Contrastive Learning of Visual Representations
Method of this paper
Two different augmentations of the same image are encoded, and a contrastive loss pulls the two resulting representations together while pushing them away from the representations of other images in the batch.
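A simplified sketch of an NT-Xent-style contrastive loss over a batch of paired augmentations; the batch size, embedding dimension, and temperature below are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views (simplified sketch).

    z1[i] and z2[i] embed two augmentations of the same image; the matching view
    is the positive, every other embedding in the batch acts as a negative.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N x d, unit norm
    sim = z @ z.t() / temperature                         # cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-pairs
    n = z1.shape[0]
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(8, 128)   # projections of augmentation #1
z2 = torch.randn(8, 128)   # projections of augmentation #2
print(nt_xent_loss(z1, z2))
```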
Going Deeper with Convolutions
Method of this paper
Squeeze-and-Excitation Networks
Method
The SE module: squeeze the feature map with global average pooling, excite it with a small bottleneck MLP, and use the resulting per-channel weights to rescale the channels.
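A minimal PyTorch sketch of an SE block; the reduction ratio of 16 matches the commonly used default, and the other sizes are illustrative.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze (global average pool) -> excite (bottleneck MLP) -> rescale channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))        # per-channel weights in [0, 1]
        return x * w.view(b, c, 1, 1)          # channel-wise re-weighting

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```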