8.16 Model Notes

Table of Contents

  • [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation(ECCV2018)](#Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation(ECCV2018))
  • [Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning(2016)](#Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning(2016))
  • [Wide Residual Networks(2017)](#Wide Residual Networks(2017))
  • [mixup: Beyond Empirical Risk Minimization(ICLR2018)](#mixup: Beyond Empirical Risk Minimization(ICLR2018))
  • [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](#Swin Transformer: Hierarchical Vision Transformer using Shifted Windows)
  • [Pyramid Scene Parsing Network(2017)](#Pyramid Scene Parsing Network(2017))
  • [Searching for MobileNetV3(2019)](#Searching for MobileNetV3(2019))
  • [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size(2016)](#SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size(2016))
  • [Identity Mappings in Deep Residual Networks(2016)](#Identity Mappings in Deep Residual Networks(2016))
  • [Aggregated Residual Transformations for Deep Neural Networks](#Aggregated Residual Transformations for Deep Neural Networks)
  • [MLP-Mixer: An all-MLP Architecture for Vision(2021)](#MLP-Mixer: An all-MLP Architecture for Vision(2021))
  • [MOCO:Momentum Contrast for Unsupervised Visual Representation Learning](#MOCO:Momentum Contrast for Unsupervised Visual Representation Learning)
  • [A ConvNet for the 2020s](#A ConvNet for the 2020s)
  • [MAE:Masked Autoencoders Are Scalable Vision Learners](#MAE:Masked Autoencoders Are Scalable Vision Learners)
  • [Xception: Deep Learning with Depthwise Separable Convolutions](#Xception: Deep Learning with Depthwise Separable Convolutions)
  • [CLIP:Learning Transferable Visual Models From Natural Language Supervision](#CLIP:Learning Transferable Visual Models From Natural Language Supervision)
  • [ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](#ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices)
  • [ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design](#ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design)
  • [ResNeSt: Split-Attention Networks](#ResNeSt: Split-Attention Networks)

Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation(ECCV2018)

Method

Code link

DeepLabV3+ architecture
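
The "atrous separable convolution" in the title combines a dilated depthwise convolution with a 1x1 pointwise convolution. A minimal PyTorch sketch of that building block, assuming a 3x3 kernel and an illustrative dilation rate (not the official implementation):

```python
import torch
import torch.nn as nn

class AtrousSeparableConv(nn.Module):
    """Dilated depthwise conv followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size=3, padding=dilation,
            dilation=dilation, groups=in_ch, bias=False)   # per-channel atrous conv
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 64, 65, 65)
print(AtrousSeparableConv(64, 128, dilation=6)(x).shape)  # torch.Size([1, 128, 65, 65])
```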

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning(2016)

Method

Wide Residual Networks(2017)

Method

Code link

My impression is that not much really changes from the original ResNet; the residual blocks are simply made wider by a widening factor k.
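
A rough sketch of that one knob, the widening factor k that scales the channel count of every stage (block layout simplified, names are my own, dropout omitted):

```python
import torch
import torch.nn as nn

def wrn_widths(k, base=(16, 32, 64)):
    """Channel widths of the three WRN stages for widening factor k."""
    return [c * k for c in base]

class WideBasicBlock(nn.Module):
    """BN-ReLU-Conv basic block, just with k-times wider channels."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride, bias=False)
                         if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out + self.shortcut(x)

print(wrn_widths(k=10))                                          # [160, 320, 640], e.g. WRN-28-10
print(WideBasicBlock(160, 160)(torch.randn(1, 160, 8, 8)).shape)
```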

mixup: Beyond Empirical Risk Minimization(ICLR2018)

Method

The main things to look at in the code are lam and alpha.
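
lam is drawn from a Beta(alpha, alpha) distribution and used to interpolate both the inputs and the targets; a minimal sketch of one training step (variable names follow the common reference code, not necessarily the code this note refers to):

```python
import numpy as np
import torch

def mixup_data(x, y, alpha=1.0):
    """Mix a batch with a shuffled copy of itself; lam ~ Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0))
    mixed_x = lam * x + (1 - lam) * x[index]
    return mixed_x, y, y[index], lam

def mixup_criterion(criterion, pred, y_a, y_b, lam):
    """Loss is the same convex combination of the two original targets."""
    return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
mixed_x, y_a, y_b, lam = mixup_data(x, y, alpha=0.2)
print(mixed_x.shape, lam)
```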

Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

Method

Essentially a shifted-window version of ViT.
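
The "shifted window" part boils down to cyclically shifting the feature map with torch.roll and then partitioning it into non-overlapping windows before attention; a rough sketch of just that bookkeeping (window size and tensor layout are my assumptions):

```python
import torch

def window_partition(x, ws):
    """(B, H, W, C) -> (num_windows*B, ws*ws, C) non-overlapping windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

B, H, W, C, ws = 2, 8, 8, 96, 4
x = torch.randn(B, H, W, C)

# Regular window attention uses `x` directly; the shifted variant first rolls the
# map by half a window so tokens near old window borders end up in the same window.
shifted = torch.roll(x, shifts=(-ws // 2, -ws // 2), dims=(1, 2))
windows = window_partition(shifted, ws)   # attention would run inside each window
print(windows.shape)                      # torch.Size([8, 16, 96])
```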

Pyramid Scene Parsing Network(2017)


Searching for MobileNetV3(2019)

Method

This is a paper about neural architecture search.

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size(2016)

Method

Identity Mappings in Deep Residual Networks(2016)

Method

It analyzes a variety of skip-connection designs.
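
The ordering the paper ends up favoring is "full pre-activation", with BN and ReLU before each convolution and the shortcut kept as a pure identity; a minimal sketch (no downsampling, channel count is illustrative):

```python
import torch
import torch.nn as nn

class PreActBlock(nn.Module):
    """Full pre-activation residual block: BN -> ReLU -> Conv, identity shortcut."""
    def __init__(self, ch):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return x + out  # nothing is applied on the shortcut path

x = torch.randn(1, 64, 32, 32)
print(PreActBlock(64)(x).shape)
```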




Aggregated Residual Transformations for Deep Neural Networks

Method

In effect it boils down to reducing parameters: the bottleneck transform is split into many parallel grouped branches (higher cardinality).
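
In practice the aggregated transformations are implemented as a grouped 3x3 convolution inside the bottleneck, which is where the parameter saving comes from; a small comparison, with channel counts chosen only for illustration:

```python
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

# Ordinary 3x3 conv vs. the same conv split into 32 groups (cardinality 32),
# which is how ResNeXt aggregates many parallel transformations.
dense   = nn.Conv2d(128, 128, 3, padding=1, bias=False)
grouped = nn.Conv2d(128, 128, 3, padding=1, groups=32, bias=False)

print(n_params(dense), n_params(grouped))  # 147456 vs 4608, i.e. 32x fewer weights
```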

MLP-Mixer: An all-MLP Architecture for Vision(2021)

Token mixing and channel mixing.
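
Each Mixer block applies one MLP across the token (patch) dimension and another across the channel dimension; a minimal sketch of a single block, with hidden sizes picked arbitrarily:

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """Token-mixing MLP over the patch dimension, then channel-mixing MLP."""
    def __init__(self, num_tokens, dim, token_hidden=256, channel_hidden=512):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim))

    def forward(self, x):                        # x: (B, tokens, dim)
        y = self.norm1(x).transpose(1, 2)        # (B, dim, tokens): mix across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))  # mix across channels
        return x

x = torch.randn(2, 196, 512)
print(MixerBlock(num_tokens=196, dim=512)(x).shape)
```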

MOCO:Momentum Contrast for Unsupervised Visual Representation Learning

Contrastive methods differ in how they store the dictionary of negatives; MoCo uses a queue.
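
The queue is a fixed-size FIFO of past key features that acts as the negative dictionary; a rough sketch of just the queue and the InfoNCE-style logits, with sizes assumed for illustration and the momentum-updated key encoder omitted:

```python
import torch
import torch.nn.functional as F

feat_dim, queue_size, batch = 128, 4096, 32
queue = F.normalize(torch.randn(feat_dim, queue_size), dim=0)  # stored negative keys
ptr = 0

def dequeue_and_enqueue(keys):
    """Replace the oldest `batch` columns of the queue with the newest keys."""
    global ptr
    queue[:, ptr:ptr + keys.shape[0]] = keys.T
    ptr = (ptr + keys.shape[0]) % queue_size

q = F.normalize(torch.randn(batch, feat_dim), dim=1)  # query features
k = F.normalize(torch.randn(batch, feat_dim), dim=1)  # keys from the key encoder

l_pos = (q * k).sum(dim=1, keepdim=True)       # positive logits: (batch, 1)
l_neg = q @ queue                              # negative logits: (batch, queue_size)
logits = torch.cat([l_pos, l_neg], dim=1) / 0.07
labels = torch.zeros(batch, dtype=torch.long)  # the positive is always index 0
loss = F.cross_entropy(logits, labels)

dequeue_and_enqueue(k)
print(loss.item())
```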

A ConvNet for the 2020s

Convolution pushed to its limit: a ConvNet modernized step by step to match Transformer designs.
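
The resulting block is a 7x7 depthwise convolution followed by LayerNorm and an inverted-bottleneck pair of pointwise layers, i.e. a Transformer-style block built entirely from convolutions and linear layers; a minimal sketch (LayerScale and stochastic depth omitted, sizes illustrative):

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Depthwise 7x7 conv -> LayerNorm -> 1x1 expand -> GELU -> 1x1 project, residual."""
    def __init__(self, dim):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # inverted bottleneck, done as Linear
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x):                        # x: (B, C, H, W)
        skip = x
        x = self.dwconv(x).permute(0, 2, 3, 1)   # channels-last for LN / Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        return skip + x.permute(0, 3, 1, 2)

x = torch.randn(1, 96, 56, 56)
print(ConvNeXtBlock(96)(x).shape)
```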

MAE:Masked Autoencoders Are Scalable Vision Learners

Similar to BERT: self-supervised learning by predicting the masked-out parts.
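
Concretely, a large random fraction of patch tokens is dropped before the encoder and the decoder reconstructs the missing patches; a minimal sketch of the random-masking step, assuming the commonly cited 75% mask ratio:

```python
import torch

def random_masking(x, mask_ratio=0.75):
    """Keep a random subset of patch tokens; return kept tokens and a binary mask."""
    B, L, D = x.shape
    len_keep = int(L * (1 - mask_ratio))
    noise = torch.rand(B, L)                        # one random score per token
    ids_shuffle = noise.argsort(dim=1)              # random permutation of tokens
    ids_keep = ids_shuffle[:, :len_keep]
    x_kept = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, L)                         # 1 = masked (to be reconstructed)
    mask.scatter_(1, ids_keep, 0.0)
    return x_kept, mask

tokens = torch.randn(2, 196, 768)                   # 14x14 patch embeddings
visible, mask = random_masking(tokens)
print(visible.shape, int(mask[0].sum()))            # (2, 49, 768), 147 masked tokens
```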

Xception: Deep Learning with Depthwise Separable Convolutions

Method


CLIP:Learning Transferable Visual Models From Natural Language Supervision

Method

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

Method

Grouped convolutions plus channel shuffling.
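
Grouped 1x1 convolutions are cheap but keep information trapped inside each group, so the channels are shuffled across groups after each grouped convolution; a minimal sketch of the shuffle operation itself:

```python
import torch

def channel_shuffle(x, groups):
    """Reorder channels so each new group contains one channel from every old group."""
    B, C, H, W = x.shape
    x = x.view(B, groups, C // groups, H, W)
    x = x.transpose(1, 2).contiguous()
    return x.view(B, C, H, W)

x = torch.arange(8).float().view(1, 8, 1, 1)   # channels 0..7, i.e. 2 groups of 4
print(channel_shuffle(x, groups=2).flatten().tolist())
# [0.0, 4.0, 1.0, 5.0, 2.0, 6.0, 3.0, 7.0] -> channels from both groups interleaved
```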

ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design

Method

ResNeSt: Split-Attention Networks

Method


