8.16 Model Notes

Table of Contents

  • [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation(ECCV2018)](#Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation(ECCV2018))
  • [Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning(2016)](#Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning(2016))
  • [Wide Residual Networks(2017)](#Wide Residual Networks(2017))
  • [mixup: Beyond Empirical Risk Minimization(ICLR2018)](#mixup: Beyond Empirical Risk Minimization(ICLR2018))
  • [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](#Swin Transformer: Hierarchical Vision Transformer using Shifted Windows)
  • [Pyramid Scene Parsing Network(2017)](#Pyramid Scene Parsing Network(2017))
  • [Searching for MobileNetV3(2019)](#Searching for MobileNetV3(2019))
  • [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size(2016)](#SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size(2016))
  • [Identity Mappings in Deep Residual Networks(2016)](#Identity Mappings in Deep Residual Networks(2016))
  • [Aggregated Residual Transformations for Deep Neural Networks](#Aggregated Residual Transformations for Deep Neural Networks)
  • [MLP-Mixer: An all-MLP Architecture for Vision(2021)](#MLP-Mixer: An all-MLP Architecture for Vision(2021))
  • [MOCO:Momentum Contrast for Unsupervised Visual Representation Learning](#MOCO:Momentum Contrast for Unsupervised Visual Representation Learning)
  • [A ConvNet for the 2020s](#A ConvNet for the 2020s)
  • [MAE:Masked Autoencoders Are Scalable Vision Learners](#MAE:Masked Autoencoders Are Scalable Vision Learners)
  • [Xception: Deep Learning with Depthwise Separable Convolutions](#Xception: Deep Learning with Depthwise Separable Convolutions)
  • [CLIP:Learning Transferable Visual Models From Natural Language Supervision](#CLIP:Learning Transferable Visual Models From Natural Language Supervision)
  • [ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](#ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices)
  • [ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design](#ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design)
  • [ResNeSt: Split-Attention Networks](#ResNeSt: Split-Attention Networks)

Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation(ECCV2018)

Method

Code

DeepLabV3+ architecture

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning(2016)

Method

Wide Residual Networks(2017)

Method

Code

My impression: there is no real structural change from ResNet; the residual blocks are simply made wider (a width multiplier k on the channels) while the network is made shallower.

mixup: Beyond Empirical Risk Minimization(ICLR2018)

Method

The key things to look at in the code are lam and alpha: lam is drawn from Beta(alpha, alpha) and linearly mixes both pairs of inputs and their labels.
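The mixing step can be sketched in a few lines of numpy (the function name and one-hot label convention here are illustrative, not taken from the paper's reference code):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mix a batch with a shuffled copy of itself.
    lam ~ Beta(alpha, alpha); y is assumed one-hot."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # the 'lam' the note refers to
    idx = rng.permutation(len(x))         # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[idx]  # convex combination of inputs
    y_mix = lam * y + (1 - lam) * y[idx]  # same combination of labels
    return x_mix, y_mix, lam
```

Small alpha (e.g. 0.2) makes lam concentrate near 0 or 1, so most mixed samples stay close to one of the originals.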

Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

Method



A shifted-window variant of ViT: self-attention is computed inside local windows, and the windows are shifted between consecutive layers so information can flow across window boundaries.
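The window partition and cyclic shift can be sketched with plain numpy (attention itself is omitted; shapes and function names are my own):

```python
import numpy as np

def window_partition(x, ws):
    # x: (H, W, C) feature map -> (num_windows, ws*ws, C) for per-window attention
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def shifted_windows(x, ws):
    # Swin's shifted-window step: cyclically shift by ws//2 before partitioning,
    # so the new windows straddle the old window boundaries
    return window_partition(np.roll(x, shift=(-ws // 2, -ws // 2), axis=(0, 1)), ws)
```

In the real model the shift is undone after attention, and masking prevents attention between patches that were only joined by the cyclic wrap-around.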

Pyramid Scene Parsing Network(2017)


Searching for MobileNetV3(2019)

Method

This is a network-architecture-search (NAS) paper: the MobileNetV3 backbone is found with hardware-aware NAS and then refined by hand (h-swish activation, squeeze-and-excitation blocks).

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size(2016)

Method

Identity Mappings in Deep Residual Networks(2016)

Method

Analyzes many variants of the skip connection; the conclusion is that a clean identity shortcut with pre-activation (BN and ReLU before the convolution) propagates signal best.




Aggregated Residual Transformations for Deep Neural Networks

Method



In effect the aggregated (grouped) transformations cut the parameter count of the 3×3 convolutions at a given width.
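A back-of-envelope parameter count makes the reduction concrete (numbers follow the ResNeXt-50 32×4d bottleneck, where the 3×3 stage has 128 channels split into 32 groups):

```python
def conv_params(c_in, c_out, k, groups=1):
    # parameters of a k x k convolution with grouped channels (no bias):
    # each group connects c_in/groups inputs to c_out/groups outputs
    return (c_in // groups) * k * k * (c_out // groups) * groups

# ResNet bottleneck 3x3: 64 -> 64, dense
dense = conv_params(64, 64, 3)          # 36864
# ResNeXt bottleneck 3x3: 128 -> 128 in 32 groups of width 4
grouped = conv_params(128, 128, 3, 32)  # 4608
```

So even at twice the width, the grouped 3×3 uses an eighth of the parameters of the dense one, which is why ResNeXt can raise cardinality for free.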

MLP-Mixer: An all-MLP Architecture for Vision(2021)

Alternates token-mixing MLPs (across patches) with channel-mixing MLPs (across features).
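The two mixing directions can be sketched with matrix shapes alone (a simplification: the paper's two-layer MLPs with GELU are collapsed to single linear maps here, and LayerNorm is omitted):

```python
import numpy as np

def mixer_layer(x, w_tok, w_ch):
    # x: (tokens, channels)
    x = x + (w_tok @ x)  # token mixing: (T,T) @ (T,C) acts across patches
    x = x + (x @ w_ch)   # channel mixing: (T,C) @ (C,C) acts across features
    return x
```

The point is that the same (tokens, channels) matrix is transformed along each axis in turn, with residual connections around both steps.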

MOCO:Momentum Contrast for Unsupervised Visual Representation Learning

Contrastive methods differ in how they store negative samples; MoCo keeps them in a queue of encoded keys, produced by a momentum-updated key encoder.
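The queue is just a fixed-size FIFO over key vectors; a minimal numpy sketch (class and method names are my own, and the momentum encoder that fills it is not shown):

```python
import numpy as np

class KeyQueue:
    """Fixed-size FIFO of encoded keys, as in MoCo's negative dictionary."""
    def __init__(self, size, dim):
        self.keys = np.zeros((size, dim))
        self.ptr = 0

    def enqueue(self, batch):
        # overwrite the oldest entries, wrapping around the buffer
        n = len(batch)
        idx = (self.ptr + np.arange(n)) % len(self.keys)
        self.keys[idx] = batch
        self.ptr = (self.ptr + n) % len(self.keys)
```

Because the queue is decoupled from the batch size, it can hold far more negatives than a batch, while old keys (from a stale encoder) naturally age out.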

A ConvNet for the 2020s

ConvNeXt: a pure ConvNet pushed to the limit, modernizing ResNet with design choices borrowed from Transformers.

MAE:Masked Autoencoders Are Scalable Vision Learners

Similar to BERT: mask a large fraction of image patches and reconstruct the missing ones; self-supervised pre-training.
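The masking step is simple to sketch in numpy (function name and shapes are illustrative; MAE's default mask ratio is 0.75):

```python
import numpy as np

def random_mask(patches, mask_ratio=0.75, rng=None):
    # keep a random subset of patches for the encoder;
    # the decoder must reconstruct the masked remainder
    rng = rng or np.random.default_rng()
    n = len(patches)
    keep = rng.permutation(n)[: int(n * (1 - mask_ratio))]
    mask = np.ones(n, dtype=bool)
    mask[keep] = False            # False = visible, True = masked
    return patches[keep], mask
```

Feeding only the visible 25% of patches to the encoder is what makes MAE pre-training cheap despite the heavy ViT backbone.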

Xception: Deep Learning with Depthwise Separable Convolutions

Method
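The depthwise separable convolution in the title factorizes a standard convolution into a per-channel spatial filter plus a 1×1 pointwise mix; a parameter count shows the savings (helper name is my own):

```python
def depthwise_separable_params(c_in, c_out, k):
    # depthwise k x k (one filter per input channel) + pointwise 1 x 1
    return c_in * k * k + c_in * c_out

# a standard convolution would cost c_in * c_out * k * k parameters
```

For example, a 3×3 layer from 64 to 128 channels drops from 73,728 parameters (standard) to 8,768 (separable), which is the efficiency Xception builds its whole architecture on.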


CLIP:Learning Transferable Visual Models From Natural Language Supervision

Method

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

Method

Grouped pointwise convolutions plus a channel-shuffle step that mixes information across the groups.
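Channel shuffle is just a reshape-transpose-reshape over the channel axis; a numpy sketch (function name and (C, H, W) layout are my own):

```python
import numpy as np

def channel_shuffle(x, groups):
    # x: (C, H, W). Split channels into groups, transpose the group axis,
    # and flatten back, so each output group draws from every input group.
    C, H, W = x.shape
    return (x.reshape(groups, C // groups, H, W)
             .transpose(1, 0, 2, 3)
             .reshape(C, H, W))
```

With 4 channels and 2 groups the channel order [0, 1, 2, 3] becomes [0, 2, 1, 3], which is exactly the interleaving that lets the next grouped convolution see all groups.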

ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design

Method

ResNeSt: Split-Attention Networks

Method


