A roundup of 197 classic SOTA models across 13 areas, including image classification, object detection, and recommender systems

Today we review 197 classic SOTA models from hot research areas such as computer vision and natural language processing, covering 13 sub-fields including image classification, image generation, text classification, reinforcement learning, object detection, recommender systems, and speech recognition. Bookmark this post and work through it at your own pace; your next top-conference idea might be right here~

Since there are quite a few models, this post only gives a brief overview; see the end of the article for the full collection of papers and project source code.

I. Image Classification SOTA Models (15)

1. Model: AlexNet

Paper: ImageNet Classification with Deep Convolutional Neural Networks

2. Model: VGG

Paper: Very Deep Convolutional Networks for Large-Scale Image Recognition

3. Model: GoogLeNet

Paper: Going Deeper with Convolutions

4. Model: ResNet

Paper: Deep Residual Learning for Image Recognition

5. Model: ResNeXt

Paper: Aggregated Residual Transformations for Deep Neural Networks

6. Model: DenseNet

Paper: Densely Connected Convolutional Networks

7. Model: MobileNet

Paper: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

8. Model: SENet

Paper: Squeeze-and-Excitation Networks

9. Model: DPN

Paper: Dual Path Networks

10. Model: IGC V1

Paper: Interleaved Group Convolutions for Deep Neural Networks

11. Model: Residual Attention Network

Paper: Residual Attention Network for Image Classification

12. Model: ShuffleNet

Paper: ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

13. Model: MnasNet

Paper: MnasNet: Platform-Aware Neural Architecture Search for Mobile

14. Model: EfficientNet

Paper: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

15. Model: NFNet

Paper: High-Performance Large-Scale Image Recognition Without Normalization
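To make the list concrete: almost everything from ResNet onward builds on the residual connection. Here is a minimal PyTorch sketch (the framework choice and toy shapes are ours, not the papers') of ResNet's basic residual block: two 3x3 convolutions whose output is added back to the input through an identity shortcut.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """ResNet-style basic block: out = ReLU(F(x) + x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut makes the block residual

block = BasicBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```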

II. Text Classification SOTA Models (12)

1. Model: RAE

Paper: Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions

2. Model: DAN

Paper: Deep Unordered Composition Rivals Syntactic Methods for Text Classification

3. Model: TextRCNN

Paper: Recurrent Convolutional Neural Networks for Text Classification

4. Model: Multi-task

Paper: Recurrent Neural Network for Text Classification with Multi-Task Learning

5. Model: DeepMoji

Paper: Using Millions of Emoji Occurrences to Learn Any-Domain Representations for Detecting Sentiment, Emotion and Sarcasm

6. Model: RNN-Capsule

Paper: Investigating Capsule Networks with Dynamic Routing for Text Classification

7. Model: TextCNN

Paper: Convolutional Neural Networks for Sentence Classification

8. Model: DCNN

Paper: A Convolutional Neural Network for Modelling Sentences

9. Model: XML-CNN

Paper: Deep Learning for Extreme Multi-Label Text Classification

10. Model: TextCapsule

Paper: Investigating Capsule Networks with Dynamic Routing for Text Classification

11. Model: Bao et al.

Paper: Few-shot Text Classification with Distributional Signatures

12. Model: AttentionXML

Paper: AttentionXML: Label Tree-Based Attention-Aware Deep Model for High-Performance Extreme Multi-Label Text Classification
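As a concrete reference point, TextCNN (#7 above) is small enough to sketch in full: embed tokens, run parallel 1-D convolutions of several widths, max-pool over time, and classify the concatenated features. A minimal PyTorch version; vocabulary size, filter widths, and class count below are arbitrary illustration values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_classes=2,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, tokens):                    # (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)    # (batch, embed_dim, seq_len)
        # one max-pooled feature vector per filter width
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # (batch, num_classes)

model = TextCNN(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (4, 50)))
print(logits.shape)  # torch.Size([4, 2])
```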

III. Text Summarization SOTA Models (17)

1. Model: CopyNet

Paper: Incorporating Copying Mechanism in Sequence-to-Sequence Learning

2. Model: SummaRuNNer

Paper: SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents

3. Model: SeqGAN

Paper: SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

4. Model: Latent Extractive

Paper: Neural Latent Extractive Document Summarization

5. Model: NEUSUM

Paper: Neural Document Summarization by Jointly Learning to Score and Select Sentences

6. Model: BERTSUM

Paper: Text Summarization with Pretrained Encoders

7. Model: BRIO

Paper: BRIO: Bringing Order to Abstractive Summarization

8. Model: NAM

Paper: A Neural Attention Model for Abstractive Sentence Summarization

9. Model: RAS

Paper: Abstractive Sentence Summarization with Attentive Recurrent Neural Networks

10. Model: PGN

Paper: Get To The Point: Summarization with Pointer-Generator Networks

11. Model: Re3Sum

Paper: Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization

12. Model: MTLSum

Paper: Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation

13. Model: KGSum

Paper: Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization

14. Model: PEGASUS

Paper: PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

15. Model: FASum

Paper: Enhancing Factual Consistency of Abstractive Summarization

16. Model: RNN(ext) + ABS + RL + Rerank

Paper: Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting

17. Model: BottleSUM

Paper: BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle
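For abstractive models like PEGASUS (#14), a quick way to try a pretrained checkpoint is the Hugging Face pipeline API. A minimal sketch, assuming the `transformers` library is installed and the public `google/pegasus-xsum` checkpoint can be downloaded:

```python
from transformers import pipeline

# summarization pipeline around a pretrained PEGASUS checkpoint
summarizer = pipeline("summarization", model="google/pegasus-xsum")

article = ("The tower is 324 metres tall, about the same height as an "
           "81-storey building, and was the tallest man-made structure "
           "in the world for 41 years after its completion in 1889.")
print(summarizer(article, max_length=32)[0]["summary_text"])
```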

IV. Image Generation SOTA Models (16)

  1. Progressive Growing of GANs for Improved Quality, Stability, and Variation

  2. A Style-Based Generator Architecture for Generative Adversarial Networks

  3. Analyzing and Improving the Image Quality of StyleGAN

  4. Alias-Free Generative Adversarial Networks

  5. Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images

  6. A Contrastive Learning Approach for Training Variational Autoencoder Priors

  7. StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets

  8. Diffusion-GAN: Training GANs with Diffusion

  9. Improved Training of Wasserstein GANs

  10. Self-Attention Generative Adversarial Networks

  11. Large Scale GAN Training for High Fidelity Natural Image Synthesis

  12. CSGAN: Cyclic-Synthesized Generative Adversarial Networks for Image-to-Image Transformation

  13. LOGAN: Latent Optimisation for Generative Adversarial Networks

  14. A U-Net Based Discriminator for Generative Adversarial Networks

  15. Instance-Conditioned GAN

  16. Conditional GANs with Auxiliary Discriminative Classifier
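All sixteen papers above refine the same adversarial training loop. Below is a minimal sketch of one generator/discriminator update with the classic non-saturating GAN loss, on toy 2-D data; the tiny MLPs and fake "dataset" are stand-ins, and the listed papers differ precisely in what they build around this loop (progressive growing, style modulation, gradient penalties, and so on).

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))       # z -> x
D = nn.Sequential(nn.Linear(2, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 2) + 3.0   # stand-in "real" samples
z = torch.randn(32, 16)           # latent noise

# discriminator step: push D(real) -> 1 and D(fake) -> 0
fake = G(z).detach()              # detach so this step only updates D
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# generator step: non-saturating loss pushes D(G(z)) -> 1
loss_g = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```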

V. Video Generation SOTA Models (15)

  1. Temporal Generative Adversarial Nets with Singular Value Clipping

  2. Generating Videos with Scene Dynamics

  3. MoCoGAN: Decomposing Motion and Content for Video Generation

  4. Stochastic Video Generation with a Learned Prior

  5. Video-to-Video Synthesis

  6. Probabilistic Video Generation using Holistic Attribute Control

  7. Adversarial Video Generation on Complex Datasets

  8. Sliced Wasserstein Generative Models

  9. Train Sparsely, Generate Densely: Memory-efficient Unsupervised Training of High-resolution Temporal GAN

  10. Latent Neural Differential Equations for Video Generation

  11. VideoGPT: Video Generation using VQ-VAE and Transformers

  12. Diverse Video Generation using a Gaussian Process Trigger

  13. NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion

  14. StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2

  15. Video Diffusion Models
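As one example of the ideas above, MoCoGAN (#3) factorizes the latent space into a per-clip content code and per-frame motion codes driven by an RNN. A toy sketch of that decomposition; the dimensions and the flat frame decoder are illustrative only, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyVideoGenerator(nn.Module):
    """MoCoGAN-style factorization: one content code per clip plus
    per-frame motion codes produced by a GRU, decoded into frames."""
    def __init__(self, content_dim=32, motion_dim=16, frame_pixels=64):
        super().__init__()
        self.content_dim, self.motion_dim = content_dim, motion_dim
        self.rnn = nn.GRU(motion_dim, motion_dim, batch_first=True)
        self.decode = nn.Linear(content_dim + motion_dim, frame_pixels)

    def forward(self, batch, frames):
        # content code is sampled once and repeated across all frames
        z_c = torch.randn(batch, 1, self.content_dim).expand(-1, frames, -1)
        # motion codes evolve over time through the recurrent network
        z_m, _ = self.rnn(torch.randn(batch, frames, self.motion_dim))
        return self.decode(torch.cat([z_c, z_m], dim=-1))

video = TinyVideoGenerator()(batch=2, frames=8)
print(video.shape)  # torch.Size([2, 8, 64]): (batch, time, flattened frame)
```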

VI. Reinforcement Learning SOTA Models (13)

  1. Playing Atari with Deep Reinforcement Learning

  2. Deep Reinforcement Learning with Double Q-learning

  3. Continuous Control with Deep Reinforcement Learning

  4. Asynchronous Methods for Deep Reinforcement Learning

  5. Proximal Policy Optimization Algorithms

  6. Hindsight Experience Replay

  7. Emergence of Locomotion Behaviours in Rich Environments

  8. Implicit Quantile Networks for Distributional Reinforcement Learning

  9. Imagination-Augmented Agents for Deep Reinforcement Learning

  10. Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning

  11. Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning

  12. Model-Ensemble Trust-Region Policy Optimization

  13. Dynamic Horizon Value Estimation for Model-based Reinforcement Learning
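To ground the value-based entries, here is the Double Q-learning target from paper #2: the online network selects the next action while the target network evaluates it, which reduces the overestimation of vanilla Q-learning. A minimal sketch with toy linear Q-networks (shapes and data are placeholders):

```python
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 2, 0.99
online = nn.Linear(obs_dim, n_actions)   # Q-network being trained
target = nn.Linear(obs_dim, n_actions)   # slowly-updated copy
target.load_state_dict(online.state_dict())

s2 = torch.randn(8, obs_dim)   # batch of next states
r = torch.randn(8)             # rewards
done = torch.zeros(8)          # 1.0 where the episode terminated

with torch.no_grad():
    best_a = online(s2).argmax(dim=1, keepdim=True)    # action chosen by online net
    q_next = target(s2).gather(1, best_a).squeeze(1)   # value from target net
    y = r + gamma * (1 - done) * q_next                # Double DQN TD target
print(y.shape)  # torch.Size([8])
```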

VII. Speech Synthesis SOTA Models (19)

  1. TTS Synthesis with Bidirectional LSTM based Recurrent Neural Networks

  2. WaveNet: A Generative Model for Raw Audio

  3. SampleRNN: An Unconditional End-to-End Neural Audio Generation Model

  4. Char2Wav: End-to-End Speech Synthesis

  5. Deep Voice: Real-time Neural Text-to-Speech

  6. Parallel WaveNet: Fast High-Fidelity Speech Synthesis

  7. Statistical Parametric Speech Synthesis Using Generative Adversarial Networks Under A Multi-task Learning Framework

  8. Tacotron: Towards End-to-End Speech Synthesis

  9. VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop

  10. Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions

  11. Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis

  12. Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning

  13. ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech

  14. LPCNet: Improving Neural Speech Synthesis Through Linear Prediction

  15. Neural Speech Synthesis with Transformer Network

  16. Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search

  17. Flow-TTS: A Non-Autoregressive Network for Text to Speech Based on Flow

  18. Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech

  19. PnG BERT: Augmented BERT on Phonemes and Graphemes for Neural TTS
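Many of the vocoders above (WaveNet, Parallel WaveNet, ClariNet) rest on causal dilated convolutions with gated activations, so each output sample depends only on past samples. A minimal PyTorch sketch of that building block; channel count and dilation are arbitrary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalGatedConv(nn.Module):
    """WaveNet-style layer: causal dilated conv + tanh/sigmoid gating."""
    def __init__(self, channels=32, dilation=2):
        super().__init__()
        self.pad = dilation  # left padding = (kernel_size - 1) * dilation
        self.filter = nn.Conv1d(channels, channels, 2, dilation=dilation)
        self.gate = nn.Conv1d(channels, channels, 2, dilation=dilation)

    def forward(self, x):                # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))      # pad only on the left: causality
        return torch.tanh(self.filter(x)) * torch.sigmoid(self.gate(x))

y = CausalGatedConv()(torch.randn(1, 32, 100))
print(y.shape)  # torch.Size([1, 32, 100]): time length is preserved
```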

VIII. Machine Translation SOTA Models (18)

  1. Neural Machine Translation by Jointly Learning to Align and Translate

  2. Multi-task Learning for Multiple Language Translation

  3. Effective Approaches to Attention-based Neural Machine Translation

  4. A Convolutional Encoder Model for Neural Machine Translation

  5. Attention Is All You Need

  6. Decoding with Value Networks for Neural Machine Translation

  7. Unsupervised Neural Machine Translation

  8. Phrase-based & Neural Unsupervised Machine Translation

  9. Addressing the Under-translation Problem from the Entropy Perspective

  10. Modeling Coherence for Discourse Neural Machine Translation

  11. Cross-lingual Language Model Pretraining

  12. MASS: Masked Sequence to Sequence Pre-training for Language Generation

  13. FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow

  14. Multilingual Denoising Pre-training for Neural Machine Translation

  15. Incorporating BERT into Neural Machine Translation

  16. Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information

  17. Contrastive Learning for Many-to-many Multilingual Neural Machine Translation

  18. Universal Conditional Masked Language Pre-training for Neural Machine Translation
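Since "Attention Is All You Need" (#5) underpins most entries from #11 onward, here is scaled dot-product attention itself, softmax(QK^T / sqrt(d_k))V, as a minimal sketch with toy tensor shapes:

```python
import math
import torch

def attention(q, k, v, mask=None):
    """Scaled dot-product attention; q, k, v: (batch, heads, seq_len, d_k)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        # masked positions (mask == 0) get -inf, i.e. zero attention weight
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 8, 10, 64)
print(attention(q, k, v).shape)  # torch.Size([2, 8, 10, 64])
```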

IX. Text Generation SOTA Models (10)

  1. Sequence to Sequence Learning with Neural Networks

  2. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

  3. Neural Machine Translation by Jointly Learning to Align and Translate

  4. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

  5. Attention Is All You Need

  6. Improving Language Understanding by Generative Pre-Training

  7. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

  8. Cross-lingual Language Model Pretraining

  9. Language Models are Unsupervised Multitask Learners

  10. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
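For a quick feel of decoder-only generation (papers #6 and #9), the Hugging Face pipeline will sample continuations from GPT-2. A minimal sketch, assuming `transformers` is installed and the public `gpt2` checkpoint can be downloaded:

```python
from transformers import pipeline

# text-generation pipeline around the pretrained GPT-2 checkpoint
generator = pipeline("text-generation", model="gpt2")
out = generator("Deep learning has", max_new_tokens=20, num_return_sequences=1)
print(out[0]["generated_text"])
```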

X. Speech Recognition SOTA Models (12)

  1. A Neural Probabilistic Language Model

  2. Recurrent Neural Network Based Language Model

  3. LSTM Neural Networks for Language Modeling

  4. Hybrid Speech Recognition with Deep Bidirectional LSTM

  5. Attention Is All You Need

  6. Improving Language Understanding by Generative Pre-Training

  7. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

  8. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

  9. LSTM Neural Networks for Language Modeling

  10. Feedforward Sequential Memory Networks: A New Structure to Learn Long-Term Dependency

  11. Convolutional, Long Short-Term Memory, Fully Connected Deep Neural Networks

  12. Highway Long Short-Term Memory RNNs for Distant Speech Recognition
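Several entries here are language models rather than acoustic models; the LSTM LM (#3/#9) is the simplest to sketch: embed tokens, run an LSTM, and score the next token at each position. All sizes below are toy values:

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):             # (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))
        return self.proj(h)                # (batch, seq_len, vocab_size)

model = LSTMLanguageModel()
tokens = torch.randint(0, 1000, (4, 20))
logits = model(tokens)
# next-token loss: position t predicts token t+1
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 1000), tokens[:, 1:].reshape(-1))
print(float(loss))
```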

XI. Object Detection SOTA Models (16)

  1. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

  2. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition

  3. Fast R-CNN

  4. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

  5. Training Region-based Object Detectors with Online Hard Example Mining

  6. R-FCN: Object Detection via Region-based Fully Convolutional Networks

  7. Mask R-CNN

  8. You Only Look Once: Unified, Real-Time Object Detection

  9. SSD: Single Shot MultiBox Detector

  10. Feature Pyramid Networks for Object Detection

  11. Focal Loss for Dense Object Detection

  12. Accurate Single Stage Detector Using Recurrent Rolling Convolution

  13. CornerNet: Detecting Objects as Paired Keypoints

  14. M2Det: A Single-Shot Object Detector based on Multi-Level Feature Pyramid Network

  15. FCOS: Fully Convolutional One-Stage Object Detection

  16. ObjectBox: From Centers to Boxes for Anchor-Free Object Detection
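One computation shared by every detector above, from R-CNN to the anchor-free methods, is intersection-over-union (IoU) between boxes, used for ground-truth matching, non-maximum suppression, and evaluation. A minimal sketch for (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # coordinates of the intersection rectangle (may be empty)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```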

XII. Recommender System SOTA Models (18)

  1. Learning Deep Structured Semantic Models for Web Search using Clickthrough Data

  2. Deep Neural Networks for YouTube Recommendations

  3. Self-Attentive Sequential Recommendation

  4. Graph Convolutional Neural Networks for Web-Scale Recommender Systems

  5. Learning Tree-based Deep Model for Recommender Systems

  6. Multi-Interest Network with Dynamic Routing for Recommendation at Tmall

  7. PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest

  8. Efficient Non-Sampling Factorization Machines for Optimal Context-Aware Recommendation

  9. Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation

  10. Field-aware Factorization Machines for CTR Prediction

  11. Deep Learning over Multi-field Categorical Data -- A Case Study on User Response Prediction

  12. Product-based Neural Networks for User Response Prediction

  13. Wide & Deep Learning for Recommender Systems

  14. Deep & Cross Network for Ad Click Predictions

  15. xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems

  16. Deep Interest Network for Click-Through Rate Prediction

  17. GateNet: Gating-Enhanced Deep Network for Click-Through Rate Prediction

  18. Package Recommendation with Intra- and Inter-Package Attention Networks
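The CTR line of work above (FFM, the DeepFM-style models, xDeepFM) descends from the factorization machine, whose pairwise-interaction term can be computed in O(nk) via the identity sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [(sum_i v_{i,f} x_i)^2 - sum_i v_{i,f}^2 x_i^2]. A minimal sketch on toy dense features:

```python
import torch

n_features, k = 8, 4
x = torch.randn(32, n_features)   # toy dense feature vectors
V = torch.randn(n_features, k)    # one k-dim latent vector per feature

xv = x @ V                        # (batch, k): sum_i v_{i,f} x_i
x2v2 = (x ** 2) @ (V ** 2)        # (batch, k): sum_i v_{i,f}^2 x_i^2
second_order = 0.5 * (xv ** 2 - x2v2).sum(dim=1)   # pairwise-interaction term
print(second_order.shape)  # torch.Size([32])
```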

XIII. Super-Resolution SOTA Models (16)

  1. Image Super-Resolution Using Deep Convolutional Networks

  2. Deeply-Recursive Convolutional Network for Image Super-Resolution

  3. Accelerating the Super-Resolution Convolutional Neural Network

  4. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network

  5. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

  6. Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections

  7. Accurate Image Super-Resolution Using Very Deep Convolutional Networks

  8. Image Super-Resolution via Deep Recursive Residual Network

  9. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution

  10. Image Super-Resolution Using Very Deep Residual Channel Attention Networks

  11. Image Super-Resolution via Dual-State Recurrent Networks

  12. Recovering Realistic Texture in Image Super-resolution by Deep Spatial Feature Transform

  13. Cascade Convolutional Neural Network for Image Super-Resolution

  14. Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining

  15. Single Image Super-Resolution via a Holistic Attention Network

  16. One-to-many Approach for Improving Super-Resolution
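SRCNN (#1) started this whole line and fits in a few lines: three convolutions (patch extraction, non-linear mapping, reconstruction) applied to a bicubic-upscaled low-resolution image. A minimal sketch using the paper's 9-1-5 kernel sizes; the input below is a random stand-in for an upscaled image:

```python
import torch
import torch.nn as nn

srcnn = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),  # patch extraction
    nn.Conv2d(64, 32, kernel_size=1),           nn.ReLU(),  # non-linear mapping
    nn.Conv2d(32, 1, kernel_size=5, padding=2),             # reconstruction
)
upscaled = torch.randn(1, 1, 128, 128)  # stand-in for a bicubic-upscaled input
print(srcnn(upscaled).shape)            # torch.Size([1, 1, 128, 128])
```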

Follow 《学姐带你玩AI》 below 🚀🚀🚀

Reply "SOTA模型" to get the full collection of papers and code.

Writing all this up wasn't easy; likes, comments, and bookmarks are much appreciated!
