# CV Paper Reading Collection

| Year | Name | Area | Model | Description | Drawback |
|---|---|---|---|---|---|
| 2021 ICML | CLIP (Contrastive Language-Image Pre-training) | contrastive learning, zero-shot learning, multimodal | (figure omitted) | Uses natural-language text as the supervision signal to train transferable visual models (contrastive-loss sketch below the table). | Zero-shot CLIP, although comparable to a supervised ResNet-50, is not yet SOTA; the authors estimate that reaching SOTA would require roughly 1000x more compute, which is impractical. Zero-shot CLIP performs poorly on certain datasets, such as fine-grained classification and abstract tasks. CLIP is robust to natural distribution shift but still struggles with out-of-domain generalization: if the test distribution differs significantly from the training set, CLIP performs poorly. CLIP does not address deep learning's data inefficiency; training it requires a huge amount of data. |
| 2021 ICLR | ViT (Vision Transformer) | | (figure omitted) | Applies the Transformer directly to vision: simple, efficient, scalable (patch-embedding sketch below the table). | Needs large-scale pre-training: only with enough data does ViT overcome the Transformer's lack of inductive bias, outperform CNNs, and transfer well to downstream tasks. |
| 2022 | DALL-E | text-to-image generation | | Generates images from text. | |
| 2021 ICCV | Swin Transformer | | (figure omitted) | Uses shifted windows and a hierarchical structure to tame the Transformer's heavy computation; essentially a CNN dressed in Transformer clothing (window-partition sketch below the table). | |
| 2021 | MAE (Masked Autoencoders) | self-supervised learning | (figure omitted) | A BERT for computer vision (masking sketch below the table). | Scalable; very high-capacity models that generalize well. |
| 2021 | TransMed: Transformers Advance Multi-modal Medical Image Classification | medical image classification | (figure omitted) | | |
| 2017 CVPR | I3D (Inflated 3D ConvNet) | video action recognition | | | |
| 2021 | Pathways | | | | |
| 2021 ICML | ViLT | vision-language multimodal | | Vision-and-text multimodal Transformer. | Accuracy is not high; inference is fast, but training is extremely slow. |
| 2021 NeurIPS | ALBEF | | | Aligns image and text features before fusing them; to clean noisy web data, proposes a momentum model that generates pseudo-targets (EMA sketch below the table). | |
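
A minimal sketch of the CLIP-style symmetric contrastive objective mentioned in the CLIP row, in PyTorch. The function name, feature tensors, and temperature value are illustrative assumptions, not the released CLIP code.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched image-text pairs (CLIP-style sketch)."""
    # Normalize both modalities so the dot product becomes cosine similarity.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # logits[i][j]: similarity of image i and text j; the diagonal holds the true pairs.
    logits = image_features @ text_features.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image-to-text and text-to-image.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```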
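The "simple, scalable" recipe in the ViT row boils down to turning an image into a token sequence. Here is a sketch of the patch-embedding step, assuming the standard 224x224 input with 16x16 patches (the `PatchEmbed` class name and sizes are assumptions):

```python
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into fixed-size patches and linearly project each one (ViT-style sketch)."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2  # 14 * 14 = 196
        # A strided convolution applies one linear projection per patch.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (B, 3, 224, 224)
        x = self.proj(x)                     # (B, 768, 14, 14)
        return x.flatten(2).transpose(1, 2)  # (B, 196, 768): a Transformer-ready sequence
```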
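The Swin row's shifted windows restrict self-attention to local regions, which is what keeps the computation manageable. A sketch of window partitioning, assuming a window size of 7 (function and variable names are illustrative; the shift itself is typically a `torch.roll` before partitioning):

```python
import torch

def window_partition(x, window_size=7):
    """Cut a (B, H, W, C) feature map into non-overlapping windows (Swin-style sketch)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size,
               W // window_size, window_size, C)
    # Each window becomes one attention "batch" of window_size**2 tokens.
    windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size ** 2, C)
    return windows  # (B * num_windows, 49, C)

# Shifted variant: roll the map by half a window before partitioning,
# so tokens near window borders can attend across the old boundaries:
# x_shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))
```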
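The MAE row calls it "a BERT for computer vision": mask most patches, encode only the visible ones, and reconstruct the rest. A sketch of the random-masking step, assuming the paper's 75% mask ratio (function and variable names are illustrative):

```python
import torch

def random_masking(tokens, mask_ratio=0.75):
    """Keep a random 25% of patch tokens; the encoder never sees the rest (MAE-style sketch)."""
    B, N, D = tokens.shape
    len_keep = int(N * (1 - mask_ratio))

    noise = torch.rand(B, N, device=tokens.device)  # one random score per token
    ids_shuffle = torch.argsort(noise, dim=1)       # random permutation per sample
    ids_keep = ids_shuffle[:, :len_keep]

    visible = torch.gather(tokens, 1,
                           ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_shuffle  # the decoder later unshuffles and fills in mask tokens
```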
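The ALBEF row's momentum model is a slowly updated copy of the online encoder whose outputs serve as pseudo-targets for noisy web pairs. A sketch of the exponential-moving-average update (the `encoder` argument and the 0.995 momentum are assumptions):

```python
import copy
import torch

def make_momentum_model(encoder):
    """Frozen copy of the online encoder; it only changes via EMA (ALBEF-style sketch)."""
    momentum_encoder = copy.deepcopy(encoder)
    for p in momentum_encoder.parameters():
        p.requires_grad = False
    return momentum_encoder

@torch.no_grad()
def ema_update(encoder, momentum_encoder, m=0.995):
    # Each momentum parameter drifts slowly toward the online encoder's value.
    for p, p_m in zip(encoder.parameters(), momentum_encoder.parameters()):
        p_m.data.mul_(m).add_(p.data, alpha=1 - m)
```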