| Year / Venue | Name | Area | Model | Description | Drawback |
|---|---|---|---|---|---|
| 2021 ICML | CLIP (Contrastive Language-Image Pre-training) | contrastive learning, zero-shot learning, multimodal | ![]() | Uses natural-language text as the supervision signal to train transferable vision models | CLIP's zero-shot performance, although comparable to a supervised ResNet-50, is not yet SOTA; the authors estimate that reaching SOTA would require roughly 1000x more compute, which is impractical. Zero-shot CLIP performs poorly on certain datasets, such as fine-grained classification and abstract tasks. CLIP is robust to natural distribution shift but still struggles with out-of-domain generalization: if the test distribution differs greatly from the training set, performance drops sharply. CLIP also does not address deep learning's data inefficiency; training it requires a huge amount of data |
| 2021 ICLR | ViT (Vision Transformer) | | ![]() | Applies the Transformer to vision: simple, efficient, scalable | Given enough pre-training data, ViT outperforms CNNs, overcoming the Transformer's lack of inductive bias and transferring well to downstream tasks |
| 2022 | DALL-E | | | Generates images from text | |
| 2021 ICCV | Swin Transformer | | ![]() | Uses shifted windows and a hierarchical structure to tame the Transformer's computational cost; effectively a CNN in Transformer's clothing | |
| 2021 | MAE (Masked Autoencoders) | self-supervised | ![]() | The BERT of computer vision | scalable; very high-capacity models that generalize well |
| | TransMed: Transformers Advance Multi-modal Medical Image Classification | | ![]() | | |
| | I3D | | | | |
| 2021 | Pathway | | | | |
| 2021 ICML | ViLT | | | Multimodal vision-and-language Transformer | Modest performance; fast inference but very slow training |
| 2021 NeurIPS | ALBEF | | | Align before fuse; to clean noisy data, proposes a momentum model that generates pseudo-targets | |
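CLIP's zero-shot classification (row above) works by comparing an image embedding against text-prompt embeddings. A minimal NumPy sketch, not CLIP's actual implementation: the random vectors stand in for real image-encoder and text-encoder outputs, and the 0.07 temperature is an assumed value in the spirit of the paper's logit scaling.

```python
import numpy as np

def zero_shot_classify(image_feat, text_feats, temperature=0.07):
    """CLIP-style zero-shot classification: cosine similarity between the
    L2-normalized image embedding and each L2-normalized text (prompt)
    embedding, divided by a temperature, then softmax over classes."""
    img = image_feat / np.linalg.norm(image_feat)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Random vectors stand in for real encoder outputs: 3 classes, 4-dim embeddings.
rng = np.random.default_rng(0)
probs = zero_shot_classify(rng.normal(size=4), rng.normal(size=(3, 4)))
```

The predicted class is simply `probs.argmax()`; no fine-tuning on the target dataset is needed, which is what makes the method zero-shot.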
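ViT's input tokenization (splitting the image into non-overlapping patches that become the Transformer's tokens) can be sketched as below. `patchify` is an illustrative helper, and the learned linear projection, class token, and position embeddings that follow in ViT are omitted.

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into non-overlapping patches and flatten
    each one, as in ViT's input tokenization. Returns an array of shape
    (num_patches, patch_size * patch_size * C)."""
    H, W, C = image.shape
    p = patch_size
    assert H % p == 0 and W % p == 0, "image size must be divisible by patch size"
    patches = image.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)

# Toy 8x8 RGB image -> four 4x4 patches, each flattened to 48 values.
image = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
patches = patchify(image, 4)
```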
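Swin's cost reduction (row above) comes from computing self-attention only within local windows, with a cyclic shift between layers so information crosses window boundaries. A minimal sketch assuming an (H, W, C) feature map; `window_partition` and `cyclic_shift` are illustrative helpers, and the attention itself plus the masking of shifted windows are omitted.

```python
import numpy as np

def window_partition(x, window):
    """Partition an (H, W, C) feature map into non-overlapping windows of
    shape (window*window, C); attention within each window makes the
    cost linear in image size instead of quadratic."""
    H, W, C = x.shape
    w = window
    wins = x.reshape(H // w, w, W // w, w, C).transpose(0, 2, 1, 3, 4)
    return wins.reshape(-1, w * w, C)

def cyclic_shift(x, shift):
    """Cyclically roll the map before partitioning so the next layer's
    windows straddle the previous layer's window boundaries."""
    return np.roll(x, shift=(-shift, -shift), axis=(0, 1))

x = np.arange(8 * 8 * 2, dtype=float).reshape(8, 8, 2)
windows = window_partition(x, 4)                    # 4 windows of 16 tokens
shifted_windows = window_partition(cyclic_shift(x, 2), 4)
```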
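MAE's "BERT for CV" idea (row above) rests on randomly masking a large fraction of patch tokens and encoding only the visible ones. A minimal sketch of the masking step, using the paper's 75% mask ratio; the encoder, decoder, and pixel-reconstruction loss are omitted.

```python
import numpy as np

def random_mask(tokens, mask_ratio=0.75, rng=None):
    """MAE-style random masking over patch tokens.

    Returns the visible tokens (all the encoder actually sees), their
    indices, and a boolean mask marking the masked positions that the
    decoder must reconstruct."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = tokens.shape[0]
    n_keep = int(round(n * (1 - mask_ratio)))
    keep_idx = np.sort(rng.permutation(n)[:n_keep])
    mask = np.ones(n, dtype=bool)
    mask[keep_idx] = False
    return tokens[keep_idx], keep_idx, mask

# 16 toy patch tokens -> 4 visible, 12 masked at the default 75% ratio.
tokens = np.arange(16 * 3, dtype=float).reshape(16, 3)
visible, keep_idx, mask = random_mask(tokens)
```

Encoding only the 25% visible tokens is what makes MAE pre-training cheap and scalable.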
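ALBEF's momentum model (row above) is maintained as an exponential moving average (EMA) of the online model, and its soft outputs serve as pseudo-targets for noisy web data. A one-step sketch; the `m=0.995` coefficient is an assumed typical value, not taken from the table.

```python
import numpy as np

def ema_update(momentum_params, online_params, m=0.995):
    """One momentum-model step: each momentum parameter becomes an
    exponential moving average of the corresponding online parameter."""
    return [m * mp + (1.0 - m) * p
            for mp, p in zip(momentum_params, online_params)]

# Toy one-parameter "models": with m = 0.5 the update moves halfway.
updated = ema_update([np.array([0.0])], [np.array([2.0])], m=0.5)
```

Because the momentum model changes slowly, its pseudo-targets are more stable than the online model's own predictions, which is what lets ALBEF learn from noisy image-text pairs.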
A Collection of CV Paper Reading Notes
幸运的小菜鸟, 2023-11-02 22:12