[Survey] Diffusion Models: A Comprehensive Survey of Methods and Applications

Diffusion Models: A Comprehensive Survey of Methods and Applications

Paper: https://arxiv.org/abs/2209.00796

GitHub: https://github.com/YangLing0818/Diffusion-Models-Papers-Survey-Taxonomy

Table of Contents

[Diffusion Models: A Comprehensive Survey of Methods and Applications](#Diffusion Models: A Comprehensive Survey of Methods and Applications)

[Algorithm Taxonomy](#Algorithm Taxonomy)

[1. Efficient Sampling](#1. Efficient Sampling)

[1.1 Learning-Free Sampling](#1.1 Learning-Free Sampling)

[1.1.1 SDE Solver](#1.1.1 SDE Solver)

[1.1.2 ODE Solver](#1.1.2 ODE Solver)

[1.2 Learning-Based Sampling](#1.2 Learning-Based Sampling)

[1.2.1 Optimized Discretization](#1.2.1 Optimized Discretization)

[1.2.2 Knowledge Distillation](#1.2.2 Knowledge Distillation)

[1.2.3 Truncated Diffusion](#1.2.3 Truncated Diffusion)

[2. Improved Likelihood](#2. Improved Likelihood)

[2.1. Noise Schedule Optimization](#2.1. Noise Schedule Optimization)

[2.2. Reverse Variance Learning](#2.2. Reverse Variance Learning)

[2.3. Exact Likelihood Computation](#2.3. Exact Likelihood Computation)

[3. Data with Special Structures](#3. Data with Special Structures)

[3.1. Data with Manifold Structures](#3.1. Data with Manifold Structures)

[3.1.1 Known Manifolds](#3.1.1 Known Manifolds)

[3.1.2 Learned Manifolds](#3.1.2 Learned Manifolds)

[3.2. Data with Invariant Structures](#3.2. Data with Invariant Structures)

[3.3 Discrete Data](#3.3 Discrete Data)

[Application Taxonomy](#Application Taxonomy)

[1. Computer Vision](#1. Computer Vision)

[2. Natural Language Processing](#2. Natural Language Processing)

[3. Temporal Data Modeling](#3. Temporal Data Modeling)

[4. Multi-Modal Learning](#4. Multi-Modal Learning)

[5. Robust Learning](#5. Robust Learning)

[6. Molecular Graph Modeling](#6. Molecular Graph Modeling)

[7. Material Design](#7. Material Design)

[8. Medical Image Reconstruction](#8. Medical Image Reconstruction)

[Connections with Other Generative Models](#Connections with Other Generative Models)

[1. Variational Autoencoder](#1. Variational Autoencoder)

[2. Generative Adversarial Network](#2. Generative Adversarial Network)

[3. Normalizing Flow](#3. Normalizing Flow)

[4. Autoregressive Models](#4. Autoregressive Models)

[5. Energy-Based Models](#5. Energy-Based Models)


Algorithm Taxonomy

1. Efficient Sampling

1.1 Learning-Free Sampling
1.1.1 SDE Solver

Score-Based Generative Modeling through Stochastic Differential Equations

Adversarial score matching and improved sampling for image generation

Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction

Score-Based Generative Modeling with Critically-Damped Langevin Diffusion

Gotta Go Fast When Generating Data with Score-Based Models

Elucidating the Design Space of Diffusion-Based Generative Models

Generative modeling by estimating gradients of the data distribution
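
The papers above all sample by discretizing the reverse-time SDE of score-based generative modeling. As a rough, hedged illustration of that family (not any single paper's method), below is a minimal Euler–Maruyama sampler for a VP-type reverse SDE; `score_fn` is a placeholder for a trained score network, and the linear `beta(t)` schedule follows the usual VP convention.

```python
import numpy as np

def euler_maruyama_sampler(score_fn, shape, n_steps=1000,
                           beta_min=0.1, beta_max=20.0, seed=0):
    """Minimal Euler-Maruyama discretization of the VP reverse-time SDE
    dx = [-0.5*beta(t)*x - beta(t)*score(x, t)] dt + sqrt(beta(t)) dw,
    integrated backward from t = 1 to t = 0.
    `score_fn(x, t)` is assumed to approximate the score of the marginal at time t."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)                    # start from the Gaussian prior at t = 1
    dt = 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = i / n_steps
        beta_t = beta_min + t * (beta_max - beta_min)  # linear beta(t) schedule
        drift = -0.5 * beta_t * x - beta_t * score_fn(x, t)
        x = x - drift * dt + np.sqrt(beta_t * dt) * rng.standard_normal(shape)
    return x

# Toy check: for standard-Gaussian "data" the true score is -x at every t,
# so the sampler should return roughly standard-normal samples.
samples = euler_maruyama_sampler(lambda x, t: -x, shape=(1000, 2))
```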

1.1.2 ODE Solver

Denoising Diffusion Implicit Models

gDDIM: Generalized denoising diffusion implicit models

Elucidating the Design Space of Diffusion-Based Generative Models

DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps

Pseudo Numerical Methods for Diffusion Models on Manifolds

Fast Sampling of Diffusion Models with Exponential Integrator

Poisson flow generative models
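
The ODE-solver line starts from DDIM's observation that the reverse process can be made deterministic, after which the timestep grid can be subsampled aggressively. A minimal sketch of one deterministic DDIM update (eta = 0), assuming a noise-prediction network `eps_fn` and a precomputed array `alpha_bar` of cumulative products; both names are placeholders:

```python
import numpy as np

def ddim_step(x_t, t, t_prev, eps_fn, alpha_bar):
    """One deterministic DDIM update from timestep t to t_prev (eta = 0).
    eps_fn(x, t) is assumed to predict the noise added at step t;
    alpha_bar[t] is the cumulative product of (1 - beta) up to t."""
    eps = eps_fn(x_t, t)
    # Clean-sample estimate implied by the current noise prediction.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
    # Move to the less-noisy timestep along the same predicted noise direction.
    return np.sqrt(alpha_bar[t_prev]) * x0_pred + np.sqrt(1.0 - alpha_bar[t_prev]) * eps

def ddim_sample(x_T, eps_fn, alpha_bar, timesteps):
    """Run DDIM over a (possibly short) decreasing sequence of timesteps,
    e.g. 50 steps instead of the 1000 used in training."""
    x = x_T
    for t, t_prev in zip(timesteps[:-1], timesteps[1:]):
        x = ddim_step(x, t, t_prev, eps_fn, alpha_bar)
    return x
```

Because the update is deterministic, it can be viewed as an ODE solve, which is the starting point for the higher-order and exponential-integrator solvers listed above.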

1.2 Learning-Based Sampling
1.2.1 Optimized Discretization

Learning to Efficiently Sample from Diffusion Probabilistic Models

GENIE: Higher-Order Denoising Diffusion Solvers

Learning fast samplers for diffusion models by differentiating through sample quality

1.2.2 Knowledge Distillation

Progressive Distillation for Fast Sampling of Diffusion Models

Knowledge Distillation in Iterative Generative Models for Improved Sampling Speed
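
A highly simplified sketch of the distillation idea behind these two papers: train a fast student to reproduce in one jump what a slow teacher sampler produces in several. The sketch below reuses the `ddim_step` helper from the ODE-solver section above; `teacher_eps_fn` and `student_eps_fn` are placeholder networks, and the exact target parameterization in progressive distillation differs from this simplification.

```python
import numpy as np

def distillation_target(x_t, t, t_mid, t_next, teacher_eps_fn, alpha_bar):
    """Teacher target: the result of TWO deterministic DDIM steps (t -> t_mid -> t_next),
    using ddim_step from the sketch in section 1.1.2."""
    x_mid = ddim_step(x_t, t, t_mid, teacher_eps_fn, alpha_bar)
    return ddim_step(x_mid, t_mid, t_next, teacher_eps_fn, alpha_bar)

def distillation_loss(x_t, t, t_mid, t_next, student_eps_fn, teacher_eps_fn, alpha_bar):
    """Squared error between the student's single jump t -> t_next
    and the teacher's two-step target (a simplified distillation objective)."""
    target = distillation_target(x_t, t, t_mid, t_next, teacher_eps_fn, alpha_bar)
    student_out = ddim_step(x_t, t, t_next, student_eps_fn, alpha_bar)
    return np.mean((student_out - target) ** 2)
```

Progressive distillation repeats this halving of the step count, so after a few rounds the student samples in a handful of steps.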

1.2.3 Truncated Diffusion

Accelerating Diffusion Models via Early Stop of the Diffusion Process

Truncated Diffusion Probabilistic Models

Both papers stop the forward diffusion early and let a separately learned generator (or implicit prior) supply the moderately noisy starting point, so the reverse chain only has to cover the truncated portion.

2. Improved Likelihood

2.1. Noise Schedule Optimization

Improved denoising diffusion probabilistic models

Variational diffusion models
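
One concrete lever here is the choice of noise schedule: Improved DDPM replaces the original linear beta schedule with a cosine schedule for the cumulative signal level. A small sketch of both, assuming `T` discrete steps and the commonly cited default constants:

```python
import numpy as np

def linear_alpha_bar(T, beta_start=1e-4, beta_end=0.02):
    """Cumulative alpha_bar_t under the linear beta schedule of the original DDPM."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def cosine_alpha_bar(T, s=0.008):
    """Cosine schedule from 'Improved Denoising Diffusion Probabilistic Models':
    alpha_bar(t) = cos^2(((t/T + s) / (1 + s)) * pi/2), normalized so alpha_bar(0) = 1."""
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1.0 + s) * np.pi / 2) ** 2
    return f[1:] / f[0]
```

The cosine schedule changes the signal level more gently near both ends of the process, one of the modifications credited with the likelihood improvements in this line of work; Variational Diffusion Models go further and learn the schedule jointly with the model.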

2.2. Reverse Variance Learning

Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models

Improved denoising diffusion probabilistic models

Stable Target Field for Reduced Variance Score Estimation in Diffusion Models
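
Improved DDPM, listed again here, learns the reverse variance as a log-space interpolation between the two analytic extremes beta_t and beta_tilde_t, while Analytic-DPM derives the optimal reverse variance in closed form. A sketch of the interpolation parameterization, where `v` stands for the network output in [0, 1]:

```python
import numpy as np

def learned_reverse_variance(v, t, betas, alpha_bar):
    """Improved-DDPM style reverse variance: interpolate in log space between
    beta_t and beta_tilde_t = (1 - alpha_bar_{t-1}) / (1 - alpha_bar_t) * beta_t."""
    beta_t = betas[t]
    alpha_bar_prev = alpha_bar[t - 1] if t > 0 else 1.0
    beta_tilde_t = (1.0 - alpha_bar_prev) / (1.0 - alpha_bar[t]) * beta_t
    log_var = v * np.log(beta_t) + (1.0 - v) * np.log(beta_tilde_t)
    return np.exp(log_var)
```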

2.3. Exact Likelihood Computation

Score-Based Generative Modeling through Stochastic Differential Equations

Maximum likelihood training of score-based diffusion models

A variational perspective on diffusion-based generative models and score matching

Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching

Maximum Likelihood Training of Implicit Nonlinear Diffusion Models
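
These exact-likelihood results rest on the probability-flow ODE of Song et al. together with the instantaneous change-of-variables formula; schematically, for a forward SDE dx = f(x, t) dt + g(t) dw with learned score s_theta:

```latex
% Probability-flow ODE sharing the marginals p_t of the forward SDE:
\frac{\mathrm{d}x}{\mathrm{d}t}
  = \tilde{f}_\theta(x,t)
  := f(x,t) - \tfrac{1}{2}\, g(t)^2\, s_\theta(x,t)

% Instantaneous change of variables then gives an exact log-likelihood:
\log p_\theta\bigl(x(0)\bigr)
  = \log p_T\bigl(x(T)\bigr)
  + \int_0^T \nabla \cdot \tilde{f}_\theta\bigl(x(t),t\bigr)\, \mathrm{d}t
```

In practice the divergence term is estimated with the Skilling–Hutchinson trace trick, and the maximum-likelihood papers above design training objectives (e.g. likelihood weighting, high-order score matching) so that score matching tightens this quantity.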

3. Data with Special Structures

3.1. Data with Manifold Structures
3.1.1 Known Manifolds

Riemannian Score-Based Generative Modeling

Riemannian Diffusion Models

3.1.2 Learned Manifolds

Score-based generative modeling in latent space

Diffusion priors in variational autoencoders

Hierarchical text-conditional image generation with clip latents

High-resolution image synthesis with latent diffusion models
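
LSGM, DALL·E 2, and latent diffusion all move the diffusion process into a learned latent space: the autoencoder captures the data manifold, and the diffusion model only has to fit the latent distribution. A minimal, hedged sketch of that pipeline, where `encoder`, `decoder`, `eps_fn`, and `latent_sampler` are placeholders for a pretrained autoencoder and a diffusion model trained on its latents:

```python
import numpy as np

def latent_diffusion_train_step(x, encoder, eps_fn, alpha_bar, rng):
    """One simplified denoising training step carried out in latent space:
    encode the data, add noise at a random timestep, and regress the noise."""
    z0 = encoder(x)                                    # data -> latent
    t = rng.integers(len(alpha_bar))
    eps = rng.standard_normal(z0.shape)
    z_t = np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return np.mean((eps_fn(z_t, t) - eps) ** 2)        # epsilon-prediction loss

def latent_diffusion_sample(latent_sampler, decoder, latent_shape):
    """Sampling: run any diffusion sampler in latent space, then decode once."""
    z0 = latent_sampler(latent_shape)                  # e.g. the DDIM sampler sketched above
    return decoder(z0)
```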

3.2. Data with Invariant Structures

GeoDiff: A Geometric Diffusion Model for Molecular Conformation Generation

Permutation invariant graph generation via score-based generative modeling

Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations

DiGress: Discrete Denoising diffusion for graph generation

Learning gradient fields for molecular conformation generation

GraphGDP: Generative Diffusion Processes for Permutation Invariant Graph Generation

SwinGNN: Rethinking Permutation Invariance in Diffusion Models for Graph Generation
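
Most of these papers share one design principle: pair an invariant prior with an equivariant score or denoiser so that the learned distribution inherits the data's symmetry (SwinGNN is the one that questions whether this is necessary). Schematically, for permutations of graph nodes acting on an adjacency-style representation X:

```latex
% Permutation equivariance of the score network
s_\theta\!\left(P X P^{\top},\, t\right) = P\, s_\theta(X, t)\, P^{\top}
\quad \text{for every permutation matrix } P,

% combined with a permutation-invariant prior, gives a permutation-invariant model:
p_\theta\!\left(P X P^{\top}\right) = p_\theta(X).
```

GeoDiff applies the same recipe to roto-translational symmetry of molecular conformations rather than node permutations.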

3.3 Discrete Data

Vector quantized diffusion model for text-to-image synthesis

Structured Denoising Diffusion Models in Discrete State-Spaces

Vector Quantized Diffusion Model with CodeUnet for Text-to-Sign Pose Sequences Generation

Deep Unsupervised Learning using Nonequilibrium Thermodynamics

A Continuous Time Framework for Discrete Denoising Models
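
D3PM-style discrete diffusion replaces Gaussian noise with a Markov transition matrix over K categories; the simplest choice is the uniform kernel. A sketch of the forward corruption step under that kernel, on a hypothetical K-category toy (not any paper's exact implementation):

```python
import numpy as np

def uniform_transition_matrix(beta_t, K):
    """D3PM-style uniform kernel: with probability (1 - beta_t) keep the token,
    with probability beta_t resample it uniformly over the K categories."""
    return (1.0 - beta_t) * np.eye(K) + beta_t * np.ones((K, K)) / K

def forward_step(x_onehot, beta_t, rng):
    """Corrupt a batch of one-hot tokens by one step: sample from Cat(x_onehot @ Q_t)."""
    K = x_onehot.shape[-1]
    probs = x_onehot @ uniform_transition_matrix(beta_t, K)   # rows of q(x_t | x_{t-1})
    samples = np.array([rng.choice(K, p=p) for p in probs])
    return np.eye(K)[samples]                                  # back to one-hot
```

Absorbing-state kernels (mask tokens) and the continuous-time formulation in the last paper above follow the same pattern with different transition matrices.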

Application Taxonomy

1. Computer Vision

2. Natural Language Processing

3. Temporal Data Modeling

4. Multi-Modal Learning

5. Robust Learning

6. Molecular Graph Modeling

7. Material Design

8. Medical Image Reconstruction

Connections with Other Generative Models

1. Variational Autoencoder

2. Generative Adversarial Network

3. Normalizing Flow

4. Autoregressive Models

5. Energy-Based Models
