Video Generation 【Article Roundup】: SVD, Sora, Latte, VideoCrafter1/2, DiT...

  • [【arXiv 2024】MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions](#【arXiv 2024】MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions)
  • [【CVPR 2024】VBench: Comprehensive Benchmark Suite for Video Generative Models](#【CVPR 2024】VBench: Comprehensive Benchmark Suite for Video Generative Models)
  • [【arXiv 2024】T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation](#【arXiv 2024】T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation)
  • [【arXiv 2024】Latte: Latent Diffusion Transformer for Video Generation](#【arXiv 2024】Latte: Latent Diffusion Transformer for Video Generation)
  • [【arXiv 2024】xxx](#【arXiv 2024】xxx)

Datasets

【arXiv 2024】MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions

Authors: Xuan Ju, Yiming Gao, Zhaoyang Zhang, Ziyang Yuan, Xintao Wang, Ailing Zeng, Yu Xiong, Qiang Xu, Ying Shan
Abstract: Sora's high motion intensity and long, consistent videos have significantly impacted the field of video generation, attracting unprecedented attention. However, existing publicly available datasets are inadequate for generating Sora-like videos, as they mainly contain short videos with low motion intensity and brief captions. To address these issues, we propose MiraData, a high-quality video dataset that surpasses previous ones in video duration, caption detail, motion strength, and visual quality. We curate MiraData from diverse, manually selected sources and meticulously process the data to obtain semantically consistent clips. GPT-4V is employed to annotate structured captions, providing detailed descriptions from four different perspectives along with a summarized dense caption. To better assess temporal consistency and motion intensity in video generation, we introduce MiraBench, which enhances existing benchmarks by adding 3D consistency and tracking-based motion strength metrics. MiraBench includes 150 evaluation prompts and 17 metrics covering temporal consistency, motion strength, 3D consistency, visual quality, text-video alignment, and distribution similarity. To demonstrate the utility and effectiveness of MiraData, we conduct experiments using our DiT-based video generation model, MiraDiT. The experimental results on MiraBench demonstrate the superiority of MiraData, especially in motion strength.
【Paper】 > 【Github_Code】 > 【Project】 > 【Chinese Commentary, to be continued】
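
To make the structured-caption idea concrete, here is a minimal sketch of what a MiraData-style record could look like in Python. The field names (`main_object`, `background`, `style`, `camera_motion`, `dense_caption`) are illustrative assumptions, not the dataset's actual schema; consult the official release for the real format.

```python
from dataclasses import dataclass

# Hypothetical field names for illustration only; the actual caption
# schema is defined in the MiraData release.
@dataclass
class MiraDataClip:
    video_path: str
    duration_s: float      # MiraData emphasizes long clip durations
    main_object: str       # perspective: main subject description
    background: str        # perspective: background / scene
    style: str             # perspective: visual style
    camera_motion: str     # perspective: camera movement
    dense_caption: str     # summarized dense caption

clip = MiraDataClip(
    video_path="clips/0001.mp4",
    duration_s=72.5,
    main_object="a red kayak cutting through whitewater rapids",
    background="a forested river canyon at dusk",
    style="cinematic, shallow depth of field",
    camera_motion="slow aerial tracking shot",
    dense_caption="An aerial camera slowly tracks a red kayak down rapids ...",
)
print(clip.dense_caption)
```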



Metrics

【CVPR 2024】VBench: Comprehensive Benchmark Suite for Video Generative Models

Authors: Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, Yaohui Wang, Xinyuan Chen, Limin Wang, Dahua Lin, Yu Qiao, Ziwei Liu
Abstract: Video generation has witnessed significant advancements, yet evaluating these models remains a challenge. A comprehensive evaluation benchmark for video generation is indispensable for two reasons: 1) Existing metrics do not fully align with human perceptions; 2) An ideal evaluation system should provide insights to inform future developments of video generation. To this end, we present VBench, a comprehensive benchmark suite that dissects "video generation quality" into specific, hierarchical, and disentangled dimensions, each with tailored prompts and evaluation methods. VBench has three appealing properties: 1) Comprehensive Dimensions: VBench comprises 16 dimensions in video generation (e.g., subject identity inconsistency, motion smoothness, temporal flickering, and spatial relationship, etc.). The fine-grained evaluation metrics reveal individual models' strengths and weaknesses. 2) Human Alignment: We also provide a dataset of human preference annotations to validate our benchmark's alignment with human perception, for each evaluation dimension respectively. 3) Valuable Insights: We look into current models' ability across various evaluation dimensions and various content types. We also investigate the gaps between video and image generation models. We will open-source VBench, including all prompts, evaluation methods, generated videos, and human preference annotations, and will also include more video generation models in VBench to drive forward the field of video generation.
【Paper】 > 【Github_Code】 > 【Project】 > 【Chinese Commentary】
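
As a rough illustration of one of VBench's dimensions, the sketch below computes a toy temporal-flickering score from mean absolute frame differences (higher means steadier). This is a stand-in for the idea only, not VBench's released metric implementation.

```python
import numpy as np

def temporal_flicker_score(frames: np.ndarray) -> float:
    """Toy temporal-flickering score for a (T, H, W, C) uint8 clip:
    mean absolute difference between consecutive frames, mapped so
    that 1.0 means perfectly static. Illustrative only -- not
    VBench's released implementation."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return 1.0 - float(diffs.mean()) / 255.0  # higher = less flicker

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(16, 64, 64, 3), dtype=np.uint8)
static = np.repeat(noisy[:1], 16, axis=0)
print(temporal_flicker_score(noisy))   # ~0.67 for i.i.d. uniform noise
print(temporal_flicker_score(static))  # exactly 1.0 for a static clip
```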

【arXiv 2024】T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation

Authors: Kaiyue Sun, Kaiyi Huang, Xian Liu, Yue Wu, Zihan Xu, Zhenguo Li, Xihui Liu
Abstract: Text-to-video (T2V) generation models have advanced significantly, yet their ability to compose different objects, attributes, actions, and motions into a video remains unexplored. Previous text-to-video benchmarks also neglect this important ability in their evaluations. In this work, we conduct the first systematic study on compositional text-to-video generation. We propose T2V-CompBench, the first benchmark tailored for compositional text-to-video generation. T2V-CompBench encompasses diverse aspects of compositionality, including consistent attribute binding, dynamic attribute binding, spatial relationships, motion binding, action binding, object interactions, and generative numeracy. We further carefully design MLLM-based, detection-based, and tracking-based evaluation metrics, which better reflect compositional text-to-video generation quality across the seven proposed categories with 700 text prompts. The effectiveness of the proposed metrics is verified by correlation with human evaluations. We also benchmark various text-to-video generative models and conduct in-depth analysis across different models and different compositional categories. We find that compositional text-to-video generation is highly challenging for current models, and we hope that our attempt will shed light on future research in this direction.
【Paper】 > 【Github_Code】 > 【Project】 > 【Chinese Commentary】
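
As an illustration of what a detection-based metric could look like, the sketch below scores generative numeracy as the fraction of frames in which a detector finds exactly the prompted number of objects. `detect_objects` here is a hypothetical stand-in for any per-frame object detector; the benchmark's actual metric definitions live in its official repository.

```python
from typing import Callable, List, Sequence

import numpy as np

def numeracy_score(
    frames: Sequence[np.ndarray],
    target_class: str,
    target_count: int,
    detect_objects: Callable[[np.ndarray, str], List[tuple]],
) -> float:
    """Fraction of frames whose detected instance count of
    `target_class` matches `target_count` exactly."""
    if not frames:
        return 0.0
    hits = sum(
        len(detect_objects(frame, target_class)) == target_count
        for frame in frames
    )
    return hits / len(frames)

# Toy usage with a stub detector that always "finds" three boxes.
stub = lambda frame, cls: [(0, 0, 8, 8)] * 3
frames = [np.zeros((64, 64, 3), dtype=np.uint8)] * 4
print(numeracy_score(frames, "dog", 3, stub))  # 1.0
```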


Models

【arXiv 2024】Latte: Latent Diffusion Transformer for Video Generation

Authors: Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, Yu Qiao
Abstract: We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model the video distribution in the latent space. In order to model the substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video generation (T2V) task, where it achieves results comparable to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
【Paper】 > 【Github_Code】 > 【Project】 > 【Chinese Commentary, to be continued】
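
Below is a minimal sketch of the factorization idea, in the spirit of Latte's first variant (alternating spatial and temporal attention over latent tokens). The layer choices and shapes here are illustrative assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

class FactorizedSTBlock(nn.Module):
    """Sketch of a factorized spatio-temporal Transformer block:
    spatial attention within each frame, then temporal attention
    across frames at each spatial location. Illustrative only."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm_s = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)
        self.attn_s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_t = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, S, D) latent tokens -- T frames, S spatial tokens each.
        B, T, S, D = x.shape
        # Spatial attention: tokens within each frame attend to each other.
        xs = self.norm_s(x.reshape(B * T, S, D))
        x = x + self.attn_s(xs, xs, xs, need_weights=False)[0].reshape(B, T, S, D)
        # Temporal attention: each spatial location attends across frames.
        xt = self.norm_t(x.permute(0, 2, 1, 3).reshape(B * S, T, D))
        out = self.attn_t(xt, xt, xt, need_weights=False)[0]
        return x + out.reshape(B, S, T, D).permute(0, 2, 1, 3)

block = FactorizedSTBlock(dim=64)
tokens = torch.randn(2, 8, 16, 64)  # 8 frames, 16 spatial tokens, dim 64
print(block(tokens).shape)          # torch.Size([2, 8, 16, 64])
```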

【arXiv 2024】xxx

Authors:
Abstract:
【Paper】 > 【Github_Code】 > 【Project】 > 【Chinese Commentary, to be continued】

