OpenMMLab Open-Sources Amphion: A Comprehensive Audio Generation Project

Project repository: https://github.com/open-mmlab/Amphion

TTS: Text-to-Speech

Amphion achieves state-of-the-art performance on text-to-speech (TTS) when compared with existing open-source repositories. It supports the following models and architectures:

  • FastSpeech2: A non-autoregressive TTS architecture that utilizes feed-forward Transformer blocks.

  • VITS: An end-to-end TTS architecture that utilizes a conditional variational autoencoder with adversarial learning.

  • VALL-E: A zero-shot TTS architecture that uses a neural codec language model with discrete codes.

  • NaturalSpeech2: An architecture for TTS that utilizes a latent diffusion model to generate natural-sounding voices.
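The key idea behind FastSpeech2's non-autoregressive design is a length regulator that expands each phoneme's hidden states by its predicted duration, so the decoder can emit all mel frames in parallel instead of one at a time. A minimal NumPy sketch of that operation (illustrative only, not Amphion's implementation):

```python
import numpy as np

def length_regulator(phoneme_hidden, durations):
    """Expand each phoneme's hidden vector by its predicted duration
    so the sequence length matches the target mel-spectrogram frames."""
    # phoneme_hidden: (num_phonemes, hidden_dim), durations: (num_phonemes,)
    return np.repeat(phoneme_hidden, durations, axis=0)

# Toy example: 3 phonemes with hidden size 2, lasting 2 / 1 / 3 frames
hidden = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
durations = np.array([2, 1, 3])
frames = length_regulator(hidden, durations)
print(frames.shape)  # (6, 2): 2 + 1 + 3 frames
```

Because the output length is known up front, every frame can then be decoded in a single parallel pass, which is what makes this family of models fast at inference.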

SVC: Singing Voice Conversion

  • Amphion supports multiple content-based features from various pretrained models, including WeNet, Whisper, and ContentVec. Their specific roles in SVC have been investigated in our NeurIPS 2023 workshop paper.

  • Amphion implements several state-of-the-art model architectures, including diffusion-, transformer-, VAE-, and flow-based models. The diffusion-based architecture uses a bidirectional dilated CNN as its backbone and supports several sampling algorithms, such as DDPM, DDIM, and PNDM. Additionally, it supports single-step inference based on the Consistency Model.
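The sampling algorithms listed above (DDPM, DDIM, PNDM) share the same basic structure: starting from pure Gaussian noise, they repeatedly apply a learned noise predictor to step toward clean acoustic features. A toy deterministic DDIM-style loop with a stand-in noise predictor and a linear beta schedule (the names and schedule here are illustrative, not Amphion's code):

```python
import numpy as np

def ddim_sample(noise_pred, shape, num_steps=50, rng=None):
    """Deterministic (eta = 0) DDIM sampling loop over a linear beta schedule."""
    if rng is None:
        rng = np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, num_steps)
    alphas_cum = np.cumprod(1.0 - betas)          # \bar{alpha}_t
    x = rng.standard_normal(shape)                # start from pure Gaussian noise
    for t in range(num_steps - 1, -1, -1):
        eps = noise_pred(x, t)                    # model predicts the added noise
        a_t = alphas_cum[t]
        a_prev = alphas_cum[t - 1] if t > 0 else 1.0
        # Predict the clean sample, then step deterministically toward it
        x0 = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0 + np.sqrt(1.0 - a_prev) * eps
    return x

# Stand-in "model" that always predicts zero noise, just to exercise the loop;
# a real SVC model would condition on content features and predict eps(x, t).
out = ddim_sample(lambda x, t: np.zeros_like(x), shape=(80,))
print(out.shape)  # (80,)
```

DDPM adds fresh noise at each step (stochastic), while DDIM's eta = 0 variant shown here is deterministic, which is also the property that consistency models exploit for single-step inference.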

TTA: Text-to-Audio

Amphion supports TTA with a latent diffusion model, designed along the lines of AudioLDM, Make-an-Audio, and AUDIT. It is also the official implementation of the text-to-audio generation part of our NeurIPS 2023 paper.

Vocoder

  • Amphion supports various widely-used neural vocoders, including:

    • GAN-based vocoders: MelGAN, HiFi-GAN, NSF-HiFiGAN, BigVGAN, APNet.

    • Flow-based vocoders: WaveGlow.

    • Diffusion-based vocoders: Diffwave.

    • Auto-regressive based vocoders: WaveNet, WaveRNN.

  • Amphion provides the official implementation of the Multi-Scale Constant-Q Transform Discriminator. It can be used to enhance any GAN-based vocoder during training while keeping the inference stage (e.g., memory footprint and speed) unchanged.
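A common companion to GAN discriminators in vocoder training is a multi-resolution STFT loss, which compares generated and reference waveforms at several analysis resolutions so that neither transients nor steady tones are missed. A hand-rolled NumPy sketch of the idea (the resolutions and weighting are illustrative, not Amphion's implementation):

```python
import numpy as np

def stft_mag(x, fft_size, hop):
    """Magnitude STFT with a Hann window (trailing partial frame dropped)."""
    window = np.hanning(fft_size)
    n_frames = 1 + (len(x) - fft_size) // hop
    frames = np.stack([x[i * hop: i * hop + fft_size] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def multi_res_stft_loss(pred, target,
                        resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Average spectral-convergence + log-magnitude L1 over several FFT sizes."""
    total = 0.0
    for fft_size, hop in resolutions:
        p = stft_mag(pred, fft_size, hop)
        t = stft_mag(target, fft_size, hop)
        sc = np.linalg.norm(t - p) / (np.linalg.norm(t) + 1e-8)
        log_l1 = np.mean(np.abs(np.log(t + 1e-8) - np.log(p + 1e-8)))
        total += sc + log_l1
    return total / len(resolutions)

time = np.linspace(0, 1, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 220.0 * time)        # 220 Hz sine as the "reference"
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(time.shape)
loss_same = multi_res_stft_loss(clean, clean)   # identical signals -> 0.0
loss_noisy = multi_res_stft_loss(noisy, clean)  # larger for degraded audio
```

The same multi-resolution comparison, applied as a metric rather than a loss, is what the MSTFT distance in the Evaluation section measures.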

Evaluation

Amphion provides a comprehensive objective evaluation of the generated audio. The evaluation metrics include:

  • F0 Modeling: F0 Pearson Coefficients, F0 Periodicity Root Mean Square Error, F0 Root Mean Square Error, Voiced/Unvoiced F1 Score, etc.

  • Energy Modeling: Energy Root Mean Square Error, Energy Pearson Coefficients, etc.

  • Intelligibility: Character/Word Error Rate, which can be calculated using Whisper and more.

  • Spectrogram Distortion: Frechet Audio Distance (FAD), Mel Cepstral Distortion (MCD), Multi-Resolution STFT Distance (MSTFT), Perceptual Evaluation of Speech Quality (PESQ), Short Time Objective Intelligibility (STOI), etc.

  • Speaker Similarity: Cosine similarity, which can be calculated using RawNet3, WeSpeaker, and more.
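Once the reference and predicted F0 contours are frame-aligned, the F0 metrics above reduce to simple statistics over voiced frames. An illustrative NumPy computation (assuming equal-length contours with 0 Hz marking unvoiced frames; not Amphion's evaluation code):

```python
import numpy as np

def f0_metrics(f0_ref, f0_pred):
    """F0 Pearson Coefficient (FPC), F0 RMSE over jointly voiced frames,
    and the Voiced/Unvoiced (V/UV) F1 score."""
    v_ref, v_pred = f0_ref > 0, f0_pred > 0
    both = v_ref & v_pred                         # frames voiced in both contours
    fpc = np.corrcoef(f0_ref[both], f0_pred[both])[0, 1]
    rmse = np.sqrt(np.mean((f0_ref[both] - f0_pred[both]) ** 2))
    tp = np.sum(v_ref & v_pred)                   # voiced frames correctly predicted
    precision = tp / max(np.sum(v_pred), 1)
    recall = tp / max(np.sum(v_ref), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return fpc, rmse, f1

ref = np.array([0.0, 220.0, 225.0, 230.0, 0.0, 110.0])     # 0 Hz = unvoiced
pred = np.array([0.0, 218.0, 226.0, 228.0, 112.0, 108.0])  # one spurious voiced frame
fpc, rmse, f1 = f0_metrics(ref, pred)
```

A full evaluation additionally has to align contours of unequal length (e.g., with dynamic time warping), which is omitted here for brevity.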

Datasets

Amphion unifies the data preprocessing of open-source datasets, including AudioCaps, LibriTTS, LJSpeech, M4Singer, Opencpop, OpenSinger, SVCC, VCTK, and more. The supported dataset list can be seen here (updating).

📀 Installation

git clone https://github.com/open-mmlab/Amphion.git
cd Amphion

# Install Python Environment
conda create --name amphion python=3.9.15
conda activate amphion

# Install Python Packages Dependencies
sh env.sh

🐍 Usage in Python

We detail the instructions for the different tasks in the following recipes:

  • Text-to-Speech (TTS)

  • Singing Voice Conversion (SVC)

  • Text-to-Audio (TTA)

  • Vocoder

  • Evaluation

🙏 Acknowledgements

  • ming024's FastSpeech2 and jaywalnut310's VITS for model architecture code.

  • lifeiteng's VALL-E for training pipeline and model architecture design.

  • WeNet, Whisper, ContentVec, and RawNet3 for pretrained models and inference code.

  • HiFi-GAN for GAN-based Vocoder's architecture design and training strategy.

  • Encodec for well-organized GAN Discriminator's architecture and basic blocks.

  • Latent Diffusion for model architecture design.

  • TensorFlowTTS for preparing the MFA tools.

©️ License

Amphion is under the MIT License. It is free for both research and commercial use cases.

📚 Citations

Stay tuned, coming soon!
