An Introduction to CosyVoice, an Open-Source Large Speech Synthesis Model

1 Model Overview

CosyVoice is a family of next-generation generative speech synthesis models developed by the Tongyi Lab at Alibaba DAMO Academy. Its core goal is to deeply fuse text understanding with speech generation through large-model techniques, producing highly human-like synthesized speech. The series includes the original CosyVoice and its upgraded version CosyVoice 2.0, which differ significantly in architecture, performance, and target scenarios. Key breakthroughs include:

  • An MOS score of 5.53, approaching human-level naturalness;

  • First-packet latency as low as 150 ms, about 60% lower than traditional pipelines;

  • Support for multiple languages and dialects (Chinese, English, Japanese, Korean, Cantonese, Sichuanese, etc.), including natural synthesis of mixed Chinese-English sentences;

  • Fine-grained generation capabilities, integrating emotion control and sound-effect insertion (e.g. [laughter]).

2 Model Variants for Different Application Scenarios

| Model | Core Capability | Use Cases | Technical Highlights |
|-------|-----------------|-----------|----------------------|
| CosyVoice-300M | Zero-shot voice cloning, cross-lingual generation | Personalized voice cloning, cross-language dubbing (e.g. Chinese → English) | Needs only ~3 s of reference audio; supports 5 languages; no built-in voices, user supplies a sample |
| CosyVoice-300M-Instruct | Fine-grained emotion/prosody control via rich-text instructions | Emotional dubbing (ads, audiobooks), fine tone adjustment | Natural-language instructions (e.g. "cheerful tone") and rich-text tags (e.g. <laugh>) |
| CosyVoice-300M-SFT | Preset-voice synthesis (no sample needed) | Quick generation with fixed voices (courseware, navigation prompts) | 7 built-in pretrained voices (Chinese/English/Japanese/Korean/Cantonese, male and female); no cloning sample required |
| CosyVoice2-0.5B | Multilingual streaming synthesis, low-latency real-time response | Live streaming, real-time customer-service dialogue, bidirectional voice interaction | 0.5B parameters; bidirectional streaming synthesis (first-packet latency ≤ 150 ms); broad language support |

You can choose the model that matches your particular business requirements.
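As a rough illustration of the table above, the scenario-to-model mapping can be captured in a small helper. The model IDs follow the ModelScope naming used later in this post; the use-case keys are informal labels of my own, not an official API:

```python
# Illustrative helper: map a business scenario to a CosyVoice model ID.
# The scenario keys are informal labels, not part of the CosyVoice API.
MODEL_FOR_USE_CASE = {
    'voice_cloning':      'iic/CosyVoice-300M',           # zero-shot cloning from ~3 s of audio
    'emotional_dubbing':  'iic/CosyVoice-300M-Instruct',  # fine-grained emotion/prosody control
    'preset_voices':      'iic/CosyVoice-300M-SFT',       # built-in voices, no sample needed
    'realtime_streaming': 'iic/CosyVoice2-0.5B',          # low-latency bidirectional streaming
}

def choose_cosyvoice_model(use_case: str) -> str:
    """Return the ModelScope model ID recommended for a given scenario."""
    try:
        return MODEL_FOR_USE_CASE[use_case]
    except KeyError:
        raise ValueError(
            f'unknown use case: {use_case!r}; choose from {sorted(MODEL_FOR_USE_CASE)}')
```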

3 Demos for Different Scenarios

3.1 CosyVoice-300M

```python
import sys
sys.path.append('third_party/Matcha-TTS')  # Matcha-TTS ships as a submodule of the CosyVoice repo
from cosyvoice.cli.cosyvoice import CosyVoice
from cosyvoice.utils.file_utils import load_wav
import torchaudio


# zero-shot usage: clone the voice from a short reference clip
def inference_zero_shot_300M(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_zero_shot(tts_text, '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
        torchaudio.save('asset/test_data/zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# cross-lingual usage: synthesize text in a language different from the prompt audio
def inference_cross_lingual_300M(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/cross_lingual_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_cross_lingual(tts_text, prompt_speech_16k, stream=False)):
        torchaudio.save('asset/test_data/cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# voice conversion usage: re-render the source speech in the prompt speaker's voice
def inference_vc_300M(cosyvoice):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    source_speech_16k = load_wav('asset/cross_lingual_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
        torchaudio.save('asset/test_data/vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)


if __name__ == '__main__':
    cosyvoice = CosyVoice('hub/models/iic/CosyVoice-300M')  # local model path; or pretrained_models/CosyVoice-300M-25Hz for 25Hz inference
    inference_zero_shot_300M(cosyvoice, '今天是个好日子,我们一起去旅游吧')
    inference_cross_lingual_300M(cosyvoice, '今天是个好日子,我们一起去旅游吧')
```

3.2 CosyVoice-300M-Instruct

```python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice
import torchaudio


# instruct usage: supported rich-text tags include <laughter></laughter>, <strong></strong>, [laughter], [breath]
def inference_instruct(cosyvoice, tts_text):
    for i, j in enumerate(cosyvoice.inference_instruct(tts_text, '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
        torchaudio.save('asset/cosyvoice-instruct/instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

if __name__ == '__main__':
    # inference_instruct requires the Instruct variant, not the base CosyVoice-300M
    cosyvoice = CosyVoice('/hub/models/iic/CosyVoice-300M-Instruct')
    inference_instruct(cosyvoice, '在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。')
```
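Since the rich-text tags the Instruct model accepts are plain markup inside the input string, they can also be added programmatically. A small sketch; only the tag syntax comes from the demo above, the helper functions themselves are hypothetical:

```python
# Illustrative helpers for building Instruct-style rich text.
# Tag syntax ([breath], <strong>...</strong>) follows the comment in the
# demo above; these helper functions are my own, not part of CosyVoice.
def emphasize(text: str, word: str) -> str:
    """Wrap the first occurrence of a word in <strong> tags so the model stresses it."""
    return text.replace(word, f'<strong>{word}</strong>', 1)

def add_breath_pauses(sentences):
    """Join sentences with a [breath] token between them."""
    return '[breath]'.join(sentences)
```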

3.3 CosyVoice-300M-SFT

```python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice
import torchaudio


# sft usage: synthesize with a built-in pretrained voice, no reference audio needed
def inference_sft(cosyvoice, tts_text):
    print(cosyvoice.list_available_spks())  # list the built-in speaker names
    # change stream=True for chunked streaming inference
    for i, j in enumerate(cosyvoice.inference_sft(tts_text, '中文女', stream=False)):
        torchaudio.save('asset/cosyvoice-sft/sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

if __name__ == '__main__':
    cosyvoice = CosyVoice('/hub/models/iic/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
    inference_sft(cosyvoice, '今天是个好日子,我们一起去旅游吧')
```

3.4 CosyVoice2-0.5B

```python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio


# zero-shot usage
def inference_zero_shot_05B(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_zero_shot(tts_text, '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
        torchaudio.save('asset/CosyVoice2-05B/zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# fine-grained control; for the supported control tags, see cosyvoice/tokenizer/tokenizer.py#L248
def inference_cross_lingual_05B(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_cross_lingual(tts_text, prompt_speech_16k, stream=False)):
        torchaudio.save('asset/CosyVoice2-05B/fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# instruct usage: natural-language instruction, here "say this sentence in Sichuan dialect"
def inference_instruct2_05B(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_instruct2(tts_text, '用四川话说这句话', prompt_speech_16k, stream=False)):
        torchaudio.save('asset/CosyVoice2-05B/instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)


if __name__ == '__main__':
    cosyvoice = CosyVoice2('/hub/models/iic/CosyVoice2-0.5B', load_jit=False, load_trt=False, fp16=False)
    tts_text = '收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。'
    # inference_zero_shot_05B(cosyvoice, tts_text)
    # inference_cross_lingual_05B(cosyvoice, tts_text)
    inference_instruct2_05B(cosyvoice, tts_text)
```

The demos above are deliberately simple, but in practice the results are already quite good. You can use the HTTP interface provided by the CosyVoice framework, or build your own customized service with FastAPI.
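If you do wrap CosyVoice in your own service, each `tts_speech` chunk from a `stream=True` call can be flushed to the client or to disk as it arrives rather than buffered whole. A minimal stdlib-only sketch of incremental WAV writing; it assumes the chunks have already been converted to 16-bit mono PCM bytes (the tensor-to-PCM conversion is left out):

```python
import wave

def write_wav_incrementally(path: str, pcm_chunks, sample_rate: int = 22050):
    """Append successive 16-bit mono PCM chunks to a single WAV file."""
    with wave.open(path, 'wb') as wf:
        wf.setnchannels(1)        # mono output
        wf.setsampwidth(2)        # 16-bit samples
        wf.setframerate(sample_rate)
        for chunk in pcm_chunks:  # e.g. chunks yielded by a streaming inference call
            wf.writeframes(chunk)

# usage sketch with fake silent chunks standing in for model output:
if __name__ == '__main__':
    fake_chunks = [b'\x00\x00' * 1024 for _ in range(3)]
    write_wav_incrementally('stream_demo.wav', fake_chunks)
```

Because `pcm_chunks` can be any iterable, the same function works with a generator, so audio starts hitting the file before synthesis finishes.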

CosyVoice repository: https://github.com/FunAudioLLM/CosyVoice.git

CosyVoice2-0.5B on ModelScope: CosyVoice Speech Generation Large Model 2.0-0.5B
