Getting Started with CosyVoice, an Open-Source Speech Synthesis Foundation Model

1 Model Overview

CosyVoice is a family of next-generation generative speech synthesis models developed by the Tongyi Lab of Alibaba DAMO Academy. Its core goal is to deeply integrate text understanding with speech generation through large-model techniques, producing highly human-like synthesized speech. The series includes the original CosyVoice and its upgrade, CosyVoice 2.0, which differ significantly in architecture, performance, and target applications. Key breakthroughs include:

  • A MOS score of 5.53, approaching human-level naturalness;

  • First-packet latency as low as 150 ms, roughly 60% lower than conventional pipelines;

  • Support for multiple languages and dialects (Chinese, English, Japanese, Korean, Cantonese, Sichuanese, etc.), including natural synthesis of mixed Chinese-English sentences;

  • Fine-grained generation controls, such as emotion control and vocal-event insertion (e.g. [laughter]).

2 Model Variants by Application Scenario

| Model | Core capability | Use cases | Technical notes |
|-------------------------|-------------------|-----------------------|-----------------------------------------|
| CosyVoice-300M | Zero-shot voice cloning, cross-lingual generation | Personalized voice cloning, cross-language dubbing (e.g. Chinese → English) | Needs only ~3 s of reference audio; supports 5 languages; no preset voices, the user supplies a sample |
| CosyVoice-300M-Instruct | Fine-grained emotion/prosody control (rich-text instructions) | Emotional narration (ads, audiobooks), fine tone adjustment | Accepts natural-language instructions (e.g. "cheerful tone") and rich-text tags (e.g. <laugh>) |
| CosyVoice-300M-SFT | Preset-voice synthesis (no sample needed) | Quick generation with fixed voices (courseware, navigation prompts) | 7 built-in pretrained voices (Chinese/English/Japanese/Korean/Cantonese, male and female); no cloning sample required |
| CosyVoice2-0.5B | Multilingual streaming synthesis, low-latency real-time response | Live streaming, real-time conversational agents, bidirectional voice interaction | 0.5B parameters; bidirectional streaming synthesis (first-packet latency ≤150 ms); broad language support |

Users can pick whichever model best matches their business needs.
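The selection guidance in the table can be sketched as a tiny lookup helper. Note that the scenario keys below are my own labels for the table rows, not part of any CosyVoice API:

```python
# Map a use-case label to the model variant suggested by the table above.
# Scenario names are illustrative; only the model names come from the table.
MODEL_BY_SCENARIO = {
    'voice_cloning': 'CosyVoice-300M',          # zero-shot / cross-lingual cloning
    'emotional_narration': 'CosyVoice-300M-Instruct',  # instruction-driven emotion control
    'preset_voices': 'CosyVoice-300M-SFT',      # built-in speakers, no sample needed
    'realtime_streaming': 'CosyVoice2-0.5B',    # low-latency streaming synthesis
}

def pick_model(scenario: str) -> str:
    # Default to CosyVoice2-0.5B, the most general-purpose variant.
    return MODEL_BY_SCENARIO.get(scenario, 'CosyVoice2-0.5B')

print(pick_model('preset_voices'))
```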

3 Demos for Each Scenario

3.1 CosyVoice-300M

```python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio


# zero_shot usage: clone the voice from a short 16 kHz prompt recording
def inference_zero_shot_300M(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_zero_shot(tts_text, '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
        torchaudio.save('asset/test_data/zero_shot3_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# cross_lingual usage: speak the target text in the prompt speaker's voice across languages
def inference_cross_lingual_300M(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/cross_lingual_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_cross_lingual(tts_text, prompt_speech_16k, stream=False)):
        torchaudio.save('asset/test_data/cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# voice conversion (vc) usage: re-voice the source speech with the prompt speaker
def inference_vc_300M(cosyvoice):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    source_speech_16k = load_wav('asset/cross_lingual_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
        torchaudio.save('asset/test_data/vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)


if __name__ == '__main__':
    cosyvoice = CosyVoice('hub/models/iic/CosyVoice-300M')  # or change to pretrained_models/CosyVoice-300M-25Hz for 25 Hz inference
    inference_zero_shot_300M(cosyvoice, '今天是个好日子,我们一起去旅游吧')
    inference_cross_lingual_300M(cosyvoice, '今天是个好日子,我们一起去旅游吧')
    inference_vc_300M(cosyvoice)
```

3.2 CosyVoice-300M-Instruct

```python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio


# instruct usage; supported rich-text tags include <laughter></laughter>, <strong></strong>, [laughter], [breath]
def inference_instruct(cosyvoice, tts_text):
    for i, j in enumerate(cosyvoice.inference_instruct(tts_text, '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
        torchaudio.save('asset/cosyvoice-instruct/instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

if __name__ == '__main__':
    # inference_instruct requires the Instruct variant, not the base CosyVoice-300M
    cosyvoice = CosyVoice('/hub/models/iic/CosyVoice-300M-Instruct')
    inference_instruct(cosyvoice, '在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。')
```

3.3 CosyVoice-300M-SFT

```python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio


# sft usage: synthesize with a built-in pretrained voice, no cloning sample needed
def inference_sft(cosyvoice, tts_text):
    print(cosyvoice.list_available_spks())  # list the preset speaker IDs
    # change stream=True for chunked streaming inference
    for i, j in enumerate(cosyvoice.inference_sft(tts_text, '中文女', stream=False)):
        torchaudio.save('asset/cosyvoice-sft/sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

if __name__ == '__main__':
    cosyvoice = CosyVoice('/hub/models/iic/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
    inference_sft(cosyvoice, '今天是个好日子,我们一起去旅游吧')
```

3.4 CosyVoice2-0.5B

```python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio


# zero_shot usage
def inference_zero_shot_05B(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_zero_shot(tts_text, '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
        torchaudio.save('asset/CosyVoice2-05B/zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# fine-grained control; for the supported tags, check cosyvoice/tokenizer/tokenizer.py#L248
def inference_cross_lingual_05B(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_cross_lingual(tts_text, prompt_speech_16k, stream=False)):
        torchaudio.save('asset/CosyVoice2-05B/fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# instruct usage (inference_instruct2 takes a natural-language instruction plus a prompt audio)
def inference_instruct2_05B(cosyvoice, tts_text):
    prompt_speech_16k = load_wav('asset/zero_shot_prompt.wav', 16000)
    for i, j in enumerate(cosyvoice.inference_instruct2(tts_text, '用四川话说这句话', prompt_speech_16k, stream=False)):
        torchaudio.save('asset/CosyVoice2-05B/instruct1_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)


if __name__ == '__main__':
    cosyvoice = CosyVoice2('/hub/models/iic/CosyVoice2-0.5B', load_jit=False, load_trt=False, fp16=False)
    tts_text = '收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。'
    # inference_zero_shot_05B(cosyvoice, tts_text)
    # inference_cross_lingual_05B(cosyvoice, tts_text)
    inference_instruct2_05B(cosyvoice, tts_text)
```

The demos above are simple, but the results are already quite good in practice. You can use the HTTP interface that ships with the CosyVoice framework, or build a customized service yourself with FastAPI.
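As a sketch of the HTTP route, the snippet below builds a request for such a service using only the standard library. The endpoint path and JSON field names (`/inference_sft`, `tts_text`, `spk_id`) are assumptions for illustration; match them to the server you actually deploy (see the `runtime` directory in the CosyVoice repository):

```python
import json
import urllib.request

def build_tts_request(base_url: str, text: str, spk_id: str) -> urllib.request.Request:
    """Build a POST request for a hypothetical CosyVoice SFT endpoint.

    The path and field names are illustrative, not a documented API.
    """
    payload = json.dumps({'tts_text': text, 'spk_id': spk_id}).encode('utf-8')
    return urllib.request.Request(
        base_url + '/inference_sft',
        data=payload,
        headers={'Content-Type': 'application/json'},
        method='POST',
    )

# Sending the request would return synthesized audio bytes from the server:
# audio = urllib.request.urlopen(req).read()
req = build_tts_request('http://127.0.0.1:50000', '今天是个好日子', '中文女')
```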

CosyVoice repository: https://github.com/FunAudioLLM/CosyVoice.git

CosyVoice2-0.5B model page on ModelScope: CosyVoice语音生成大模型2.0-0.5B

