How to handle the response from the OpenAI Text-To-Speech API in Node.js?


Problem background:

Here's my code:

```typescript
import axios from 'axios';

const speechUrl = 'https://api.openai.com/v1/audio/speech';

const headers = {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`
};

async function voiceGenerator(text) {
    console.log('voiceGenerator is triggered');
    console.log('text: ', text);
    const body = {
        "model": "tts-1",
        "input": text,
        "voice": "alloy",
        "response_format": "mp3",
        "speed": 0.9
    };

    return axios.post(speechUrl, body, { headers: headers })
    .then((res) => {
        if (res.status === 200 || res.status === 204) {
            // res.data = Buffer.from(res.data, 'binary');
            return res.data;
        } else {
            console.log('res: ', res);
            throw res;
        }
    })
    .catch((err) => {
        console.error('OpenAI API failed, error: ', err);
        throw err;
    });
}
```

My question is: how do I convert what I receive into an MP3 buffer and store it? I don't know exactly what I'm receiving. All I know is that the `Content-Type` is `audio/mpeg` and the `Transfer-Encoding` is chunked.

I can't use the OpenAI SDK because it keeps throwing errors no matter what, so I had to call the API directly here. By the way, Postman can get the file just by calling the same endpoint.
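(For reference, in case the SDK issue gets sorted out later: the equivalent request with the official `openai` Node package is normally written along the lines below. This is only a sketch based on the SDK's documented `audio.speech.create` method, not code from the original question.)

```typescript
import fs from 'node:fs/promises';
import OpenAI from 'openai';

// Assumes OPENAI_API_KEY is set in the environment.
const openai = new OpenAI();

async function sdkVoiceGenerator(text: string) {
    // audio.speech.create resolves to a fetch-style Response whose body is the audio.
    const response = await openai.audio.speech.create({
        model: 'tts-1',
        voice: 'alloy',
        input: text,
        response_format: 'mp3',
        speed: 0.9
    });

    // Read the raw bytes, wrap them in a Node.js Buffer, and save them as an MP3 file.
    const buffer = Buffer.from(await response.arrayBuffer());
    await fs.writeFile('speech.mp3', buffer);
    return buffer;
}
```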

Solution:

```typescript
// speechUrl and headers are the same as above; the key change is responseType: 'arraybuffer'.
async function voiceGenerator(text) {
    console.log('voiceGenerator is triggered');
    console.log('text: ', text);
    const body = {
        "model": "tts-1",
        "input": text,
        "voice": "alloy",
        "response_format": "mp3",
        "speed": 0.9
    };

    return axios.post(speechUrl, body, { headers: headers, responseType: 'arraybuffer' })
    .then((res) => {
        if (res.status === 200 || res.status === 204) {
            // With responseType: 'arraybuffer', res.data holds the raw audio bytes;
            // Buffer.from wraps them in a Node.js Buffer.
            const buffer = Buffer.from(res.data);

            return buffer;
        } else {
            console.log('res: ', res);
            throw res;
        }
    })
    .catch((err) => {
        console.error('OpenAI API failed, error: ', err);
        throw err;
    });
}
```

This is the solution I reached. It turns out that by adding `responseType: 'arraybuffer'` to the axios request, the response body comes back as raw binary data instead of a decoded string, and you can wrap it in a Node.js Buffer afterwards.
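To actually store it, the Buffer returned by `voiceGenerator` can be written straight to an `.mp3` file with Node's `fs` module. A minimal usage sketch (the `saveSpeech` helper, file name, and call site are illustrative and not part of the original code):

```typescript
import fs from 'node:fs/promises';

// Hypothetical helper: generate speech and persist the returned Buffer as an MP3 file.
async function saveSpeech(text: string, outputPath = 'speech.mp3') {
    const buffer = await voiceGenerator(text); // Buffer produced by the function above
    await fs.writeFile(outputPath, buffer);    // write the raw MP3 bytes as-is
    console.log(`Saved ${buffer.length} bytes to ${outputPath}`);
}

saveSpeech('Hello from the OpenAI text-to-speech API!');
```

Since the response is sent with `Transfer-Encoding: chunked`, another option is to request `responseType: 'stream'` instead and pipe `res.data` into `fs.createWriteStream('speech.mp3')`, which avoids holding the whole file in memory; the `arraybuffer` approach above is simpler when you also need the Buffer in code.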
