Return arguments from function calling with OpenAI API when streaming?


Background:

I've made a simple OpenAI API example with function calling. I'm only using function calling to format the response; I'm not calling multiple functions or any external APIs.

When I don't stream the response, I can return the function arguments, which is exactly the data I need.

In my Next.js route handler:

import OpenAI from "openai";
// Assumption: simpleJsonSchema.json (shown below) sits next to this route file.
import simpleJsonSchema from "./simpleJsonSchema.json";

export async function POST(request: Request) {
  try {
    const openai = new OpenAI({
      apiKey: process.env["OPENAI_API_KEY"],
    });
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      // stream: true,
      messages: [
        {
          role: "user",
          content: "Give me 5 questions and answers for a pub quiz",
        },
      ],
      tools: [
        {
          type: "function",
          function: {
            name: "get_questions_and_answers",
            description: "Get questions and answers",
            parameters: simpleJsonSchema,
          },
        },
      ],
      tool_choice: {
        type: "function",
        function: { name: "get_questions_and_answers" },
      },
    });
    return Response.json(
       JSON.parse(
         response.choices[0].message.tool_calls?.[0].function.arguments || "",
       ),
    );
  } catch (serverError) {
    console.error({ serverError });
    throw new Error();
  }
}

simpleJsonSchema.json:

{
  "type": "object",
  "properties": {
    "getQuestions": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "Question": {"type": "string"},
          "Answer": {"type": "string"}
        },
        "required": ["Question", "Answer"]
      }
    }
  },
  "required": ["getQuestions"]
}

Response from the API:

{"getQuestions":[{"Question":"What is the capital of Australia?","Answer":"Canberra"},{"Question":"Who wrote 'To Kill a Mockingbird'?","Answer":"Harper Lee"},{"Question":"What is the highest peak in the world?","Answer":"Mount Everest"},{"Question":"Who is known as the 'Father of Computers'?","Answer":"Charles Babbage"},{"Question":"What is the largest ocean in the world?","Answer":"Pacific Ocean"}]}

This is fine when developing locally; however, when deployed to Vercel the request sometimes times out. I've tried to add streaming, as this is the recommended solution:

const response = await openai.chat.completions.create({
  model: "gpt-4",
  stream: true,
  messages: [
    {
      role: "user",
      content: "Give me 5 questions and answers for a pub quiz",
    },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "get_questions_and_answers",
        description: "Get questions and answers",
        parameters: simpleJsonSchema,
      },
    },
  ],
  tool_choice: {
    type: "function",
    function: { name: "get_questions_and_answers" },
  },
});

// OpenAIStream and StreamingTextResponse come from the Vercel AI SDK ("ai" package).
const stream = OpenAIStream(response);
return new StreamingTextResponse(stream);

However, now the response contains a lot of unnecessary data, and when I try to JSON.parse it on the client I get errors.

Response from the API:

{"tool_calls":[ {"id": "call_IhxvzkZ5EsmZpHc6tOznTmzb", "type": "function", "function": {"name": "get_questions_and_answers", "arguments": "{\n  \"getQuestions\": [\n    {\n      \"Question\": \"Question 1\",\n      \"Answer\": \"Answer 1\"\n    },\n    {\n      \"Question\": \"Question 2\",\n      \"Answer\": \"Answer 2\"\n    },\n    {\n      \"Question\": \"Question 3\",\n      \"Answer\": \"Answer 3\"\n    },\n    {\n      \"Question\": \"Question 4\",\n      \"Answer\": \"Answer 4\"\n    },\n    {\n      \"Question\": \"Question 5\",\n      \"Answer\": \"Answer 5\"\n    }\n  ]\n}"}}

As far as I can see, the docs only cover using useChat, but I have some particular requirements, so I need to handle the fetching and form state myself (roughly along the lines of the sketch after the link):

https://sdk.vercel.ai/docs/api-reference/use-chat
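
For reference, manual handling of the stream on the client looks roughly like this. A minimal sketch, not the exact code from the repo: the "/api/quiz" path is assumed, and accumulatedText is the same variable the answer below works with.

// Minimal sketch: fetch the streaming route and accumulate the decoded chunks
// into a single string that can later be JSON.parsed.
async function fetchQuizStream(): Promise<string> {
  // "/api/quiz" is an assumed path; use wherever the POST handler above is mounted.
  const res = await fetch("/api/quiz", { method: "POST" });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  let accumulatedText = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    accumulatedText += decoder.decode(value, { stream: true });
  }
  return accumulatedText; // this is the string that later fails JSON.parse
}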

Why am I getting invalid JSON?


Here is a repository which reproduces the issue:


https://github.com/jameschetwood/openai-function-calling

Solution:

This is the response you are getting:

{"tool_calls":[ {"id": "call_HRxqlP3yzeHsoN43tMyZjMlr", "type": "function", "function": {"name": "get_questions_and_answers", "arguments": "{\n  \"getQuestions\": [\n    {\n      \"Question\": \"What is the capital city of France?\",\n      \"Answer\": \"Paris\"\n    },\n    {\n      \"Question\": \"Who painted the Mona Lisa?\",\n      \"Answer\": \"Leonardo da Vinci\"\n    },\n    {\n      \"Question\": \"What is the largest planet in our solar system?\",\n      \"Answer\": \"Jupiter\"\n    },\n    {\n      \"Question\": \"What is the national flower of England?\",\n      \"Answer\": \"Rose\"\n    },\n    {\n      \"Question\": \"Which country is famous for its tulips?\",\n      \"Answer\": \"Netherlands\"\n    }\n  ]\n}"}}

I pasted the response into JSON Editor Online to auto-correct the JSON, and all it does is append "]}". For some reason OpenAI is not sending a complete JSON response, so you have to add it yourself:

accumulatedText += "]}";

Then the response parses correctly.

This fix is very specific to the current behaviour, though. If OpenAI updates its response API, it might start sending the JSON correctly, so a better approach is to parse inside a try/catch and only apply the fix when parsing fails:

try {
  const parsed = JSON.parse(accumulatedText);
  console.log({ parsed });
} catch (error) {
  // Handle each specific failure case as needed; here the payload is
  // missing its closing "]}", so append it.
  accumulatedText += "]}";
  console.log("corrected accumulatedText in catch block", accumulatedText);
}
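
Putting this together with the client-side sketch from the question section: once the stream has finished and accumulatedText holds the full payload, repair it if needed and then pull the quiz data out of the first tool call's arguments string. Again a sketch; it assumes the payload is cut off exactly as shown above.

// Sketch: repair the truncated payload, then extract the actual arguments,
// which are themselves a JSON-encoded string.
let parsed: { tool_calls: { function: { arguments: string } }[] };
try {
  parsed = JSON.parse(accumulatedText);
} catch {
  // The streamed payload above is missing its closing "]}".
  parsed = JSON.parse(accumulatedText + "]}");
}
const quiz = JSON.parse(parsed.tool_calls[0].function.arguments);
console.log(quiz.getQuestions);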