Return arguments from function calling with OpenAI API when streaming?

Background:

I've made a simple OpenAI API example with function calling. I'm only using function calling to format the response; I'm not calling multiple functions or any external APIs.

When I don't stream the response I can return the function arguments, which is the data that I need.

In my NextJS route handler:

import OpenAI from "openai";
// The JSON schema shown below; the import path is assumed
import simpleJsonSchema from "./simpleJsonSchema.json";

export async function POST(request: Request) {
  try {
    const openai = new OpenAI({
      apiKey: process.env["OPENAI_API_KEY"],
    });
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      // stream: true,
      messages: [
        {
          role: "user",
          content: "Give me 5 questions and answers for a pub quiz",
        },
      ],
      tools: [
        {
          type: "function",
          function: {
            name: "get_questions_and_answers",
            description: "Get questions and answers",
            parameters: simpleJsonSchema,
          },
        },
      ],
      // Force the model to call this function so the arguments always follow the schema
      tool_choice: {
        type: "function",
        function: { name: "get_questions_and_answers" },
      },
    });
    // The structured data comes back as a JSON string in the function-call arguments
    return Response.json(
      JSON.parse(
        response.choices[0].message.tool_calls?.[0].function.arguments || "",
      ),
    );
  } catch (serverError) {
    console.error({ serverError });
    throw new Error("Failed to generate questions");
  }
}

simpleJsonSchema.json:

{
  "type": "object",
  "properties": {
    "getQuestions": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "Question": {"type": "string"},
          "Answer": {"type": "string"}
        },
        "required": ["Question", "Answer"]
      }
    }
  },
  "required": ["getQuestions"]
}
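
For reference, the parsed function arguments should match a TypeScript shape along these lines (a sketch for illustration only; these type names are not part of my code):

// Hypothetical types mirroring simpleJsonSchema above
type QuestionAndAnswer = {
  Question: string;
  Answer: string;
};

type GetQuestionsArguments = {
  // Matches the "getQuestions" array defined in simpleJsonSchema
  getQuestions: QuestionAndAnswer[];
};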

Response from API:

{"getQuestions":[{"Question":"What is the capital of Australia?","Answer":"Canberra"},{"Question":"Who wrote 'To Kill a Mockingbird'?","Answer":"Harper Lee"},{"Question":"What is the highest peak in the world?","Answer":"Mount Everest"},{"Question":"Who is known as the 'Father of Computers'?","Answer":"Charles Babbage"},{"Question":"What is the largest ocean in the world?","Answer":"Pacific Ocean"}]}

This is fine when developing locally; however, when deployed to Vercel the request sometimes times out. I've tried adding streaming, as this is the recommended solution:

const response = await openai.chat.completions.create({
  model: "gpt-4",
  stream: true,
  messages: [
    {
      role: "user",
      content: "Give me 5 questions and answers for a pub quiz",
    },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "get_questions_and_answers",
        description: "Get questions and answers",
        parameters: simpleJsonSchema,
      },
    },
  ],
  tool_choice: {
    type: "function",
    function: { name: "get_questions_and_answers" },
  },
});

// OpenAIStream and StreamingTextResponse are imported from the Vercel AI SDK ("ai" package)
const stream = OpenAIStream(response);
return new StreamingTextResponse(stream);

However, now the response has a lot of unnecessary data, and when I try to JSON.parse it on the client I get errors.

Response from API:

{"tool_calls":[ {"id": "call_IhxvzkZ5EsmZpHc6tOznTmzb", "type": "function", "function": {"name": "get_questions_and_answers", "arguments": "{\n  \"getQuestions\": [\n    {\n      \"Question\": \"Question 1\",\n      \"Answer\": \"Answer 1\"\n    },\n    {\n      \"Question\": \"Question 2\",\n      \"Answer\": \"Answer 2\"\n    },\n    {\n      \"Question\": \"Question 3\",\n      \"Answer\": \"Answer 3\"\n    },\n    {\n      \"Question\": \"Question 4\",\n      \"Answer\": \"Answer 4\"\n    },\n    {\n      \"Question\": \"Question 5\",\n      \"Answer\": \"Answer 5\"\n    }\n  ]\n}"}}

As far as I can see, the docs only cover using useChat, but I have some particular requirements, so I need to handle the fetching and form state myself:

https://sdk.vercel.ai/docs/api-reference/use-chat
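
To illustrate, this is roughly the kind of client-side reader I mean (a simplified sketch, not the exact code from the repository; the route path is assumed). It accumulates the streamed text and then tries to parse it, which is where the error occurs:

async function fetchQuiz() {
  // Route path is assumed for illustration
  const res = await fetch("/api/quiz", { method: "POST" });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  let accumulatedText = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    accumulatedText += decoder.decode(value, { stream: true });
  }

  // JSON.parse throws here because the streamed text is not valid JSON
  return JSON.parse(accumulatedText);
}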

Why am I getting invalid JSON?

Here is a repository which reproduces the issue:

https://github.com/jameschetwood/openai-function-calling

Solution:

This is the response you are getting:

{"tool_calls":[ {"id": "call_HRxqlP3yzeHsoN43tMyZjMlr", "type": "function", "function": {"name": "get_questions_and_answers", "arguments": "{\n  \"getQuestions\": [\n    {\n      \"Question\": \"What is the capital city of France?\",\n      \"Answer\": \"Paris\"\n    },\n    {\n      \"Question\": \"Who painted the Mona Lisa?\",\n      \"Answer\": \"Leonardo da Vinci\"\n    },\n    {\n      \"Question\": \"What is the largest planet in our solar system?\",\n      \"Answer\": \"Jupiter\"\n    },\n    {\n      \"Question\": \"What is the national flower of England?\",\n      \"Answer\": \"Rose\"\n    },\n    {\n      \"Question\": \"Which country is famous for its tulips?\",\n      \"Answer\": \"Netherlands\"\n    }\n  ]\n}"}}

I pasted the response into JSON Editor Online and let it auto-correct the JSON; all it adds is "]}". For some reason OpenAI is not sending a complete JSON response over the stream, so you have to append it yourself:

accumulatedText += "]}";

Then the response parses correctly.

This fix is very specific, though. If OpenAI updates its response API it might start sending the JSON correctly, in which case blindly appending "]}" would itself break the parse. A better approach is to do the parsing in a try/catch and only apply the correction when parsing fails:

try {
  const parsed = JSON.parse(accumulatedText);
  console.log({ parsed });
} catch (error) {
  // Handle each specific error case here; this one is the missing closing "]}"
  accumulatedText += "]}";
  console.log("corrected accumulatedText in catch block", accumulatedText);
}
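
Putting it together (a sketch, assuming the fetch-and-accumulate loop from the question): parse the accumulated text with the fallback, then pull the actual quiz data out of the tool call, since the arguments field is itself a JSON string in the response shown above.

function parseQuizResponse(accumulatedText: string) {
  let parsed;
  try {
    parsed = JSON.parse(accumulatedText);
  } catch (error) {
    // Fall back to appending the missing closing "]}" described above
    parsed = JSON.parse(accumulatedText + "]}");
  }
  // The function arguments are a nested JSON string (see the response above)
  return JSON.parse(parsed.tool_calls[0].function.arguments);
}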