OpenAI GPT-3 API: What is the difference between davinci and text-davinci-003?

Question background:

I'm testing the different OpenAI models, and I noticed that not all of them are developed or trained enough to give a reliable response.

The models I tested are the following:

model_engine = "text-davinci-003"
model_engine = "davinci" 
model_engine = "curie" 
model_engine = "babbage" 
model_engine = "ada" 
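
For context, each of these identifiers is passed as the model parameter of the completions endpoint. The sketch below uses the legacy openai Python SDK (pre-1.0 interface); the API key placeholder, prompt, and parameter values are illustrative assumptions rather than anything taken from the question.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; read it from an environment variable in practice

# Any of the engine names listed above can be used here.
model_engine = "text-davinci-003"

response = openai.Completion.create(
    model=model_engine,
    prompt="Explain what a context window is in one sentence.",
    max_tokens=100,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())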

I need to understand the difference between davinci and text-davinci-003, and how to improve the responses so that they match what you get when using ChatGPT.

Solution:

TL;DR

  • text-davinci-003 is newer and more capable than davinci

  • text-davinci-003 supports a longer context window than davinci

  • text-davinci-003 was trained on a more recent dataset than davinci

  • text-davinci-003 is cheaper than davinci

  • text-davinci-003 is not available for fine-tuning, while davinci is

Capabilities

As stated in the official OpenAI article:

While both davinci and text-davinci-003 are powerful models, they differ in a few key ways.

text-davinci-003 is the newer and more capable model, designed specifically for instruction-following tasks. This enables it to respond concisely and more accurately - even in zero-shot scenarios, i.e. without the need for any examples given in the prompt. davinci, on the other hand, can be fine-tuned on a specific task, which can make it very effective if you have access to at least a few hundred training examples.

Additionally, text-davinci-003 supports a longer context window (max prompt+completion length) than davinci - 4097 tokens compared to davinci's 2049.

Finally, text-davinci-003 was trained on a more recent dataset, containing data up to June 2021. These updates, along with its support for Inserting text, make text-davinci-003 a particularly versatile and powerful model we recommend for most use-cases.
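
The 4097-token limit applies to the prompt and completion together, so it helps to know how many tokens a prompt already uses before sending it. A minimal sketch, assuming the tiktoken library (not mentioned in the answer) is available:

import tiktoken

# text-davinci-003 resolves to the p50k_base encoding.
encoding = tiktoken.encoding_for_model("text-davinci-003")

prompt = "Summarize the difference between davinci and text-davinci-003."
prompt_tokens = len(encoding.encode(prompt))

context_window = 4097  # max prompt + completion tokens for text-davinci-003
print(f"Prompt uses {prompt_tokens} tokens; at most {context_window - prompt_tokens} are left for the completion.")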

Use text-davinci-003 because the other models you mentioned in your question are less capable.

ChatGPT itself runs on gpt-3.5-turbo and, for ChatGPT Plus subscribers, gpt-4. Both models are also available through the API (gpt-4 access may be limited), so to get responses similar to what you see in ChatGPT, use gpt-3.5-turbo or gpt-4 instead; both are even more capable than text-davinci-003.
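
Note that gpt-3.5-turbo and gpt-4 are chat models, so they are called through the chat completions endpoint rather than the completions endpoint used for text-davinci-003. A minimal sketch with the legacy openai Python SDK; the system message and question are illustrative:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or "gpt-4" if you have API access to it
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the difference between davinci and text-davinci-003?"},
    ],
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])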

Costs

text-davinci-003 is cheaper than davinci, as stated on the official OpenAI website:

MODEL              USAGE
davinci            $0.1200 / 1K tokens
text-davinci-003   $0.0200 / 1K tokens
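
To put those rates into perspective, here is a rough back-of-the-envelope calculation at the prices quoted above (the 100K-token volume is just an example):

# Cost = tokens / 1000 * price per 1K tokens, using the rates listed above.
def cost_usd(tokens: int, price_per_1k: float) -> float:
    return tokens / 1000 * price_per_1k

tokens = 100_000  # example: 100K prompt + completion tokens
print(f"davinci:          ${cost_usd(tokens, 0.12):.2f}")   # $12.00
print(f"text-davinci-003: ${cost_usd(tokens, 0.02):.2f}")   # $2.00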

Fine-tuning availability

text-davinci-003 is not available for fine-tuning, while davinci is, as stated in the official OpenAI documentation:

Fine-tuning is currently only available for the following base models: davinci, curie, babbage, and ada. These are the original models that do not have any instruction following training (like text-davinci-003 does for example).

MODEL              FINE-TUNING AVAILABILITY   TRAINING
davinci            Yes                        $0.0300 / 1K tokens
text-davinci-003   No                         N/A
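
For completeness, this is roughly what a fine-tuning job on the base davinci model looked like with the legacy openai Python SDK and its legacy fine-tunes endpoint; the JSONL file name is an assumption, and the file must contain prompt/completion training pairs:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Upload a JSONL file of {"prompt": ..., "completion": ...} training examples.
training_file = openai.File.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tune job on the base davinci model (legacy fine-tuning endpoint).
fine_tune = openai.FineTune.create(
    training_file=training_file["id"],
    model="davinci",
)

print(fine_tune["id"])  # poll this job id until the fine-tuned model is ready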