How to fine-tune OpenAI's models

OpenAI now supports fine-tuning on its models, including gpt-3.5-turbo and gpt-4. This post documents the process of fine-tuning OpenAI's gpt-3.5-turbo-1106.

1. Prepare the dataset

The first step in fine-tuning any AI model is preparing the training dataset. For our example, we will use a CSV file named translate.csv.

In this example, translate.csv contains Chinese source text from the gaming domain together with its translations: the origin column holds the original text and the target column holds the translation.
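For illustration, translate.csv might look like this (the two rows below are hypothetical placeholders, not real entries from the dataset):

```csv
origin,target
攻击力提升10%,Attack increased by 10%
每次普攻附带额外魔法伤害,Each basic attack deals extra Magical Damage
```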

We then convert this CSV file into JSONL (JSON Lines), a format better suited for training the model. The Python script below reads the CSV file and converts it to JSONL:

```python
import json

import pandas as pd

# System prompt used for every training example:
# "Translate the content between >>> and <<< into the language given between [[ and ]]"
DEFAULT_SYSTEM_PROMPT = '把在>>>和<<<中的内容翻译成[[和]]中的语言 '


def get_example(language, origin, target):
    # Build one chat-format training example in the structure expected by the fine-tuning API
    return {
        "messages": [
            {"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
            {"role": "user", "content": f'[[{language}]], >>>{origin}<<<'},
            {"role": "assistant", "content": target},
        ]
    }


if __name__ == "__main__":
    df = pd.read_csv("translate.csv")
    with open("train.jsonl", "w", encoding="utf8") as f:
        for _, row in df.iterrows():
            origin = row["origin"]
            target = row["target"]
            print(origin)
            example = get_example('en', origin, target)
            example_str = json.dumps(example, ensure_ascii=False)
            f.write(example_str + "\n")
```

Each line of the generated train.jsonl is one complete training example.
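Built from the first hypothetical row above, a single line of train.jsonl would look like this (one JSON object per line):

```json
{"messages": [{"role": "system", "content": "把在>>>和<<<中的内容翻译成[[和]]中的语言 "}, {"role": "user", "content": "[[en]], >>>攻击力提升10%<<<"}, {"role": "assistant", "content": "Attack increased by 10%"}]}
```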

2. Run the fine-tuning

Once the training dataset is ready, we can fine-tune our model with OpenAI's API:

Step 1: Install the openai package

Note: the examples in this article require openai version > 1.1.0; if your installed version is older than that, please upgrade.

```bash
pip install openai
```
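To confirm which version you have, a quick check (assuming the package imports cleanly):

```python
import openai

# Should print a version above 1.1.0 for the examples in this article to work
print(openai.__version__)
```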

Step 2: Run the fine-tuning script

```python
import json
import os
from time import sleep

from openai import OpenAI

os.environ['OPENAI_API_KEY'] = "sk-7Vl54m90xxxxxxxxxxxxxxxxxxxxxxx"
client = OpenAI()
# client = OpenAI(api_key="sk-7Vl54m90xxxxxxxxxxxxx")  # or pass the key in as a parameter

model_name = 'gpt-3.5-turbo-1106'


# Poll the fine-tuning job until it finishes
def wait_until_done(job_id):
    events = {}
    while True:
        response = client.fine_tuning.jobs.list_events(fine_tuning_job_id=job_id, limit=10)
        print('fine tuning, waiting for ...')
        # collect training metrics (step -> train_loss) from metric events
        for event in response.data:
            data = getattr(event, "data", None)
            if isinstance(data, dict) and "step" in data:
                print(data)
                events[data["step"]] = data.get("train_loss")
        # the completion event message contains the new model name
        messages = [it.message for it in response.data]
        for m in messages:
            if m.startswith("New fine-tuned model created: "):
                return m.split("created: ")[1], events
        sleep(10)


if __name__ == "__main__":
    # upload the training file
    response = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
    uploaded_id = response.id
    print('uploaded_id=', uploaded_id)
    print("Dataset is uploaded")
    print("Sleep 10 seconds...")
    sleep(10)  # wait until the uploaded dataset has been processed

    # create the fine-tuning job
    response = client.fine_tuning.jobs.create(training_file=uploaded_id, model=model_name)
    ft_job_id = response.id
    print("Fine-tune job is started, job_id = ", ft_job_id)

    new_model_name, events = wait_until_done(ft_job_id)
    with open("new_model_name.txt", "w") as fp:
        fp.write(new_model_name)
    print("Fine-tune job is success, new model name = ", new_model_name)
```

After the script runs successfully, you will see output similar to this:

```
uploaded_id= file-3KzFOCxKqfZTZe89m1I40wgA
Dataset is uploaded
Sleep 10 seconds...
Fine-tune job is started, job_id = ftjob-PiqWqQ6BDPbB9hCELN2B6MbL
fine tuning, waiting for ...
fine tuning, waiting for ...
fine tuning, waiting for ...
fine tuning, waiting for ...
fine tuning, waiting for ...
fine tuning, waiting for ...
......
Fine-tune job is success, new model name = ft:gpt-3.5-turbo-1106:personal::8LqhuNgA
```
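The loop above detects completion by watching for the "New fine-tuned model created: " event message. As an alternative, here is a sketch that relies on the job object's status and fine_tuned_model fields and polls the job itself:

```python
from time import sleep


def wait_by_status(client, job_id):
    # Poll the fine-tuning job until it reaches a terminal status
    while True:
        job = client.fine_tuning.jobs.retrieve(job_id)
        print("status =", job.status)
        if job.status == "succeeded":
            return job.fine_tuned_model  # e.g. ft:gpt-3.5-turbo-1106:personal::xxxx
        if job.status in ("failed", "cancelled"):
            raise RuntimeError(f"fine-tuning ended with status: {job.status}")
        sleep(10)
```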

While the job is running, you can also open platform.openai.com/finetune on the OpenAI platform to watch its progress.

Step 3: Note down the fine-tuned model name

After the job succeeds, you will see the line Fine-tune job is success, new model name = ft:gpt-3.5-turbo-1106:personal::8LqhuNgA in the output. Make a note of this new model name.

If you accidentally closed the terminal, you can also look the name up on the platform page.
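You can also fetch it programmatically by listing your recent fine-tuning jobs with the same client (a small sketch):

```python
# List recent fine-tuning jobs and the model each one produced
for job in client.fine_tuning.jobs.list(limit=10):
    print(job.id, job.status, job.fine_tuned_model)
```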

3. Use the fine-tuned model

After successfully fine-tuning our model, we can now use it to generate responses from user input:

```python
import os

from openai import OpenAI

os.environ['OPENAI_API_KEY'] = "sk-7Vl54m90xxxxxxxxxxxxxxxxxxxxxxx"
client = OpenAI()
# client = OpenAI(api_key="sk-7Vl54m90xxxxxxxxxxxxx")  # or pass the key in as a parameter

response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-1106:personal::8LqhuNgA",  # the new model fine-tuned above
    messages=[
        {"role": "system", "content": "你是一个语言专家,把在>>>和<<<中的内容翻译成[[和]]中的语言 "},
        {"role": "user", "content": "[[en]],>>>选择目标友方英雄开始施法,一段时间后传送至目标位置 施法期间右方英雄获得护盾值,并和{Hero_149}获得伤害减免 传送完成后{Hero_149}增加移动速度,自身周围一定范围内右方英雄获得物理防御和魔法防御<<<"}
    ]
)

print(response.choices[0].message.content)
```

You get a result like this:

```
selects a teammate and starts channeling, then teleports to the target after a while. While channeling, he grants a shield to heroes to the right and damage reduction to himself. After teleporting, his Movement Speed is increased, and heroes in range to the right gain Physical Defense and Magical Defense.
```
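To reuse the fine-tuned model on other strings, you can wrap the call in a small helper that rebuilds the same prompt layout used in the training data (a sketch; the translate function name and its defaults are my own):

```python
def translate(client, text, language="en",
              model="ft:gpt-3.5-turbo-1106:personal::8LqhuNgA"):
    # Same [[language]] / >>>text<<< layout as in train.jsonl
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "把在>>>和<<<中的内容翻译成[[和]]中的语言 "},
            {"role": "user", "content": f"[[{language}]], >>>{text}<<<"},
        ],
    )
    return response.choices[0].message.content


# Example usage:
# print(translate(client, "攻击力提升10%"))
```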