Finetuning with Together AI — The Easiest SFT Tutorial

This streamlined tutorial guides you through the finetuning process with Together AI. While the official tutorial splits the process across different pages, this guide consolidates everything into a single, easy-to-follow resource.

Note:

  • All commands should be entered in the terminal.
  • The minimum training cost is $5, even with just one entry in the training data.

1. Authentication

Start by setting your Together AI API key:

export TOGETHER_API_KEY=<your_api_key>
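
If you prefer to work from Python, the Together SDK picks up the same key automatically. A minimal sketch, assuming the together package is installed (pip install together):

import os
from together import Together

# The SDK reads TOGETHER_API_KEY from the environment by default;
# passing it explicitly, as below, also works.
client = Together(api_key=os.environ["TOGETHER_API_KEY"])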

2. Prepare Your Dataset

Construct your dataset in the required data format. You can use either the Conversational Data or the Instruction Data format; in both cases, save the data as a .jsonl file with one JSON object per line.

Conversational Data Example

{
  "messages": [
    {"role": "system", "content": "This is a system prompt."},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thank you! How can I help you?"},
    {"role": "user", "content": "Can you explain machine learning?"},
    {"role": "assistant", "content": "Machine learning is..."}
  ]
}

Instruction Data Example

{"prompt": "...", "completion": "..."}
{"prompt": "...", "completion": "..."}

3. Upload Your Dataset and Obtain File ID

Upload your dataset using the following command:

together files upload <file_name>

Replace <file_name> with the name of your dataset file (e.g., dataset.jsonl).

Upon successful upload, you will receive a response similar to:

{
    "id": "file-123456",
    "object": "file",
    "created_at": 1734574470,
    "purpose": "fine-tune",
    "filename": "filename.jsonl",
    "bytes": 0,
    "line_count": 0,
    "processed": false,
    "FileType": "jsonl"
}

Action: Note down the id (e.g., file-123456) for use in the next steps.
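
If you prefer the Python SDK to the CLI, the upload can also be done in code. This is a sketch that assumes the SDK exposes a files.upload method returning the file ID; check the current SDK reference for the exact signature.

from together import Together

client = Together()  # picks up TOGETHER_API_KEY from the environment

# Upload the dataset; the returned object should carry the file ID
# (e.g. "file-123456") needed when creating the fine-tuning job.
uploaded = client.files.upload(file="dataset.jsonl")
print(uploaded.id)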

4. Select a Model to Fine-Tune

A list of all the models available for fine-tuning can be found on the Fine-tuning Models page at docs.together.ai.

Use the name listed under the "Model String for API" column, for example: "meta-llama/Llama-3.3-70B-Instruct-Reference".

5. Create a Finetuning Task

Initiate the finetuning process with the following command:

together fine-tuning create --training-file file-123456 --model meta-llama/Llama-3.3-70B-Instruct-Reference

Replace:

  • file-123456 with your actual file ID.
  • meta-llama/Llama-3.3-70B-Instruct-Reference with your chosen model string.

If the submission is successful, you will see a response similar to:

Submitting a fine-tuning job with the following parameters:
FinetuneRequest(
    training_file='file-123456',
    validation_file='',
    model='meta-llama/Llama-3.3-70B-Instruct-Reference',
    n_epochs=1,
    learning_rate=1e-05,
    lr_scheduler=FinetuneLRScheduler(lr_scheduler_type='linear', lr_scheduler_args=FinetuneLinearLRSchedulerArgs(min_lr_ratio=0.0)),
    warmup_ratio=0.0,
    max_grad_norm=1.0,
    weight_decay=0.0,
    n_checkpoints=1,
    n_evals=0,
    batch_size=32,
    suffix=None,
    wandb_key=None,
    wandb_base_url=None,
    wandb_project_name=None,
    wandb_name=None,
    training_type=LoRATrainingType(type='Lora', lora_r=8, lora_alpha=16, lora_dropout=0.0, lora_trainable_modules='all-linear'),
    train_on_inputs='auto'
)
Successfully submitted a fine-tuning job ft-c1cce2b0-1a90-47e4-8e84-46f76d2c3dcb at 12/19/2024, 10:16:38

Action: Note down the fine-tuning job ID (e.g., ft-c1cce2b0-1a90-47e4-8e84-46f76d2c3dcb).
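
The same job can also be submitted from Python. A sketch, assuming the SDK's fine_tuning.create method mirrors the CLI flags shown above; the file ID and model string are the ones from the previous steps.

from together import Together

client = Together()

# Submit the fine-tuning job with the same file ID and model string as the CLI call.
job = client.fine_tuning.create(
    training_file="file-123456",
    model="meta-llama/Llama-3.3-70B-Instruct-Reference",
    n_epochs=1,
    learning_rate=1e-5,
)
print(job.id)  # e.g. ft-c1cce2b0-...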

6. Monitor and Use Your Fine-Tuned Model

Once the finetuning job is complete, you can use your fine-tuned model. First confirm that the job has finished, then call the model as in the Python example that follows.
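
A minimal status-check sketch, assuming the SDK exposes fine_tuning.retrieve and that the returned object carries status and output_name fields (names may vary slightly across SDK versions):

from together import Together

client = Together()

# Look up the job with the ID noted in step 5.
job = client.fine_tuning.retrieve("ft-c1cce2b0-1a90-47e4-8e84-46f76d2c3dcb")
print(job.status)       # should read "completed" when the job has finished
print(job.output_name)  # the model name to use for inference afterwards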

Example in Python

from together import Together

client = Together()

response = client.chat.completions.create(
    model="check your model name in your together AI dashboard",
    messages=[{"role": "user", "content": "Could you give me a like?"}],
)
print(response.choices[0].message.content)