Finetuning with Together AI — The Easiest SFT Tutorial

This streamlined tutorial guides you through the finetuning process with Together AI. While the official tutorial splits the process across different pages, this guide consolidates everything into a single, easy-to-follow resource.

Note:

  • All commands should be entered in the terminal.
  • The minimum training cost is $5, even if the training data contains only one entry.

1. Authentication

Start by setting your Together AI API key:

export TOGETHER_API_KEY=<your_api_key>
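
If you work from Python later on, the SDK picks up this environment variable automatically. A minimal sketch, assuming the together Python package is installed (passing api_key explicitly is optional):

import os
from together import Together

# The client reads TOGETHER_API_KEY from the environment by default;
# passing it explicitly here is equivalent.
client = Together(api_key=os.environ["TOGETHER_API_KEY"])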

2. Prepare Your Dataset

Construct your dataset according to the required data format. You can use either Conversational Data or Instruction Data formats.

Conversational Data Example

{
  "messages": [
    {"role": "system", "content": "This is a system prompt."},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thank you! How can I help you?"},
    {"role": "user", "content": "Can you explain machine learning?"},
    {"role": "assistant", "content": "Machine learning is..."}
  ]
}

Instruction Data Example

{"prompt": "...", "completion": "..."}
{"prompt": "...", "completion": "..."}

3. Upload Your Dataset and Obtain File ID

Upload your dataset using the following command:

together files upload <file_name>

Replace <file_name> with the name of your dataset file (e.g., dataset.jsonl).

Upon successful upload, you will receive a response similar to:

{
    "id": "file-123456",
    "object": "file",
    "created_at": 1734574470,
    "purpose": "fine-tune",
    "filename": "filename.jsonl",
    "bytes": 0,
    "line_count": 0,
    "processed": false,
    "FileType": "jsonl"
}

Action: Note down the id (e.g., file-123456) for use in the next steps.
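
If you prefer to stay in Python, the SDK offers an equivalent upload call. A minimal sketch, assuming the current together SDK's client.files.upload method and a dataset file named dataset.jsonl:

from together import Together

client = Together()

# Upload the JSONL training file; the returned object carries the file ID.
uploaded = client.files.upload(file="dataset.jsonl")
print(uploaded.id)  # e.g. "file-123456" -- keep this for the next steps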

4. Select a Model to Fine-Tune

See the Fine-tuning Models page on docs.together.ai for a list of all the models available for fine-tuning.

Use the name listed under the "Model String for API" column, for example: "meta-llama/Llama-3.3-70B-Instruct-Reference".

5. Create a Finetuning Task

Initiate the finetuning process with the following command:

together fine-tuning create --training-file file-123456 --model meta-llama/Llama-3.3-70B-Instruct-Reference

Replace:

  • file-123456 with your actual file ID.
  • meta-llama/Llama-3.3-70B-Instruct-Reference with your chosen model string.

If the submission is successful, you will see a response similar to:

Submitting a fine-tuning job with the following parameters:
FinetuneRequest(
    training_file='file-123456',
    validation_file='',
    model='meta-llama/Llama-3.3-70B-Instruct-Reference',
    n_epochs=1,
    learning_rate=1e-05,
    lr_scheduler=FinetuneLRScheduler(lr_scheduler_type='linear', lr_scheduler_args=FinetuneLinearLRSchedulerArgs(min_lr_ratio=0.0)),
    warmup_ratio=0.0,
    max_grad_norm=1.0,
    weight_decay=0.0,
    n_checkpoints=1,
    n_evals=0,
    batch_size=32,
    suffix=None,
    wandb_key=None,
    wandb_base_url=None,
    wandb_project_name=None,
    wandb_name=None,
    training_type=LoRATrainingType(type='Lora', lora_r=8, lora_alpha=16, lora_dropout=0.0, lora_trainable_modules='all-linear'),
    train_on_inputs='auto'
)
Successfully submitted a fine-tuning job ft-c1cce2b0-1a90-47e4-8e84-46f76d2c3dcb at 12/19/2024, 10:16:38

Action: Note down the fine-tuning job ID (e.g., ft-c1cce2b0-1a90-47e4-8e84-46f76d2c3dcb).
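
The same job can be submitted from Python. A minimal sketch, assuming the SDK's client.fine_tuning.create method, the file ID from step 3, and the model string from step 4:

from together import Together

client = Together()

# Submit a fine-tuning job; unspecified parameters fall back to the defaults shown above.
job = client.fine_tuning.create(
    training_file="file-123456",                          # your actual file ID
    model="meta-llama/Llama-3.3-70B-Instruct-Reference",  # your chosen model string
)
print(job.id)  # e.g. "ft-c1cce2b0-..." -- keep this for monitoring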

6. Monitor and Use Your Fine-Tuned Model
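
You can poll the job until it finishes. A minimal sketch, assuming the SDK's client.fine_tuning.retrieve method and the job ID from step 5:

from together import Together

client = Together()

# Look up the job by ID; the status advances to "completed" when training is done.
job = client.fine_tuning.retrieve("ft-c1cce2b0-1a90-47e4-8e84-46f76d2c3dcb")
print(job.status)
# The resulting model's name also appears in your Together AI dashboard.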

Once the finetuning job is complete, you can use your fine-tuned model as follows:

Example in Python

from together import Together

client = Together()

response = client.chat.completions.create(
    model="check your model name in your together AI dashboard",
    messages=[{"role": "user", "content": "Could you give me a like?"}],
)
print(response.choices[0].message.content)