This post pulls together what I learned from a number of articles on fine-tuning models on a Mac, written up both as a summary and as a record for myself.
First, we need to prepare the following.
To keep the test fast, this post uses the Qwen2.5 0.5B Instruct model.
Prerequisites
- A Mac with Apple Silicon (I use an M4 Pro)
- Python 3.9 or later
- At least 16 GB of RAM (24 GB+ recommended)
Detailed steps
Install dependencies
bash
# Install the MLX packages
pip install mlx
pip install mlx-lm
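A quick sanity check that the install worked, before moving on (a minimal sketch, not part of the original workflow):
python
# Confirm MLX can see the Metal GPU and report the installed mlx-lm version
import importlib.metadata
import mlx.core as mx

print(mx.default_device())                   # should report the GPU on Apple Silicon
print(importlib.metadata.version("mlx-lm"))  # installed mlx-lm version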
Download and convert the model
- Create a working directory
shell
mkdir qwen2.5-0.5b-instruct && cd qwen2.5-0.5b-instruct
- Download the model
a. If your connection to Hugging Face is unreliable, you can download the model from ModelScope instead: open https://modelscope.cn/models/Qwen/Qwen2.5-0.5B-Instruct/files in a browser and download all files into the working directory.
b. Download with the Hugging Face CLI:
bash
# Install the dependency
pip install -U huggingface_hub
# Point at the mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com
# Download the model into the qwen2.5-0.5B directory
huggingface-cli download --resume-download Qwen/Qwen2.5-0.5B-Instruct --local-dir qwen2.5-0.5B
- Convert the model
shell
mlx_lm.convert --hf-path /Users/adam/Downloads/qwen2.5-0.5b-instruct --mlx-path ./mlx_qwen_small --dtype float16
Parameter notes (a Python equivalent is sketched after this list):
- --hf-path: path to the Hugging Face format model
- --mlx-path: where to save the converted MLX model
- --dtype: model precision; float16 reduces memory usage
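If you would rather drive the conversion from Python, mlx-lm also exposes a convert function; this is a hedged sketch that assumes its keyword arguments mirror the CLI flags above:
python
# Convert the downloaded Hugging Face model to MLX format from Python
from mlx_lm import convert

convert(
    hf_path="/Users/adam/Downloads/qwen2.5-0.5b-instruct",  # same path as in the CLI example
    mlx_path="./mlx_qwen_small",                            # output directory
    dtype="float16",                                        # reduce memory vs float32
)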
Prepare the training dataset
MLX supports three dataset formats:
Completion
json
{
"prompt": "What is the capital of France?",
"completion": "Paris."
}
Chat
json
{
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello."
},
{
"role": "assistant",
"content": "How can I assistant you today."
}
]
}
Text
json
{
"text": "This is an example for the model."
}
To keep things quick, this post uses the following data:
shell
# 1. Create the data directory
mkdir data
# 2. Create the training data file
cat > data/train.jsonl << 'EOL'
{"text": "<|im_start|>user\n你是谁?<|im_end|>\n<|im_start|>assistant\n我是大霸王,一个独属于的AI助手,很高兴能帮助你。<|im_end|>"}
{"text": "<|im_start|>user\n请介绍一下你自己<|im_end|>\n<|im_start|>assistant\n我是大霸王,一个独属于的AI助手,很高兴能帮助你。<|im_end|>"}
{"text": "<|im_start|>user\n你叫什么名字?<|im_end|>\n<|im_start|>assistant\n我是大霸王,一个独属于的AI助手,很高兴能帮助你。<|im_end|>"}
{"text": "<|im_start|>user\n你的名字是什么?<|im_end|>\n<|im_start|>assistant\n我是大霸王,一个独属于的AI助手,很高兴能帮助你。<|im_end|>"}
EOL
# 3. Create the validation data file
cat > data/valid.jsonl << 'EOL'
{"text": "<|im_start|>user\n 1+ 1 等于多少<|im_end|>\n<|im_start|>assistant\n 等于2。<|im_end|>"}
{"text": "<|im_start|>user\n 中国首都在哪里<|im_end|>\n<|im_start|>assistant\n 北京。<|im_end|>"}
{"text": "<|im_start|>user\n你是谁?<|im_end|>\n<|im_start|>assistant\n我是大霸王,一个独属于的AI助手,很高兴能帮助你。<|im_end|>"}
{"text": "<|im_start|>user\n地球是圆的吗?<|im_end|>\n<|im_start|>assistant\n是的。<|im_end|>"}
EOL
Data format requirements (a small validation script is sketched after this list):
- Must be JSONL (mind the escaping)
- One complete JSON object per line
- Each object must contain a text field
- Use a format the model understands; Qwen's chat format is:
user turn: <|im_start|>user\nquestion<|im_end|>
assistant turn: <|im_start|>assistant\nanswer<|im_end|>
- The data directory must contain both train.jsonl and valid.jsonl
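A few lines of Python can catch most formatting mistakes before training; this minimal sketch only checks the requirements listed above (valid JSON on every line, a text field, both files present):
python
# Validate data/train.jsonl and data/valid.jsonl against the requirements above
import json
from pathlib import Path

for name in ("train.jsonl", "valid.jsonl"):
    path = Path("data") / name
    for i, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
        if not line.strip():
            continue
        record = json.loads(line)  # raises if the line is not valid JSON
        assert "text" in record, f"{name}:{i} is missing the 'text' field"
    print(f"{name}: OK")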
Train the model
shell
mlx_lm.lora --model /Users/adam/Downloads/qwen2.5-0.5b-instruct --train --data ./data
Optional parameters
shell
--batch-size 1
--num-layers 4
--iters 300
--fine-tune-type lora
--learning-rate 1e-5
--steps-per-report 10
Parameter notes (the sketch after this list shows the optional flags combined into a single run):
- --model: path to the MLX-format model
- --data: training data directory; must contain train.jsonl and valid.jsonl
- --train: enable training mode
- --batch-size: batch size; keep it at 1-2 on a Mac. In my tests, large batches led to less efficient memory-access patterns.
- --num-layers: number of layers to apply LoRA to. The default is 16, so you can try 8 or 4; this reduces the memory needed for back-propagation, but with a large fine-tuning dataset it may also lower the quality of the fine-tuned model.
- --iters: number of training iterations; 300-1000 is a reasonable starting range
- --fine-tune-type: fine-tuning method, here LoRA
- --learning-rate: learning rate
- --steps-per-report: report training status every 10 steps
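To keep all of the optional flags in one place, the full invocation can be wrapped in a few lines of Python; this is just a sketch that shells out to the same mlx_lm.lora CLI with the values discussed above:
python
# Launch LoRA training with the optional flags from above; adjust the values to taste
import subprocess

cmd = [
    "mlx_lm.lora",
    "--model", "/Users/adam/Downloads/qwen2.5-0.5b-instruct",
    "--train",
    "--data", "./data",
    "--batch-size", "1",
    "--num-layers", "4",
    "--iters", "300",
    "--fine-tune-type", "lora",
    "--learning-rate", "1e-5",
    "--steps-per-report", "10",
]
subprocess.run(cmd, check=True)  # prints a training log like the one shown below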
My actual run and its training log (executed from inside the converted model directory):
shell
qwen2.5-0.5B mlx_lm.lora --model . --train --data ./data
Loading pretrained model
Loading datasets
Training
Trainable parameters: 0.109% (0.541M/494.033M)
Starting training..., iters: 1000
Iter 1: Val loss 6.547, Val took 0.132s
Iter 10: Train loss 4.668, Learning Rate 1.000e-05, It/sec 11.340, Tokens/sec 1304.115, Trained Tokens 1150, Peak mem 1.593 GB
Iter 20: Train loss 2.525, Learning Rate 1.000e-05, It/sec 12.599, Tokens/sec 1448.901, Trained Tokens 2300, Peak mem 1.593 GB
Iter 30: Train loss 1.444, Learning Rate 1.000e-05, It/sec 12.604, Tokens/sec 1449.440, Trained Tokens 3450, Peak mem 1.593 GB
Iter 40: Train loss 0.806, Learning Rate 1.000e-05, It/sec 11.786, Tokens/sec 1355.342, Trained Tokens 4600, Peak mem 1.593 GB
Iter 50: Train loss 0.429, Learning Rate 1.000e-05, It/sec 11.706, Tokens/sec 1346.242, Trained Tokens 5750, Peak mem 1.593 GB
Iter 60: Train loss 0.253, Learning Rate 1.000e-05, It/sec 11.667, Tokens/sec 1341.746, Trained Tokens 6900, Peak mem 1.593 GB
Iter 70: Train loss 0.152, Learning Rate 1.000e-05, It/sec 11.335, Tokens/sec 1303.491, Trained Tokens 8050, Peak mem 1.593 GB
Iter 80: Train loss 0.098, Learning Rate 1.000e-05, It/sec 11.649, Tokens/sec 1339.691, Trained Tokens 9200, Peak mem 1.593 GB
Iter 90: Train loss 0.078, Learning Rate 1.000e-05, It/sec 11.645, Tokens/sec 1339.119, Trained Tokens 10350, Peak mem 1.593 GB
Iter 100: Train loss 0.071, Learning Rate 1.000e-05, It/sec 11.666, Tokens/sec 1341.584, Trained Tokens 11500, Peak mem 1.593 GB
Iter 100: Saved adapter weights to adapters/adapters.safetensors and adapters/0000100_adapters.safetensors.
Iter 110: Train loss 0.068, Learning Rate 1.000e-05, It/sec 11.566, Tokens/sec 1330.089, Trained Tokens 12650, Peak mem 1.595 GB
Iter 120: Train loss 0.065, Learning Rate 1.000e-05, It/sec 11.667, Tokens/sec 1341.673, Trained Tokens 13800, Peak mem 1.595 GB
Iter 130: Train loss 0.062, Learning Rate 1.000e-05, It/sec 11.791, Tokens/sec 1355.925, Trained Tokens 14950, Peak mem 1.595 GB
Iter 140: Train loss 0.062, Learning Rate 1.000e-05, It/sec 11.773, Tokens/sec 1353.905, Trained Tokens 16100, Peak mem 1.595 GB
Iter 150: Train loss 0.061, Learning Rate 1.000e-05, It/sec 11.623, Tokens/sec 1336.608, Trained Tokens 17250, Peak mem 1.595 GB
Iter 160: Train loss 0.059, Learning Rate 1.000e-05, It/sec 11.675, Tokens/sec 1342.646, Trained Tokens 18400, Peak mem 1.595 GB
Iter 170: Train loss 0.059, Learning Rate 1.000e-05, It/sec 11.337, Tokens/sec 1303.766, Trained Tokens 19550, Peak mem 1.595 GB
Iter 180: Train loss 0.058, Learning Rate 1.000e-05, It/sec 11.423, Tokens/sec 1313.621, Trained Tokens 20700, Peak mem 1.595 GB
Iter 190: Train loss 0.058, Learning Rate 1.000e-05, It/sec 11.641, Tokens/sec 1338.693, Trained Tokens 21850, Peak mem 1.595 GB
Iter 200: Val loss 2.110, Val took 0.039s
Iter 200: Train loss 0.056, Learning Rate 1.000e-05, It/sec 104.703, Tokens/sec 12040.803, Trained Tokens 23000, Peak mem 1.595 GB
Iter 200: Saved adapter weights to adapters/adapters.safetensors and adapters/0000200_adapters.safetensors.
Iter 210: Train loss 0.056, Learning Rate 1.000e-05, It/sec 10.879, Tokens/sec 1251.113, Trained Tokens 24150, Peak mem 1.595 GB
Iter 220: Train loss 0.056, Learning Rate 1.000e-05, It/sec 10.942, Tokens/sec 1258.275, Trained Tokens 25300, Peak mem 1.595 GB
Iter 230: Train loss 0.055, Learning Rate 1.000e-05, It/sec 11.066, Tokens/sec 1272.559, Trained Tokens 26450, Peak mem 1.595 GB
Iter 240: Train loss 0.056, Learning Rate 1.000e-05, It/sec 10.945, Tokens/sec 1258.660, Trained Tokens 27600, Peak mem 1.595 GB
Iter 250: Train loss 0.056, Learning Rate 1.000e-05, It/sec 10.987, Tokens/sec 1263.526, Trained Tokens 28750, Peak mem 1.595 GB
Iter 260: Train loss 0.055, Learning Rate 1.000e-05, It/sec 11.001, Tokens/sec 1265.074, Trained Tokens 29900, Peak mem 1.595 GB
Iter 270: Train loss 0.055, Learning Rate 1.000e-05, It/sec 11.569, Tokens/sec 1330.402, Trained Tokens 31050, Peak mem 1.595 GB
Iter 280: Train loss 0.055, Learning Rate 1.000e-05, It/sec 11.647, Tokens/sec 1339.368, Trained Tokens 32200, Peak mem 1.595 GB
Iter 290: Train loss 0.055, Learning Rate 1.000e-05, It/sec 11.646, Tokens/sec 1339.282, Trained Tokens 33350, Peak mem 1.595 GB
Iter 300: Train loss 0.055, Learning Rate 1.000e-05, It/sec 11.712, Tokens/sec 1346.910, Trained Tokens 34500, Peak mem 1.595 GB
Iter 300: Saved adapter weights to adapters/adapters.safetensors and adapters/0000300_adapters.safetensors.
Iter 310: Train loss 0.054, Learning Rate 1.000e-05, It/sec 11.432, Tokens/sec 1314.625, Trained Tokens 35650, Peak mem 1.595 GB
Iter 320: Train loss 0.054, Learning Rate 1.000e-05, It/sec 11.817, Tokens/sec 1358.967, Trained Tokens 36800, Peak mem 1.595 GB
Iter 330: Train loss 0.053, Learning Rate 1.000e-05, It/sec 11.315, Tokens/sec 1301.190, Trained Tokens 37950, Peak mem 1.595 GB
Iter 340: Train loss 0.053, Learning Rate 1.000e-05, It/sec 10.842, Tokens/sec 1246.787, Trained Tokens 39100, Peak mem 1.595 GB
Iter 350: Train loss 0.053, Learning Rate 1.000e-05, It/sec 10.854, Tokens/sec 1248.253, Trained Tokens 40250, Peak mem 1.595 GB
Iter 360: Train loss 0.053, Learning Rate 1.000e-05, It/sec 10.692, Tokens/sec 1229.629, Trained Tokens 41400, Peak mem 1.595 GB
Iter 370: Train loss 0.053, Learning Rate 1.000e-05, It/sec 10.946, Tokens/sec 1258.823, Trained Tokens 42550, Peak mem 1.595 GB
Iter 380: Train loss 0.053, Learning Rate 1.000e-05, It/sec 10.999, Tokens/sec 1264.903, Trained Tokens 43700, Peak mem 1.595 GB
Iter 390: Train loss 0.053, Learning Rate 1.000e-05, It/sec 10.881, Tokens/sec 1251.332, Trained Tokens 44850, Peak mem 1.595 GB
Iter 400: Val loss 2.203, Val took 0.040s
Iter 400: Train loss 0.052, Learning Rate 1.000e-05, It/sec 100.492, Tokens/sec 11556.603, Trained Tokens 46000, Peak mem 1.595 GB
Iter 400: Saved adapter weights to adapters/adapters.safetensors and adapters/0000400_adapters.safetensors.
Iter 410: Train loss 0.053, Learning Rate 1.000e-05, It/sec 10.749, Tokens/sec 1236.186, Trained Tokens 47150, Peak mem 1.595 GB
Iter 420: Train loss 0.053, Learning Rate 1.000e-05, It/sec 10.981, Tokens/sec 1262.852, Trained Tokens 48300, Peak mem 1.595 GB
Iter 430: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.996, Tokens/sec 1264.526, Trained Tokens 49450, Peak mem 1.595 GB
Iter 440: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.462, Tokens/sec 1203.160, Trained Tokens 50600, Peak mem 1.595 GB
Iter 450: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.548, Tokens/sec 1213.038, Trained Tokens 51750, Peak mem 1.595 GB
Iter 460: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.014, Tokens/sec 1151.576, Trained Tokens 52900, Peak mem 1.595 GB
Iter 470: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.899, Tokens/sec 1253.341, Trained Tokens 54050, Peak mem 1.595 GB
Iter 480: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.266, Tokens/sec 1180.544, Trained Tokens 55200, Peak mem 1.595 GB
Iter 490: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.700, Tokens/sec 1230.494, Trained Tokens 56350, Peak mem 1.595 GB
Iter 500: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.384, Tokens/sec 1194.169, Trained Tokens 57500, Peak mem 1.595 GB
Iter 500: Saved adapter weights to adapters/adapters.safetensors and adapters/0000500_adapters.safetensors.
Iter 510: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.592, Tokens/sec 1218.107, Trained Tokens 58650, Peak mem 1.595 GB
Iter 520: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.104, Tokens/sec 1161.993, Trained Tokens 59800, Peak mem 1.595 GB
Iter 530: Train loss 0.051, Learning Rate 1.000e-05, It/sec 9.479, Tokens/sec 1090.142, Trained Tokens 60950, Peak mem 1.595 GB
Iter 540: Train loss 0.052, Learning Rate 1.000e-05, It/sec 9.765, Tokens/sec 1122.940, Trained Tokens 62100, Peak mem 1.595 GB
Iter 550: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.566, Tokens/sec 1215.060, Trained Tokens 63250, Peak mem 1.595 GB
Iter 560: Train loss 0.051, Learning Rate 1.000e-05, It/sec 9.798, Tokens/sec 1126.759, Trained Tokens 64400, Peak mem 1.595 GB
Iter 570: Train loss 0.052, Learning Rate 1.000e-05, It/sec 10.160, Tokens/sec 1168.453, Trained Tokens 65550, Peak mem 1.595 GB
Iter 580: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.858, Tokens/sec 1248.714, Trained Tokens 66700, Peak mem 1.595 GB
Iter 590: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.173, Tokens/sec 1169.863, Trained Tokens 67850, Peak mem 1.595 GB
Iter 600: Val loss 2.305, Val took 0.038s
Iter 600: Train loss 0.051, Learning Rate 1.000e-05, It/sec 102.223, Tokens/sec 11755.696, Trained Tokens 69000, Peak mem 1.595 GB
Iter 600: Saved adapter weights to adapters/adapters.safetensors and adapters/0000600_adapters.safetensors.
Iter 610: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.866, Tokens/sec 1249.611, Trained Tokens 70150, Peak mem 1.595 GB
Iter 620: Train loss 0.051, Learning Rate 1.000e-05, It/sec 9.937, Tokens/sec 1142.715, Trained Tokens 71300, Peak mem 1.595 GB
Iter 630: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.836, Tokens/sec 1246.141, Trained Tokens 72450, Peak mem 1.595 GB
Iter 640: Train loss 0.051, Learning Rate 1.000e-05, It/sec 9.867, Tokens/sec 1134.703, Trained Tokens 73600, Peak mem 1.595 GB
Iter 650: Train loss 0.051, Learning Rate 1.000e-05, It/sec 9.841, Tokens/sec 1131.664, Trained Tokens 74750, Peak mem 1.595 GB
Iter 660: Train loss 0.051, Learning Rate 1.000e-05, It/sec 9.874, Tokens/sec 1135.543, Trained Tokens 75900, Peak mem 1.595 GB
Iter 670: Train loss 0.051, Learning Rate 1.000e-05, It/sec 9.544, Tokens/sec 1097.522, Trained Tokens 77050, Peak mem 1.595 GB
Iter 680: Train loss 0.051, Learning Rate 1.000e-05, It/sec 8.804, Tokens/sec 1012.472, Trained Tokens 78200, Peak mem 1.595 GB
Iter 690: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.580, Tokens/sec 1216.668, Trained Tokens 79350, Peak mem 1.595 GB
Iter 700: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.408, Tokens/sec 1196.943, Trained Tokens 80500, Peak mem 1.595 GB
Iter 700: Saved adapter weights to adapters/adapters.safetensors and adapters/0000700_adapters.safetensors.
Iter 710: Train loss 0.051, Learning Rate 1.000e-05, It/sec 9.885, Tokens/sec 1136.745, Trained Tokens 81650, Peak mem 1.595 GB
Iter 720: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.837, Tokens/sec 1246.281, Trained Tokens 82800, Peak mem 1.595 GB
Iter 730: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.648, Tokens/sec 1224.525, Trained Tokens 83950, Peak mem 1.595 GB
Iter 740: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.273, Tokens/sec 1181.361, Trained Tokens 85100, Peak mem 1.595 GB
Iter 750: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.577, Tokens/sec 1216.381, Trained Tokens 86250, Peak mem 1.595 GB
Iter 760: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.619, Tokens/sec 1221.151, Trained Tokens 87400, Peak mem 1.595 GB
Iter 770: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.793, Tokens/sec 1241.165, Trained Tokens 88550, Peak mem 1.595 GB
Iter 780: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.861, Tokens/sec 1248.996, Trained Tokens 89700, Peak mem 1.595 GB
Iter 790: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.612, Tokens/sec 1220.407, Trained Tokens 90850, Peak mem 1.595 GB
Iter 800: Val loss 2.202, Val took 0.043s
Iter 800: Train loss 0.050, Learning Rate 1.000e-05, It/sec 108.434, Tokens/sec 12469.904, Trained Tokens 92000, Peak mem 1.595 GB
Iter 800: Saved adapter weights to adapters/adapters.safetensors and adapters/0000800_adapters.safetensors.
Iter 810: Train loss 0.050, Learning Rate 1.000e-05, It/sec 9.531, Tokens/sec 1096.094, Trained Tokens 93150, Peak mem 1.595 GB
Iter 820: Train loss 0.050, Learning Rate 1.000e-05, It/sec 8.534, Tokens/sec 981.441, Trained Tokens 94300, Peak mem 1.595 GB
Iter 830: Train loss 0.050, Learning Rate 1.000e-05, It/sec 8.787, Tokens/sec 1010.546, Trained Tokens 95450, Peak mem 1.595 GB
Iter 840: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.538, Tokens/sec 1211.888, Trained Tokens 96600, Peak mem 1.595 GB
Iter 850: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.315, Tokens/sec 1186.183, Trained Tokens 97750, Peak mem 1.595 GB
Iter 860: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.695, Tokens/sec 1229.968, Trained Tokens 98900, Peak mem 1.595 GB
Iter 870: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.228, Tokens/sec 1176.240, Trained Tokens 100050, Peak mem 1.595 GB
Iter 880: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.888, Tokens/sec 1252.143, Trained Tokens 101200, Peak mem 1.595 GB
Iter 890: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.984, Tokens/sec 1263.211, Trained Tokens 102350, Peak mem 1.595 GB
Iter 900: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.904, Tokens/sec 1253.982, Trained Tokens 103500, Peak mem 1.595 GB
Iter 900: Saved adapter weights to adapters/adapters.safetensors and adapters/0000900_adapters.safetensors.
Iter 910: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.024, Tokens/sec 1152.714, Trained Tokens 104650, Peak mem 1.595 GB
Iter 920: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.605, Tokens/sec 1219.615, Trained Tokens 105800, Peak mem 1.595 GB
Iter 930: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.330, Tokens/sec 1188.001, Trained Tokens 106950, Peak mem 1.595 GB
Iter 940: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.429, Tokens/sec 1199.383, Trained Tokens 108100, Peak mem 1.595 GB
Iter 950: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.601, Tokens/sec 1219.159, Trained Tokens 109250, Peak mem 1.595 GB
Iter 960: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.405, Tokens/sec 1196.538, Trained Tokens 110400, Peak mem 1.595 GB
Iter 970: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.291, Tokens/sec 1183.441, Trained Tokens 111550, Peak mem 1.595 GB
Iter 980: Train loss 0.051, Learning Rate 1.000e-05, It/sec 10.236, Tokens/sec 1177.086, Trained Tokens 112700, Peak mem 1.595 GB
Iter 990: Train loss 0.050, Learning Rate 1.000e-05, It/sec 10.626, Tokens/sec 1221.941, Trained Tokens 113850, Peak mem 1.595 GB
Iter 1000: Val loss 2.246, Val took 0.039s
Iter 1000: Train loss 0.050, Learning Rate 1.000e-05, It/sec 100.162, Tokens/sec 11518.588, Trained Tokens 115000, Peak mem 1.595 GB
Iter 1000: Saved adapter weights to adapters/adapters.safetensors and adapters/0001000_adapters.safetensors.
Saved final weights to adapters/adapters.safetensors.
Test the model
shell
mlx_lm.generate --model . --adapter-path adapters \
--prompt "<|im_start|>user\n你是谁?<|im_end|>" --max-tokens 100 --ignore-chat-template
Fuse the model (merge the adapter into the base weights)
shell
mlx_lm.fuse --model /Users/ai/Documents/model/qwen2.5-0.5B --adapter-path adapters --save-path qwen2-0.5b-train
Deploy to local Ollama
- Prepare a Modelfile and save it in the fused model's directory (so that FROM . resolves to the model files)
dockerfile
FROM .
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER repeat_penalty 1.05
PARAMETER top_k 20
TEMPLATE """{{ if and .First .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
{{ .Response }}"""
# set the system message
SYSTEM """
You are a helpful assistant.
"""
- Create the model in Ollama
shell
ollama create -f Modelfile qwen2-0.5b-train
You can now run the fine-tuned model in Ollama:
shell
ollama run qwen2-0.5b-train
>>> 你是谁?
我是大霸王,一个乖巧可爱的AI助手,很高兴能帮助你。
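Besides ollama run, the deployed model can also be queried through Ollama's local HTTP API (assuming Ollama is serving on its default port 11434):
python
# Query the fine-tuned model through Ollama's REST API
import json
import urllib.request

payload = json.dumps({
    "model": "qwen2-0.5b-train",
    "prompt": "你是谁?",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])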