Several Ways to Fix "CUDA out of memory" Errors in Large Model Training

Commonly used:

  1. Reduce the batch size.

  2. If a larger batch size is needed to reach the target training metrics, use gradient accumulation: accumulate gradients over several micro-batches to get an equivalent effective batch size with a much smaller memory footprint (see the combined sketch after this list).

  3. Train in lower precision, i.e. mixed precision training (FP16 + FP32); the sketch after this list combines it with gradient accumulation.

  4. Shorten the training samples, i.e. reduce the sequence length (a truncation sketch follows this list).

  5. Use model parallelism or pipeline parallelism to split the model across multiple GPUs (a toy model-parallel sketch follows).
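
Items 2 and 3 above combine naturally in PyTorch. Below is a minimal sketch of a training loop using gradient accumulation together with automatic mixed precision (torch.cuda.amp); the model, optimizer, and synthetic data are placeholders for your own:

```python
import torch

# Toy model, optimizer, and synthetic data; replace with your own.
model = torch.nn.Linear(1024, 10).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()
data_loader = [(torch.randn(4, 1024), torch.randint(0, 10, (4,)))
               for _ in range(32)]

scaler = torch.cuda.amp.GradScaler()  # rescales the FP16 loss to avoid gradient underflow
accumulation_steps = 8                # effective batch size = 4 * 8 = 32

optimizer.zero_grad(set_to_none=True)
for step, (inputs, labels) in enumerate(data_loader):
    inputs, labels = inputs.cuda(), labels.cuda()
    with torch.cuda.amp.autocast():   # forward pass runs in mixed precision
        loss = loss_fn(model(inputs), labels)
    # Divide by the number of accumulation steps so the accumulated
    # gradient matches what one large batch would have produced.
    scaler.scale(loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        scaler.step(optimizer)        # unscales gradients, then optimizer.step()
        scaler.update()
        optimizer.zero_grad(set_to_none=True)  # free gradient memory
```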
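
For item 4, if you tokenize with the Hugging Face transformers library (an assumption; any tokenizer with a max-length option works the same way), truncation is a one-line change:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # model name is illustrative
tokenizer.pad_token = tokenizer.eos_token          # gpt2 has no pad token by default

texts = ["a very long training document ..."] * 4
# Cap every sample at 512 tokens; activation memory grows with sequence
# length (quadratically for full attention).
batch = tokenizer(texts, truncation=True, max_length=512,
                  padding=True, return_tensors="pt")
print(batch["input_ids"].shape)  # at most 512 tokens per sample
```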
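
For item 5, the simplest form of model parallelism just places different stages of the network on different devices and moves the activations between them. A toy two-GPU sketch (layer sizes and device ids are illustrative only):

```python
import torch
import torch.nn as nn

class TwoStageModel(nn.Module):
    """Naive model parallelism: each stage lives on its own GPU."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        self.stage2 = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        x = self.stage1(x.to("cuda:0"))
        return self.stage2(x.to("cuda:1"))  # move activations between stages

model = TwoStageModel()
print(model(torch.randn(8, 1024)).device)  # cuda:1
```

Pipeline parallelism additionally splits each batch into micro-batches so the stages work concurrently instead of one GPU idling while the other runs.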

Less common:

  1. Release idle cached memory with torch.cuda.empty_cache() (demonstrated after this list).

  2. Mitigate PyTorch GPU-memory fragmentation via the PYTORCH_CUDA_ALLOC_CONF environment variable and its max_split_size_mb option (see the sketch after this list).

  3. Use a smaller model.

  4. Use a GPU with more memory.

  5. Use nvidia-smi to find other processes occupying the GPU, and kill any that can be killed.
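
A small demonstration of item 1: torch.cuda.empty_cache() only returns cached, unreferenced blocks to the driver, so drop your own references first. It rarely fixes a true OOM, but it makes PyTorch's cached memory available to other processes:

```python
import torch

big = torch.empty(1024, 1024, 256, device="cuda")  # ~1 GiB of FP32
del big                        # drop the Python reference first
torch.cuda.empty_cache()       # return cached blocks to the CUDA driver
print(torch.cuda.memory_allocated() / 2**20, "MiB allocated")
print(torch.cuda.memory_reserved() / 2**20, "MiB still reserved by PyTorch")
```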
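
For item 2, the option must be set before the first CUDA allocation, i.e. before any tensor touches the GPU; the 128 MB value below is only an illustrative starting point to tune for your workload:

```python
import os

# Forbid the caching allocator from splitting blocks larger than 128 MB,
# which can reduce fragmentation-induced OOMs. Set this before CUDA is
# initialized; it can equally be exported in the shell instead.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
x = torch.zeros(1, device="cuda")  # allocator now uses the configured limit
```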

Perplexity query results:

Here are several solutions to resolve the "CUDA out of memory" error when training large language models (LLMs) using SFT (Supervised Fine-Tuning):

  1. Reduce batch size: Lowering the batch size during training can significantly reduce the memory requirements. For example, you could set the batch size to 1 and use gradient accumulation to accumulate gradients over multiple batches before updating the model weights.
  2. Use gradient accumulation: Gradient accumulation allows you to accumulate gradients over multiple batches before updating the model weights. This effectively increases the batch size without increasing the memory usage. Be sure to divide the loss by the number of gradient accumulation steps to maintain the correct gradient scale.
  3. Enable mixed precision training: Training the model in mixed precision (FP16) can reduce the memory usage compared to full precision (FP32). PyTorch provides the torch.cuda.amp package for automatic mixed precision training.
  4. Reduce input sequence length: If your input texts are long, consider truncating or splitting them into shorter sequences to reduce the memory used by tokenization and by the forward and backward passes.
  5. Free GPU memory: Call torch.cuda.empty_cache() to free up any unused GPU memory allocated by PyTorch. This might not help much, but it's worth a try.
  6. Set PYTORCH_CUDA_ALLOC_CONF environment variable: You can try setting the PYTORCH_CUDA_ALLOC_CONF environment variable with max_split_size_mb to avoid memory fragmentation. Experiment with different values based on your GPU memory.
  7. Use a smaller model: If possible, consider using a smaller version of the LLM that has fewer parameters. Smaller models generally require less memory during training.
  8. Increase GPU memory: If you have access to a GPU with more memory, consider using it to accommodate the memory requirements of the LLM training.
  9. Distribute training across multiple GPUs: Splitting the training across multiple GPUs can reduce the memory usage per GPU. However, this approach requires additional setup and coordination (a minimal DDP sketch follows this list).
  10. Monitor GPU memory usage: Use nvidia-smi to monitor the GPU memory usage and identify any other processes consuming GPU memory. Terminate unnecessary processes to free up memory.
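
For solution 9, a common starting point is data parallelism with DistributedDataParallel: each GPU holds a full model replica but processes only a slice of the global batch (sharded schemes such as FSDP or DeepSpeed ZeRO go further and also partition optimizer state and parameters). A minimal sketch, assuming a multi-GPU machine and a torchrun launch; the model and synthetic data are placeholders:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Launch with: torchrun --nproc_per_node=2 train_ddp.py
def main():
    dist.init_process_group("nccl")           # torchrun supplies the env vars
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(1024, 10).cuda(), device_ids=[rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(10):                       # synthetic data, illustrative only
        inputs = torch.randn(4, 1024, device="cuda")
        labels = torch.randint(0, 10, (4,), device="cuda")
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        optimizer.zero_grad()
        loss.backward()                       # gradients are all-reduced here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```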