RuntimeError: CUDA error: device-side assert triggered

The original error and traceback

Traceback (most recent call last):
  File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 701, in <module>
    train_seq2seq( lr, 1, device)
  File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 676, in train_seq2seq
    Y_hat, _ = tf_net(X, dec_input, X_valid_len)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 517, in forward
    enc_outputs = self.encoder(enc_X, *args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 366, in forward
    to_pos = emb_data *  math.sqrt(self.num_hiddens)
             ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Step 1: enable synchronous mode to make debugging easier

import os

# CUDA_LAUNCH_BLOCKING=1: synchronous mode; the host waits for each kernel to
# finish, so errors are reported at the call that actually caused them. Useful
# for debugging and profiling, but it hurts overall throughput.
# CUDA_LAUNCH_BLOCKING=0 (default): asynchronous mode; the host runs in
# parallel with the GPU for better performance.
# Note: this must be set before the first CUDA call (ideally before importing torch).
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

With this set, the error message becomes:

Traceback (most recent call last):
  File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 707, in <module>
    train_seq2seq( lr, 1, device)
  File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 682, in train_seq2seq
    Y_hat, _ = tf_net(X, dec_input, X_valid_len)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 523, in forward
    enc_outputs = self.encoder(enc_X, *args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 370, in forward
    emb_data = self.embedding(X)
               ^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/sparse.py", line 190, in forward
    return F.embedding(
           ^^^^^^^^^^^^
  File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/functional.py", line 2551, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. 

The error now points clearly at the torch.nn.functional.embedding call.

Conclusion: X contains illegal token IDs (negative, or outside the vocabulary range).

Since the tokenizer is SentencePiece, the first thing to verify is the control symbols:
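A quick host-side check makes this failure mode obvious before the batch ever reaches the GPU. The sketch below is illustrative (the function name and vocab size are made up, not from the original code); it validates a batch of token IDs and fails with a readable message:

```python
import torch

def check_token_ids(X: torch.Tensor, vocab_size: int) -> None:
    """Fail with a readable host-side error instead of a CUDA device-side assert."""
    bad = (X < 0) | (X >= vocab_size)
    if bad.any():
        raise ValueError(
            f"{bad.sum().item()} illegal token ID(s), e.g. {X[bad][:10].tolist()}; "
            f"valid range is [0, {vocab_size})"
        )

# A batch padded with -1 fails loudly on the CPU:
X = torch.tensor([[5, 17, -1], [3, 2, 1]])
try:
    check_token_ids(X, vocab_size=32000)
except ValueError as e:
    print(e)
```

Calling this once per batch during debugging is cheap and pinpoints the bad sample immediately.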

import sentencepiece as spm

sp_model_path = './format_data/'
sp = spm.SentencePieceProcessor()
sp.load(sp_model_path + "spm_bpe.model")

print("Vocab size:", sp.vocab_size())
print("PAD ID:", sp.pad_id())
print("BOS ID:", sp.bos_id())
print("EOS ID:", sp.eos_id())
print("UNK ID:", sp.unk_id())

# Verify that these IDs fall in the legal range [0, vocab_size)
assert 0 <= sp.pad_id() < sp.vocab_size()
assert 0 <= sp.bos_id() < sp.vocab_size()
assert 0 <= sp.eos_id() < sp.vocab_size()
assert 0 <= sp.unk_id() < sp.vocab_size()

This check failed: pad_id() returned -1.
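Why -1 in particular blows up: if batches are padded with sp.pad_id(), the value -1 ends up inside X, and the embedding lookup asserts on the GPU. A minimal CPU reproduction makes this visible, since the CPU kernel raises a readable IndexError where the CUDA kernel only reports "device-side assert triggered":

```python
import torch
import torch.nn as nn

pad_id = -1                                  # what sp.pad_id() returned
embedding = nn.Embedding(num_embeddings=32000, embedding_dim=8)
X = torch.tensor([[5, 17, pad_id]])          # a batch padded with -1
try:
    embedding(X)
except IndexError as e:
    print("Embedding lookup failed:", e)
```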

The cause was a mistake in the original training command:

spm_train \
  --input=all_data_no_split.txt \
  --model_prefix=spm_bpe \
  --vocab_size=32000 \
  --model_type=bpe \
  --user_defined_symbols="<pad>,<bos>,<eos>" \
  --unk_id=0 \
  --bos_id=1 \
  --eos_id=2 \
  --pad_id=3 \
  --num_threads=16

The problem was the flag --user_defined_symbols="<pad>,<bos>,<eos>": it registers the special tokens as ordinary user-defined pieces, which conflicts with the built-in control symbols that --pad_id/--bos_id/--eos_id are meant to configure.

After deleting that line and retraining the vocabulary, the program ran without errors.
