Error report and source context
Traceback (most recent call last):
File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 701, in <module>
train_seq2seq( lr, 1, device)
File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 676, in train_seq2seq
Y_hat, _ = tf_net(X, dec_input, X_valid_len)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 517, in forward
enc_outputs = self.encoder(enc_X, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 366, in forward
to_pos = emb_data * math.sqrt(self.num_hiddens)
~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Step 1: enable synchronous kernel launches to make debugging easier
import os
# CUDA_LAUNCH_BLOCKING=1: synchronous mode; the host waits for each kernel to finish. Helpful for debugging and profiling, but lowers overall throughput.
# CUDA_LAUNCH_BLOCKING=0 (default): asynchronous mode; the host runs in parallel with the GPU for better performance.
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
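Note that the variable is typically read when CUDA is initialized, so the safest placement is at the very top of the script, before importing torch (or export it in the shell before launching). A minimal sketch of that placement in a script like run_tf_1.py:

import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'   # set before CUDA is initialized, otherwise it may have no effect

import torch   # from here on, CUDA kernel launches run synchronously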
With synchronous launches enabled, the traceback changes to:
Traceback (most recent call last):
File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 707, in <module>
train_seq2seq( lr, 1, device)
File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 682, in train_seq2seq
Y_hat, _ = tf_net(X, dec_input, X_valid_len)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 523, in forward
enc_outputs = self.encoder(enc_X, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/proDB/project-ml/nlp/python/wmt/run_tf_1.py", line 370, in forward
emb_data = self.embedding(X)
^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/modules/sparse.py", line 190, in forward
return F.embedding(
^^^^^^^^^^^^
File "/home/pyUser/anaconda3/envs/pytorch/lib/python3.12/site-packages/torch/nn/functional.py", line 2551, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
The error now points unambiguously at the torch.nn.functional.embedding call.
Conclusion: X contains illegal token IDs (either outside the vocabulary range or negative).
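Before blaming the tokenizer, this hypothesis can be confirmed by range-checking a batch right before the forward pass. A minimal sketch, where check_token_ids and vocab_size are illustrative names, not part of run_tf_1.py:

import torch

def check_token_ids(X: torch.Tensor, vocab_size: int) -> None:
    # Every ID must be a legal row index of the embedding table: 0 <= id < vocab_size.
    bad = (X < 0) | (X >= vocab_size)
    if bad.any():
        print("illegal token IDs:", X[bad].unique().cpu().tolist())
    assert not bad.any(), f"token IDs outside [0, {vocab_size})"

# e.g. inside the training loop, before tf_net(X, dec_input, X_valid_len):
# check_token_ids(X, sp.vocab_size())
# check_token_ids(dec_input, sp.vocab_size())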
Since the data is tokenized with SentencePiece, the first thing to verify is the special control symbols:
import sentencepiece as spm

sp_model_path = './format_data/'
sp = spm.SentencePieceProcessor()
sp.load(sp_model_path + "spm_bpe.model")
print("Vocab size:", sp.vocab_size())
print("PAD ID:", sp.pad_id())
print("BOS ID:", sp.bos_id())
print("EOS ID:", sp.eos_id())
print("UNK ID:", sp.unk_id())
# Verify that these IDs fall within the legal range [0, vocab_size)
assert 0 <= sp.pad_id() < sp.vocab_size()
assert 0 <= sp.bos_id() < sp.vocab_size()
assert 0 <= sp.eos_id() < sp.vocab_size()
assert 0 <= sp.unk_id() < sp.vocab_size()
The check shows that pad_id is -1, so the assertion on pad_id fails.
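A pad_id of -1 explains the crash: if batches are padded with sp.pad_id(), every padding position in X becomes -1, and nn.Embedding only accepts indices in [0, num_embeddings). A tiny reproduction sketch with made-up sizes:

import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=32000, embedding_dim=32)
ids = torch.tensor([[17, 42, -1, -1]])    # -1 plays the role of sp.pad_id() here

try:
    emb(ids)                              # out-of-range (negative) index
except (IndexError, RuntimeError) as e:
    # On CPU this is a plain IndexError; on CUDA the same lookup surfaces
    # as the device-side assert shown in the traceback above.
    print("embedding lookup failed:", e)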
Tracing back further, the original SentencePiece training command turns out to be wrong:
spm_train \
--input=all_data_no_split.txt \
--model_prefix=spm_bpe \
--vocab_size=32000 \
--model_type=bpe \
--user_defined_symbols="<pad>,<bos>,<eos>" \
--unk_id=0 \
--bos_id=1 \
--eos_id=2 \
--pad_id=3 \
--num_threads=16
The culprit is the option --user_defined_symbols="<pad>,<bos>,<eos>": declaring these pieces as user-defined symbols evidently keeps them from being set up as the control symbols that pad_id()/bos_id()/eos_id() report, which is how pad_id ended up as -1.
After removing that option and retraining the vocabulary, rerunning the training completes without errors.
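For completeness, the retraining plus re-check can also be done from Python with the sentencepiece API. This is only a sketch of the corrected setup (equivalent to the shell command above with the --user_defined_symbols line removed; adjust paths to wherever the model files should live):

import sentencepiece as spm

# Retrain without user_defined_symbols: <unk>, <bos>, <eos>, <pad> are created
# through the explicit *_id options instead of as ordinary user-defined pieces.
spm.SentencePieceTrainer.train(
    input='all_data_no_split.txt',
    model_prefix='spm_bpe',
    vocab_size=32000,
    model_type='bpe',
    unk_id=0, bos_id=1, eos_id=2, pad_id=3,
    num_threads=16,
)

# Re-run the earlier sanity check against the new model.
sp = spm.SentencePieceProcessor()
sp.load('spm_bpe.model')
assert sp.pad_id() == 3 and sp.bos_id() == 1 and sp.eos_id() == 2 and sp.unk_id() == 0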