


RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 23.64 GiB total capacity; 21.55 GiB already allocated; 17.75 MiB free; 21.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 23.64 GiB total capacity; 17.64 GiB already allocated; 118.25 MiB free; 19.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
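The hint about reserved vs. allocated memory points at allocator fragmentation. Below is a minimal sketch of how to compare the two numbers and set max_split_size_mb through PYTORCH_CUDA_ALLOC_CONF; the 128 MiB value is only an illustrative choice, and the variable has to be set before the CUDA caching allocator is first used.

import os

# Must be set before the first CUDA allocation (or exported in the shell
# before launching the script). 128 MiB is an illustrative split size,
# not a value taken from the error message.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch

def report_gpu_memory(device=0):
    # memory_allocated: bytes currently handed out to live tensors.
    # memory_reserved: bytes the caching allocator is holding from the driver.
    # A large gap between the two is the fragmentation case the error mentions.
    allocated = torch.cuda.memory_allocated(device) / 2**30
    reserved = torch.cuda.memory_reserved(device) / 2**30
    print(f"allocated {allocated:.2f} GiB / reserved {reserved:.2f} GiB")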

Reduce the batch size from 4 to 2.
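A minimal sketch of what that change looks like, assuming a standard torch.utils.data.DataLoader; the TensorDataset here is only a stand-in for whatever dataset the training script actually builds.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data so the snippet runs on its own; shapes are arbitrary.
train_dataset = TensorDataset(
    torch.randn(64, 3, 224, 224),
    torch.randint(0, 10, (64,)),
)

# Halving the per-step batch roughly halves activation memory in each
# forward/backward pass, which is often enough to get past the OOM above.
train_loader = DataLoader(
    train_dataset,
    batch_size=2,  # was 4
    shuffle=True,
)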
The hardware can't keep up; deep learning really is too expensive to play with.
Start the V100 instance without the card (no-GPU mode)

Start the V100 instance with the card (GPU attached)
