text-generation-webui fails to load codellama: DLL load failed while importing flash_attn_2_cuda (the specified module could not be found)

Loading codellama with text-generation-webui fails with the following error:

```text
Traceback (most recent call last):
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\utils\import_utils.py", line 1353, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "D:\Anaconda\Anaconda\envs\codellama\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\llama\modeling_llama.py", line 48, in <module>
    from flash_attn import flash_attn_func, flash_attn_varlen_func
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\flash_attn\__init__.py", line 3, in <module>
    from flash_attn.flash_attn_interface import (
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\flash_attn\flash_attn_interface.py", line 8, in <module>
    import flash_attn_2_cuda as flash_attn_cuda
ImportError: DLL load failed while importing flash_attn_2_cuda: 找不到指定的模块。

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "E:\模型\text-generation-webui\text-generation-webui\modules\ui_model_menu.py", line 209, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "E:\模型\text-generation-webui\text-generation-webui\modules\models.py", line 85, in load_model
    output = load_func_map[loader](model_name)
  File "E:\模型\text-generation-webui\text-generation-webui\modules\models.py", line 155, in huggingface_loader
    model = LoaderClass.from_pretrained(path_to_model, **params)
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 565, in from_pretrained
    model_class = _get_model_class(config, cls._model_mapping)
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 387, in _get_model_class
    supported_models = model_mapping[type(config)]
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 740, in __getitem__
    return self._load_attr_from_module(model_type, model_name)
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 754, in _load_attr_from_module
    return getattribute_from_module(self._modules[module_name], attr)
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\models\auto\auto_factory.py", line 698, in getattribute_from_module
    if hasattr(module, attr):
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\utils\import_utils.py", line 1343, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\Ma\AppData\Roaming\Python\Python310\site-packages\transformers\utils\import_utils.py", line 1355, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):

DLL load failed while importing flash_attn_2_cuda: 找不到指定的模块。
```

My first guess was that the transformers version was wrong, so I pinned that down first: this code path needs transformers 4.35.0 or newer.

After upgrading transformers to 4.35.0, the error was unchanged.

Next I checked the CUDA and torch versions, and found the real culprit: the CUDA version torch was built against did not match the CUDA version installed on the machine. A compiled extension like flash_attn_2_cuda links against specific torch/CUDA DLLs, so a mismatch leaves Windows unable to resolve them, which is exactly what "DLL load failed ... the specified module could not be found" is reporting.

Checking the CUDA version torch was built against:

```python
>>> import torch
>>> print(torch.version.cuda)  # the CUDA version this torch build targets
11.8
```
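For a fuller picture, a small diagnostic sketch along these lines (my own addition, not from the original post; run it inside the codellama env) collects every version involved and probes the failing import directly:

```python
# Diagnostic sketch: collect the versions involved in the DLL error.
import importlib.metadata as md
import torch

print("transformers:", md.version("transformers"))  # should be 4.35.0 or newer here
print("torch:", torch.__version__)
print("torch built for CUDA:", torch.version.cuda)  # compare with `nvcc --version`
print("CUDA available:", torch.cuda.is_available())

try:
    # The extension module that fails to load in the traceback above.
    import flash_attn_2_cuda  # noqa: F401
    print("flash_attn_2_cuda imported OK")
except ImportError as e:
    print("flash_attn_2_cuda failed to import:", e)
```

If "torch built for CUDA" prints one version and nvcc reports another, that mismatch is the prime suspect.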

Running nvcc --version in a console prints:

```text
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Feb__8_05:53:42_Coordinated_Universal_Time_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0
```

So torch was built for CUDA 11.8 while the installed toolkit is 12.1. The fix was to replace torch with a CUDA 12.1 build.

First, uninstall the existing torch packages:

```bash
pip uninstall torch torchvision torchaudio
```

Then install the CUDA 12.1 builds:

```bash
pip install torch torchvision torchaudio -f https://download.pytorch.org/whl/cu121/torch_stable.html
```
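In case that find-links URL is unavailable, the equivalent form documented on pytorch.org for CUDA 12.1 wheels uses --index-url instead (this is the official install command, not from the original post):

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```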

With the matching torch build installed, codellama finally loads successfully in text-generation-webui.
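For reference, a quick sanity check (a sketch; run in the same env) confirms the rebuilt stack is consistent and the extension now loads:

```python
# Verify the reinstalled torch matches the toolkit and flash-attn now imports.
import torch

print(torch.version.cuda)         # expected: 12.1
print(torch.cuda.is_available())  # expected: True

from flash_attn import flash_attn_func  # raised the DLL error before the fix
print("flash_attn imported OK")
```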
