Deploying ChatGLM3-6B on Ascend 910B with streaming output (PyTorch framework, NPU inference)

Contents

Preparation

Pitfalls

  • Every server setup is different: in the ChatGLM3 issues, others only needed to change the specified driver, but that did not work for me.
  • Delete model.safetensors.index.json from the model directory; otherwise the loader automatically picks the safetensors files instead of the .bin files.
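The second pitfall can be handled more safely by renaming the index file rather than deleting it, so safetensors loading can be restored later. A minimal sketch (a throwaway temp directory stands in for the real model path, which on my box is /home/HwHiAiUser/models/chatglm3-6b):

```shell
# Simulate the model directory (hypothetical files for illustration)
MODEL_DIR=$(mktemp -d)
touch "$MODEL_DIR/model.safetensors.index.json" \
      "$MODEL_DIR/pytorch_model-00001-of-00007.bin"

# Rename the index instead of deleting it; with no index present,
# from_pretrained falls back to the pytorch_model-*.bin shards
mv "$MODEL_DIR/model.safetensors.index.json" \
   "$MODEL_DIR/model.safetensors.index.json.bak"
ls "$MODEL_DIR"
```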
Without that step, loading fails with the traceback below:

```bash
/home/anaconda3/envs/sakura/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it, if you need to enable torch.jit.script, please do not use transfer_to_npu.
  warnings.warn(msg, RuntimeWarning)
Loading checkpoint shards:   0%|                                                                                                                                     | 0/7 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/HwHiAiUser/work/ChatGLM3/basic_demo/cli_demo.py", line 22, in <module>
    model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).npu().eval()
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
    return model_class.from_pretrained(
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3187, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3560, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "/home/anaconda3/envs/sakura/lib/python3.9/site-packages/transformers/modeling_utils.py", line 467, in load_state_dict
    with safe_open(checkpoint_file, framework="pt") as f:
FileNotFoundError: No such file or directory: "/home/HwHiAiUser/models/chatglm3-6b/model-00001-of-00007.safetensors"
/home/anaconda3/envs/sakura/lib/python3.9/tempfile.py:817: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmp1ygjyx3i'>
  _warnings.warn(warn_message, ResourceWarning)
```

Adding the code

Open ChatGLM3/basic_demo/cli_demo.py

and add the following code:

```python
import torch
import torch_npu
import torchvision
import torchvision_npu
from torch_npu.contrib import transfer_to_npu  # transparently maps CUDA calls to the NPU
import os
import platform
import time

torch_device = "npu:3"  # pick an NPU card, 0~7
torch.npu.set_device(torch.device(torch_device))
torch.npu.set_compile_mode(jit_compile=False)  # disable JIT compilation
option = {}
option["NPU_FUZZY_COMPILE_BLACKLIST"] = "Tril"  # keep the Tril op out of fuzzy compilation
torch.npu.set_option(option)
print("torch && torch_npu import successfully")
```

Change the model-loading line to:

```python
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).npu().eval()
```
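With the model on the NPU, streaming works the same way as on GPU: ChatGLM3's `model.stream_chat(tokenizer, query, history)` yields the full response generated so far on each step, so the console printer only emits the new suffix. A minimal sketch of that incremental-printing logic (`print_stream` and the hard-coded chunks are illustrative helpers, not part of cli_demo.py):

```python
def print_stream(responses):
    """Collect only the newly generated suffix of each streamed response.

    ChatGLM3's stream_chat yields the cumulative response text on every
    step; this mirrors how cli_demo.py prints tokens as they arrive.
    """
    shown = 0
    pieces = []
    for resp in responses:
        pieces.append(resp[shown:])  # new characters since the last step
        shown = len(resp)
    return "".join(pieces)

# Simulated cumulative outputs from a streaming generator
print(print_stream(["你", "你好", "你好!"]))  # → 你好!
```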

Results
