【BUG】Error: llama runner process has terminated: exit status 127

The mainstream tool for deploying a large model locally (on-premises) is Ollama.

Install it with the following command:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

This failed on my machine, however, so I checked GitHub and downloaded it manually:

```bash
curl -L https://ollama.com/download/ollama-linux-amd64-rocm.tgz -o ollama-linux-amd64-rocm.tgz
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
```
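After extracting, it's worth checking that the binary actually landed on the `PATH` before going further; a small guarded check (generic shell, not specific to this setup):

```shell
# confirm the ollama binary extracted from the tarball is reachable on PATH
if command -v ollama >/dev/null 2>&1; then
  ollama -v          # report the installed version
else
  echo "ollama not found on PATH"
fi
```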

Alternatively, use a [third-party mirror](https://docker.aityp.com/image/docker.io/ollama/ollama:rocm); the `rocm` tag is the GPU-enabled (AMD ROCm) build.

```bash
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/ollama/ollama:rocm
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/ollama/ollama:rocm docker.io/ollama/ollama:rocm
```
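After tagging, the container still has to be started with the GPU device nodes passed through. A sketch based on Ollama's documented ROCm Docker invocation; the container name and volume name (`ollama`) are arbitrary choices here:

```shell
# skip cleanly on hosts without a working docker daemon
if ! docker info >/dev/null 2>&1; then
  echo "docker not available on this host"
else
  # pass the AMD GPU device nodes into the container, persist pulled models
  # in a named volume, and expose Ollama's default API port
  docker run -d \
    --device /dev/kfd --device /dev/dri \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    --name ollama \
    ollama/ollama:rocm
fi
```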

Once installation is complete, run:

```bash
sh-4.2# ollama run glm4
Error: llama runner process has terminated: exit status 127
```

Since this error message is sparse, open the Docker logs for more detail:

```bash
[GIN] 2024/11/15 - 07:41:49 | 200 |     283.749µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/15 - 07:41:49 | 200 |   17.617406ms |       127.0.0.1 | POST     "/api/show"
time=2024-11-15T07:41:49.430Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-b506a070d1152798d435ec4e7687336567ae653b3106f73b7b4ac7be1cbc4449 gpu=GPU-c0da87a9-e0be-be71-6ee5-496aa7f0d6d0 parallel=4 available=9513730048 required="6.2 GiB"
time=2024-11-15T07:41:49.431Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[8.9 GiB]" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="320.0 MiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.6 GiB" memory.weights.repeating="4.1 GiB" memory.weights.nonrepeating="485.6 MiB" memory.graph.full="561.0 MiB" memory.graph.partial="789.6 MiB"
time=2024-11-15T07:41:49.445Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama3667284545/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-b506a070d1152798d435ec4e7687336567ae653b3106f73b7b4ac7be1cbc4449 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --parallel 4 --port 33511"
time=2024-11-15T07:41:49.445Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-11-15T07:41:49.445Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-11-15T07:41:49.446Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
/tmp/ollama3667284545/runners/cuda_v12/ollama_llama_server: error while loading shared libraries: libcudart.so.12: cannot open shared object file: No such file or directory
time=2024-11-15T07:41:49.698Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: exit status 127"
[GIN] 2024/11/15 - 07:41:49 | 500 |  654.604793ms |       127.0.0.1 | POST     "/api/chat"
time=2024-11-15T07:41:54.826Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.128305852 model=/root/.ollama/models/blobs/sha256-b506a070d1152798d435ec4e7687336567ae653b3106f73b7b4ac7be1cbc4449
time=2024-11-15T07:41:55.077Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.378774395 model=/root/.ollama/models/blobs/sha256-b506a070d1152798d435ec4e7687336567ae653b3106f73b7b4ac7be1cbc4449
time=2024-11-15T07:41:55.326Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.628515531 model=/root/.ollama/models/blobs/sha256-b506a070d1152798d435ec4e7687336567ae653b3106f73b7b4ac7be1cbc4449
```

The key error line:

```bash
/tmp/ollama3667284545/runners/cuda_v12/ollama_llama_server: error while loading shared libraries: libcudart.so.12: cannot open shared object file: No such file or directory
```
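You can confirm this diagnosis from inside the container by asking the dynamic linker whether it can resolve the library at all (a generic check, not specific to Ollama):

```shell
# check whether the dynamic linker cache knows about the CUDA 12 runtime
if ldconfig -p 2>/dev/null | grep -q 'libcudart\.so\.12'; then
  echo "libcudart.so.12 found"
else
  echo "libcudart.so.12 missing"
fi
```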

Preliminary analysis: the CUDA runtime, version 12, is missing — the log shows Ollama launching its cuda_v12 runner, which depends on libcudart.so.12. The matching installer can be found on NVIDIA's website (registration required). Because the base OS of the rocm image is CentOS, the newer tooling is unavailable. The download address is used in the commands below.

Run the following commands:

```bash
wget https://developer.download.nvidia.com/compute/cuda/12.0.0/local_installers/cuda_12.0.0_525.60.13_linux.run
sudo sh cuda_12.0.0_525.60.13_linux.run
```

In the prompt that appears, type accept; on the next screen, press Enter to deselect the entries marked with an "X" (for example the bundled driver), then choose "Install" to begin the installation.
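If you'd rather avoid the interactive menu, the NVIDIA runfile also supports an unattended mode; a sketch assuming you want only the toolkit and not the bundled driver:

```shell
# guard: only run if the installer was actually downloaded
if [ -f cuda_12.0.0_525.60.13_linux.run ]; then
  # --silent skips the interactive menu; --toolkit installs only the CUDA
  # toolkit, leaving the bundled display driver unselected
  sudo sh cuda_12.0.0_525.60.13_linux.run --silent --toolkit
else
  echo "installer not present, skipping"
fi
```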

Then set the environment variable:

```bash
vim ~/.bashrc
```

Add this line:

```bash
export PATH=$PATH:/usr/local/cuda-12.0/bin/
```

Then reload the shell configuration:

```bash
source ~/.bashrc
```
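Note that the original error was about a shared library, not a missing binary, so on some setups `PATH` alone is not enough: the runner resolves libcudart.so.12 through the dynamic linker. If the error persists, also export the library path (assuming the default install prefix `/usr/local/cuda-12.0`):

```shell
# make the CUDA runtime visible to the dynamic linker as well;
# append any pre-existing LD_LIBRARY_PATH after the CUDA lib directory
export LD_LIBRARY_PATH=/usr/local/cuda-12.0/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```

Alternatively, drop the directory into a file under `/etc/ld.so.conf.d/` and run `ldconfig` to make the change system-wide.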

Verify the installation:

```bash
[root@039e8aa5e3fc /]# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Mon_Oct_24_19:12:58_PDT_2022
Cuda compilation tools, release 12.0, V12.0.76
Build cuda_12.0.r12.0/compiler.31968024_0
```

With that done, run the model again:

```bash
[root@039e8aa5e3fc /]# ollama run glm4
>>> hello
Hello 👋! How can I assist you today?
```
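Besides the interactive REPL, you can also verify the server over its HTTP API on the default port 11434; guarded here so it degrades gracefully when the server isn't running:

```shell
# query Ollama's REST endpoint that lists the locally installed models
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/tags
else
  echo "ollama server not reachable on localhost:11434"
fi
```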

Appendix

Replacing the CentOS mirror source

In CentOS, replacing the system mirror source is usually done to speed up downloading and updating packages. Below is a simple step-by-step guide, with example commands, for switching the default CentOS mirror to the Aliyun mirror.

1. Back up the current yum repository configuration.

```bash
sudo cp -a /etc/yum.repos.d /etc/yum.repos.d.backup
```

2. Remove the existing yum repository configuration.

```bash
sudo rm -f /etc/yum.repos.d/*.repo
```

3. Download Aliyun's CentOS mirror configuration file.

```bash
sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
```

4. Clear the yum cache and rebuild it.

```bash
sudo yum clean all
sudo yum makecache
```
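After rebuilding the cache, you can confirm the switch took effect by listing the enabled repositories; guarded so it is a no-op on systems without yum:

```shell
# list enabled repositories; after the swap, the base repo should point
# at mirrors.aliyun.com
if command -v yum >/dev/null 2>&1; then
  yum repolist enabled
else
  echo "yum not available on this host"
fi
```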

5. Update the system (optional).

```bash
sudo yum update
```

These steps switch your CentOS yum mirror to Aliyun's and rebuild the cache so packages download and install faster. If you are running CentOS 8 or another version, be sure to download the repo configuration file matching your version.
