Conclusion first: Ollama's ROCm build targets AMD GPUs, not the domestic DCU.
Download
First, download the ROCm build (from the same v0.15.2 release as the CPU build below):
wget -c https://github.com/ollama/ollama/releases/download/v0.15.2/ollama-linux-amd64-rocm.tar.zst
Ollama's ROCm build ships as ollama-linux-amd64-rocm.tar.zst. How do you extract and install a .zst archive? Is it handled much like an ordinary .gz one? A plain tar xvf fails, because nothing on the system can decompress zstd yet.
Install zstd
apt update
apt install zstd
Extract (here straight into /):
tar -I 'zstd' -xf ollama-linux-amd64-rocm.tar.zst -C /
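If you prefer not to pass -I, two equivalent routes work; the --zstd flag assumes GNU tar 1.31 or newer:

# GNU tar >= 1.31 picks the decompressor from the .zst suffix, or name it explicitly:
tar --zstd -xf ollama-linux-amd64-rocm.tar.zst -C /
# Or decompress and unpack in two steps with zstd alone:
zstd -d ollama-linux-amd64-rocm.tar.zst    # yields ollama-linux-amd64-rocm.tar
tar -xf ollama-linux-amd64-rocm.tar -C /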
Standard procedure
First install the plain Linux CPU build of Ollama.
Download:
wget -c https://github.com/ollama/ollama/releases/download/v0.15.2/ollama-linux-amd64.tar.zst
Extract:
tar -I 'zstd' -xf ollama-linux-amd64.tar.zst -C /usr
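A quick sanity check that the CPU build landed where expected; the paths follow the -C /usr extraction above:

/usr/bin/ollama --version    # should print the client and server version
ollama serve &               # start the server in the background before pulling models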
Install the DCU ROCm add-on for Ollama
Download as above, then extract:
tar -I 'zstd' -xf ollama-linux-amd64-rocm.tar.zst -C /
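Before going further, it is worth seeing what the ROCm tarball actually put on disk. The paths below are assumptions (the server log later loads backends from /usr/lib/ollama, while -C / unpacks under /lib); listing the archive itself shows the authoritative layout:

tar -I 'zstd' -tf ollama-linux-amd64-rocm.tar.zst | head    # list archive contents
ls /usr/lib/ollama 2>/dev/null; ls /lib/ollama 2>/dev/null  # where the ggml backend libraries live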
Pull a model
ollama pull lfm2.5-thinking:1.2b
Run the model
ollama run lfm2.5-thinking:1.2b
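The model is also reachable over Ollama's local HTTP API (port 11434 by default); the /api/generate and /api/chat entries in the server log below are exactly these endpoints. A minimal non-streaming request:

curl http://127.0.0.1:11434/api/generate -d '{
  "model": "lfm2.5-thinking:1.2b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'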
Inference is very slow. (The prompt in the transcript below asks, in Chinese, why the sky is blue.)
root@notebook-2015948306967752706-denglf-89748:~# ollama run lfm2.5-thinking:1.2b
>>> 天为什么是蓝色的?
Thinking...
Okay, let's tackle this question: "Why is the sun (or Earth?) blue?" Hmm, the user is asking why something appears
Look at the ollama serve trace: this is pure CPU inference.
time=2026-01-27T02:26:23.516Z level=INFO source=server.go:245 msg="enabling flash attention"
time=2026-01-27T02:26:23.517Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-a7b19185f31650af480c3fa28dd240c75862182e0f30b118dbfadb192e4beb0a --port 33665"
time=2026-01-27T02:26:23.517Z level=INFO source=sched.go:452 msg="system memory" total="1007.4 GiB" free="896.3 GiB" free_swap="0 B"
time=2026-01-27T02:26:23.517Z level=INFO source=server.go:755 msg="loading model" "model layers"=17 requested=-1
time=2026-01-27T02:26:23.549Z level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-27T02:26:23.550Z level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:33665"
time=2026-01-27T02:26:23.561Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:127 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-27T02:26:23.584Z level=INFO source=ggml.go:136 msg="" architecture=lfm2 file_type=Q4_K_M name="" description="" num_tensors=148 num_key_values=28
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2026-01-27T02:26:23.592Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-01-27T02:26:23.614Z level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:127 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-27T02:26:23.684Z level=INFO source=runner.go:1278 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:127 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-27T02:26:23.685Z level=INFO source=device.go:245 msg="model weights" device=CPU size="799.8 MiB"
time=2026-01-27T02:26:23.685Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="48.1 MiB"
time=2026-01-27T02:26:23.685Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="64.0 MiB"
time=2026-01-27T02:26:23.684Z level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
time=2026-01-27T02:26:23.685Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-01-27T02:26:23.685Z level=INFO source=ggml.go:494 msg="offloaded 0/17 layers to GPU"
time=2026-01-27T02:26:23.685Z level=INFO source=device.go:272 msg="total memory" size="911.9 MiB"
time=2026-01-27T02:26:23.685Z level=INFO source=sched.go:526 msg="loaded runners" count=1
time=2026-01-27T02:26:23.685Z level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
time=2026-01-27T02:26:23.690Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
time=2026-01-27T02:26:24.192Z level=INFO source=server.go:1385 msg="llama runner started in 0.67 seconds"
[GIN] 2026/01/27 - 02:26:24 | 200 | 864.235109ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2026/01/27 - 02:28:52 | 200 | 55.662835879s | 127.0.0.1 | POST "/api/chat"
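The telling line is load_backend: only the CPU backend (libggml-cpu-haswell.so) was loaded, and no HIP/ROCm backend appears at all. A few checks help pin this down; rocminfo assumes a working ROCm runtime, which on a DCU machine would come from the DTK stack instead:

rocminfo | head                # does the runtime enumerate any GPU agents?
ls /usr/lib/ollama             # is any HIP/ROCm ggml backend library present?
OLLAMA_DEBUG=1 ollama serve    # rerun with verbose device-discovery logging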
Check with ollama ps — sure enough, it is inferring on the CPU:
ollama ps
NAME ID SIZE PROCESSOR CONTEXT UNTIL
lfm2.5-thinking:1.2b 95bd9d45385f 956 MB 100% CPU 4096 3 minutes from now
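To put a number on "slow", rerun with --verbose, which prints prompt-eval and eval rates (tokens/s) after each reply:

ollama run lfm2.5-thinking:1.2b --verbose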
I asked an assistant: can a DCU compute card use Ollama's ROCm build for inference?
Based on the information available, Ollama does not provide a ROCm build targeting the DCU (a domestic heterogeneous accelerator). Ollama's ROCm backend supports AMD GPUs, but the DCU is a domestic chip whose software stack (DTK) differs from both NVIDIA CUDA and AMD ROCm.
So it seems it simply cannot be used...