Speed test of GLM-4.7-Flash on the MI50

Model: GLM-4.7-Flash-UD-Q4_K_XL.gguf from https://huggingface.co/unsloth/GLM-4.7-Flash-GGUF

llama.cpp version: b7933
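
The post does not record how this llama.cpp binary was built. As a minimal sketch, an MI50 (gfx906) build would roughly follow llama.cpp's ROCm/HIP build instructions; the exact flags the author used are an assumption:

# Assumed build steps (not from the original post).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
# GGML_HIP enables the ROCm/HIP backend; gfx906 is the MI50's GPU architecture.
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
  cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx906 -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j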

root@dev:~# llama-bench -m glm-4.7-flash.gguf 
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx906:sramecc-:xnack- (0x906), VMM: no, Wave Size: 64
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| deepseek2 30B.A3B Q4_K - Medium |  16.31 GiB |    29.94 B | ROCm       |  99 |           pp512 |        918.25 ± 1.37 |
| deepseek2 30B.A3B Q4_K - Medium |  16.31 GiB |    29.94 B | ROCm       |  99 |           tg128 |         54.62 ± 0.12 |
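
Note that the default pp512/tg128 run above measures throughput starting from an empty context. To approximate the 32k-deep case shown below, recent llama-bench builds accept a depth parameter that pre-fills the KV cache before measuring; a sketch, assuming the -d/--n-depth flag is available in b7933 (the post does not confirm this):

# Hypothetical run: measure pp512/tg128 with the KV cache pre-filled to 32768 tokens.
llama-bench -m glm-4.7-flash.gguf -p 512 -n 128 -d 32768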

Real-world usage:

srv  params_from_: Chat format: GLM 4.5
slot get_availabl: id  1 | task -1 | selected slot by LCP similarity, sim_best = 1.000 (> 0.100 thold), f_keep = 1.000
slot launch_slot_: id  1 | task -1 | sampler chain: logits -> ?penalties -> ?dry -> ?top-n-sigma -> top-k -> ?typical -> top-p -> min-p -> ?xtc -> ?temp-ext -> dist 
slot launch_slot_: id  1 | task 12639 | processing task, is_child = 0
slot update_slots: id  1 | task 12639 | new prompt, n_ctx_slot = 32768, n_keep = 0, task.n_tokens = 32375
slot update_slots: id  1 | task 12639 | n_tokens = 32366, memory_seq_rm [32366, end)
slot update_slots: id  1 | task 12639 | prompt processing progress, n_tokens = 32375, batch.n_tokens = 9, progress = 1.000000
slot update_slots: id  1 | task 12639 | prompt done, n_tokens = 32375, batch.n_tokens = 9
slot init_sampler: id  1 | task 12639 | init sampler, took 4.34 ms, tokens: text = 32375, total = 32375
slot print_timing: id  1 | task 12639 | 
prompt eval time =     176.24 ms /     9 tokens (   19.58 ms per token,    51.07 tokens per second)
       eval time =    1334.39 ms /    46 tokens (   29.01 ms per token,    34.47 tokens per second)
      total time =    1510.63 ms /    55 tokens

At a 32k context, prompt processing ran at 51.07 tokens per second and generation at 34.47 tokens per second. (Only 9 new prompt tokens were actually processed here; the rest of the 32,375-token prompt was served from the prompt cache, as the sim_best = 1.000 line shows, so the prompt figure reflects incremental processing at 32k depth rather than a cold 32k prefill.)
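
The server launch command is not shown in the original. A plausible reconstruction, given that the log reports n_ctx_slot = 32768 and the request landed on slot id 1 (implying at least two parallel slots); every value here is an assumption:

# Assumed invocation: a 65536-token context split across two slots (-np 2)
# yields the n_ctx_slot = 32768 seen in the log; -ngl 99 offloads all layers to the GPU.
llama-server -m glm-4.7-flash.gguf -c 65536 -np 2 -ngl 99 --host 0.0.0.0 --port 8080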
