Building llama.cpp and Using the GPU on Linux

Building llama.cpp with GPU support

For more details, see https://github.com/abetlen/llama-cpp-python; the project page is updated as versions iterate.

Download and enter llama.cpp

Repository: https://github.com/ggerganov/llama.cpp

You can also clone it on a local machine first and then upload it to the server.

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
```

Compile the source (make)

This generates binaries such as ./main and ./quantize. For details, see https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md

Using the CPU:

```bash
make
```
Using the GPU:

```bash
make GGML_CUDA=1
```
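Once the build finishes, it is worth a quick smoke test before moving on. A minimal sketch, assuming you already have a GGUF model on disk (the path below is a placeholder) and that your build produced the classic ./main binary (newer versions name it llama-cli):

```bash
# Generate a few tokens to verify the build; -ngl offloads layers to the GPU,
# so ggml_cuda_init lines should appear in the log if the CUDA build works.
# -m: model path (placeholder), -p: prompt, -n: tokens to generate
./main -m /path/to/model.gguf -p "Hello" -n 32 -ngl 32
```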
Possible errors and how to fix them

I ccache not found. Consider installing it for faster compilation.

```bash
sudo apt-get install ccache
```

Makefile:1002: *** I ERROR: For CUDA versions < 11.7 a target CUDA architecture must be explicitly provided via environment variable CUDA_DOCKER_ARCH, e.g. by running "export CUDA_DOCKER_ARCH=compute_XX" on Unix-like systems, where XX is the minimum compute capability that the code needs to run on. A list with compute capabilities can be found here: https://developer.nvidia.com/cuda-gpus . Stop.

This means the CUDA version is too old. If the active toolkit is not one you installed yourself, check whether `nvcc -V` reports a CUDA version different from the one actually installed, and switch to the correct installation before rebuilding.
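The error message itself names a workaround for older CUDA: set CUDA_DOCKER_ARCH before building. A sketch, using compute_75 purely as an example value (look up your own card's compute capability at https://developer.nvidia.com/cuda-gpus):

```bash
# For CUDA < 11.7 the Makefile needs an explicit target architecture.
# compute_75 is an example (Turing cards); substitute your GPU's value.
export CUDA_DOCKER_ARCH=compute_75
make clean
make GGML_CUDA=1
```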
NOTICE: The 'server' binary is deprecated. Please use 'llama-server' instead.

Note: commands may stop working as new versions are released.

Expected result

The successful build output is long; only part of it was captured in the original screenshot.

Calling the model

Install llama-cpp-python; this step is slow, because pip compiles llama.cpp from source.

```bash
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
```
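If pip reuses a previously built wheel from its cache, the CMAKE_ARGS flag is silently ignored and you end up with a CPU-only build. The llama-cpp-python README suggests forcing a from-source rebuild in that case:

```bash
# Rebuild from source so the CUDA flag actually takes effect
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir
```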

Invocation

```python
from langchain_community.chat_models import ChatLlamaCpp

local_model = "/data/pretrained/gguf/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf"
llm = ChatLlamaCpp(
    seed=1,
    temperature=0.5,
    model_path=local_model,
    n_ctx=8192,  # context window size
    n_gpu_layers=64,  # layers to offload to the GPU; lower this if VRAM runs out
    n_batch=12,  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
    max_tokens=8192,
    repeat_penalty=1.5,
    top_p=0.5,
    f16_kv=False,
    verbose=True,
)
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to Chinese. Translate the user sentence.",
    ),
    ("human",
     "OpenAI has a tool calling API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool. tool-calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally."),
]

ai_msg = llm.invoke(messages)
print(ai_msg.content)
```

If the output contains lines like the following, the GPU has been detected:

```
ggml_cuda_init: found 2 CUDA devices:
  Device 0: <your GPU model>, compute capability <X.Y>, VMM: yes
  Device 1: <your GPU model>, compute capability <X.Y>, VMM: yes
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors:        CPU buffer size =   344.44 MiB
llm_load_tensors:      CUDA0 buffer size =  2932.34 MiB
llm_load_tensors:      CUDA1 buffer size =  2183.15 MiB
```
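You can also confirm the offload independently of the log: while the model is loaded, VRAM usage on each card should roughly match the CUDA0/CUDA1 buffer sizes above. A quick check from a second terminal:

```bash
# Refresh GPU memory/utilization once per second while the model runs
watch -n 1 nvidia-smi
```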