Building llama.cpp with GPU support
For more details, see https://github.com/abetlen/llama-cpp-python; the official documentation is updated as versions iterate.
Download and enter the llama.cpp directory
Repository: https://github.com/ggerganov/llama.cpp
You can also clone it to a local machine first and then upload it to the server.
bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
Compile the source (make)
This generates binaries such as ./main and ./quantize. For details, see: https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md
Using the CPU
bash
make
Using the GPU
bash
make GGML_CUDA=1
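Before (re)building with CUDA, it can save time to confirm that the CUDA toolchain is actually visible to make. A minimal sanity check, assuming nvcc and nvidia-smi are already on your PATH:
bash
# Check that the CUDA compiler and driver are visible before building
nvcc --version
nvidia-smi
# Optionally parallelize the build across all CPU cores
make GGML_CUDA=1 -j$(nproc)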
Possible errors and their fixes
I ccache not found. Consider installing it for faster compilation.
bash
sudo apt-get install ccache
Makefile:1002: *** I ERROR: For CUDA versions < 11.7 a target CUDA architecture must be explicitly provided via environment variable CUDA_DOCKER_ARCH, e.g. by running "export CUDA_DOCKER_ARCH=compute_XX" on Unix-like systems, where XX is the minimum compute capability that the code needs to run on. A list with compute capabilities can be found here: https://developer.nvidia.com/cuda-gpus . Stop.
This means your CUDA version is too low. If you did not install the CUDA toolkit yourself, see the article "nvcc -V shows a CUDA version different from the one actually installed" for how to switch versions.
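If upgrading CUDA is not an option, the error message itself points to a workaround: export the target compute capability before running make. A sketch, assuming a card with compute capability 8.6 (look up your GPU's value at https://developer.nvidia.com/cuda-gpus):
bash
# Target compute capability 8.6 (example value; substitute your own GPU's)
export CUDA_DOCKER_ARCH=compute_86
make GGML_CUDA=1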
NOTICE: The 'server' binary is deprecated. Please use 'llama-server' instead.
Note: as versions iterate, some of these commands may stop working.
Expected result
The output is long; only a portion was captured in the original screenshot (omitted here).
Calling the model
Install llama-cpp-python (fairly slow, since it compiles from source)
bash
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
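If pip reuses a previously cached CPU-only wheel, the CUDA flag silently has no effect. A common fix is to force a clean source build with standard pip flags (a sketch, not specific to any one version):
bash
# Force a fresh source build so a cached CPU-only wheel is not reused
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir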
Invocation
python
from langchain_community.chat_models import ChatLlamaCpp

# Path to a local GGUF model file
local_model = "/data/pretrained/gguf/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf"

llm = ChatLlamaCpp(
    seed=1,
    temperature=0.5,
    model_path=local_model,
    n_ctx=8192,
    n_gpu_layers=64,  # Number of transformer layers to offload to the GPU(s)
    n_batch=12,  # Should be between 1 and n_ctx; consider the amount of VRAM in your GPU.
    max_tokens=8192,
    repeat_penalty=1.5,
    top_p=0.5,
    f16_kv=False,
    verbose=True,
)

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to Chinese. Translate the user sentence.",
    ),
    (
        "human",
        "OpenAI has a tool calling API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool. tool-calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.",
    ),
]

ai_msg = llm.invoke(messages)
print(ai_msg.content)
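For long generations, streaming the reply is often more pleasant than waiting for invoke() to return. ChatLlamaCpp supports LangChain's standard streaming interface, so a minimal sketch reusing the llm and messages defined above would be:
python
# Print the reply incrementally as chunks arrive instead of waiting for the full message
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)
print()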
If the output contains content like the following, the GPUs were found:
ggml_cuda_init: found 2 CUDA devices:
Device 0: <your GPU model>, compute capability <compute capability>, VMM: yes
Device 1: <your GPU model>, compute capability <compute capability>, VMM: yes
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: CPU buffer size = 344.44 MiB
llm_load_tensors: CUDA0 buffer size = 2932.34 MiB
llm_load_tensors: CUDA1 buffer size = 2183.15 MiB
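A complementary check, independent of the log lines above, is to watch VRAM usage from a second terminal while a request is running; if the buffers were really allocated on CUDA0/CUDA1, both devices should show memory in use:
bash
# Refresh GPU memory/utilization every second during inference
watch -n 1 nvidia-smi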