llama.cpp Build Instructions

Overview

This document describes how to build llama.cpp from source on Linux; alternative install methods for macOS and Windows are noted at the end.

Prerequisites

  • GCC with C++17 support and std::filesystem support (GCC 9+ recommended)
  • CMake 3.15 or higher
  • Git
  • Make or Ninja build system
  • OpenMP support (usually included with GCC)
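A quick way to confirm these prerequisites are in place is to check each tool on the PATH (a minimal sketch; adjust the tool list to your environment):

```bash
# Print the version of each required build tool, or flag it as missing
for tool in gcc g++ cmake git make; do
    if command -v "$tool" >/dev/null 2>&1; then
        printf '%-6s %s\n' "$tool" "$("$tool" --version | head -n 1)"
    else
        printf '%-6s MISSING\n' "$tool"
    fi
done
```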

Quick Start

1. Clone the Repository

```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
```

2. Build with Default Options

```bash
# Configure the build
cmake -B build

# Build the main binary and libraries
cmake --build build --config Release -j$(nproc)
```

3. Verify Build

```bash
# Test the main binary
./build/bin/llama-cli --help

# Check binary info
file build/bin/llama-cli
```

Advanced Build Options

Build Configuration Options

```bash
cmake -B build [OPTIONS]
```

Common options:

  • -DCMAKE_BUILD_TYPE=Release - Optimized release build (default)
  • -DCMAKE_BUILD_TYPE=Debug - Debug build with symbols
  • -DBUILD_SHARED_LIBS=ON - Build shared libraries instead of static
  • -DLLAMA_CURL=OFF - Disable HTTP download support
  • -DCMAKE_CXX_STANDARD=17 - Specify C++ standard
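These flags compose into a single configure step. For example, a debug configuration with shared libraries in a separate build tree (the directory name build-debug is just an illustration):

```bash
# Debug build with shared libraries, kept separate from the release tree
cmake -B build-debug \
    -DCMAKE_BUILD_TYPE=Debug \
    -DBUILD_SHARED_LIBS=ON
cmake --build build-debug -j$(nproc)
```

Keeping separate build trees per configuration avoids reconfiguring when switching between debug and release work.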

CPU Optimizations

```bash
# Build with OpenBLAS support
cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS

# Build with Intel oneMKL
cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=Intel
```

Building Specific Targets

```bash
# Build only the main CLI tool
cmake --build build --config Release --target llama-cli -j$(nproc)

# Build simple example
cmake --build build --config Release --target llama-simple -j$(nproc)

# Build server
cmake --build build --config Release --target llama-server -j$(nproc)

# Build all targets
cmake --build build --config Release -j$(nproc)
```

Build Artifacts

Main Binaries

  • build/bin/llama-cli - Main command-line interface
  • build/bin/llama-server - HTTP server for model inference
  • build/bin/llama-simple - Simple example application

Libraries

  • build/bin/libllama.a - Static llama library
  • build/bin/libggml.a - Core GGML static library
  • build/bin/libggml-cpu.a - CPU backend static library

CLI Tools

  • build/bin/llama-gguf - GGUF file manipulation
  • build/bin/llama-gguf-hash - GGUF file hashing
  • build/bin/llama-gemma3-cli - Gemma3-specific CLI
  • build/bin/llama-llava-cli - LLaVA multimodal CLI
  • build/bin/llama-minicpmv-cli - MiniCPM-V CLI
  • build/bin/llama-qwen2vl-cli - Qwen2VL CLI

Note: the exact set of model-specific tools varies by release; newer versions consolidate the multimodal CLIs into a single llama-mtmd-cli.
Troubleshooting

GCC Version Issues

Problem: Build fails with std::filesystem linking errors.
Solution: Use a newer GCC version with proper filesystem support.

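Before choosing a fix, you can probe whether the default toolchain links std::filesystem cleanly (a sketch; the path /tmp/fs_check.cpp is arbitrary):

```bash
# Compile a minimal std::filesystem program; success means no extra flags are needed
cat > /tmp/fs_check.cpp <<'EOF'
#include <filesystem>
int main() { return std::filesystem::exists("/") ? 0 : 1; }
EOF
if g++ -std=c++17 /tmp/fs_check.cpp -o /tmp/fs_check 2>/dev/null && /tmp/fs_check; then
    echo "filesystem OK"
else
    echo "filesystem needs attention (see the options below)"
fi
```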
Option 1: Use GCC Toolset (RHEL/CentOS/UnionTechOS)
```bash
# Check available toolsets
dnf list gcc-toolset-*

# Install and enable GCC 12
sudo dnf install gcc-toolset-12
scl enable gcc-toolset-12 bash

# Or enable it for the current shell and build directly
source scl_source enable gcc-toolset-12
cmake -B build
cmake --build build --config Release -j$(nproc)
```
Option 2: Install GCC 9+ manually

```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install gcc-11 g++-11

# Set gcc-11/g++-11 as the default compilers
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 100
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-11 100
```
Option 3: Add explicit filesystem linking (needed only on GCC 8, where std::filesystem lives in the separate stdc++fs library)

```bash
cmake -B build -DCMAKE_EXE_LINKER_FLAGS="-lstdc++fs"
```

Common Build Errors

Error: ccache not found.
Solution: Install ccache, or disable the warning.

```bash
# Install ccache (recommended for faster rebuilds)
sudo apt-get install ccache  # Ubuntu/Debian
sudo dnf install ccache      # RHEL/Fedora

# Or silence the warning
cmake -B build -DGGML_CCACHE=OFF
```

Error: Missing OpenMP.
Solution: Install the OpenMP development packages for your compiler.

```bash
# Ubuntu/Debian (libomp-dev is the LLVM/Clang OpenMP runtime;
# GCC ships its own runtime, libgomp, with the compiler)
sudo apt-get install libomp-dev

# RHEL/Fedora
sudo dnf install libgomp-devel
```

Error: CMake minimum version required.
Solution: Upgrade CMake. Distribution packages can lag behind; if the packaged version is still too old, newer releases are available via pip or the Kitware APT repository.

```bash
# Ubuntu/Debian
sudo apt-get install cmake

# RHEL/Fedora
sudo dnf install cmake

# If the packaged version is still too old
pip install cmake
```

Build Performance Tips

Parallel Compilation

```bash
# Use all CPU cores
cmake --build build --config Release -j$(nproc)

# Use a specific number of cores
cmake --build build --config Release -j8
```

Using Ninja Generator

```bash
# Install Ninja
sudo apt-get install ninja-build  # Ubuntu/Debian
sudo dnf install ninja-build      # RHEL/Fedora

# Configure and build with Ninja
cmake -B build -G Ninja
cmake --build build --config Release
```

Using CCache

```bash
# Install ccache
sudo apt-get install ccache  # Ubuntu/Debian
sudo dnf install ccache      # RHEL/Fedora

# Point CMake at the ccache compiler wrappers
export CC="ccache gcc"
export CXX="ccache g++"
cmake -B build
```

Production Builds

For production deployments, use these recommended options:

```bash
cmake -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_SHARED_LIBS=OFF \
    -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON \
    -DGGML_NATIVE=ON \
    -DLLAMA_CURL=OFF

cmake --build build --config Release -j$(nproc)
```

Note: -DGGML_NATIVE=ON tunes the binaries for the build machine's CPU; they may fail with illegal-instruction errors on older hardware, so turn it off when distributing builds.

Verification

After building, verify the installation:

```bash
# Check version
./build/bin/llama-cli --version

# Test help
./build/bin/llama-cli --help

# Verify binary type
file build/bin/llama-cli

# Check dependencies (if shared libs)
ldd build/bin/llama-cli
```

Alternative Build Methods

Using Package Managers

Brew (macOS):

```bash
brew install llama.cpp
```

Nix:

```bash
nix-shell -p llama.cpp
```

Winget (Windows):

```powershell
winget install ggml.llama.cpp
```

Using Pre-built Binaries

Download from the GitHub Releases page for your platform.
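Downloads can be scripted against the release URL pattern. The tag and asset name below are placeholders, not real values; check the Releases page for the actual names for your platform:

```bash
# Hypothetical tag and asset name -- substitute values from the Releases page
TAG="b1234"
ASSET="llama-${TAG}-bin-ubuntu-x64.zip"
URL="https://github.com/ggml-org/llama.cpp/releases/download/${TAG}/${ASSET}"
echo "Would download: $URL"
# curl -LO "$URL" && unzip "$ASSET" -d llama.cpp-release
```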

Usage Example

After successful build:

```bash
# Download a model (example)
wget https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_K_M.gguf

# Run inference
./build/bin/llama-cli -m llama-2-7b-chat.Q4_K_M.gguf -p "Hello, how are you?" -n 50
```

Additional Resources

  • llama.cpp repository: https://github.com/ggml-org/llama.cpp
  • Detailed platform-specific build documentation: docs/build.md in the repository