Large Language Models: Building Ollama from Source on Linux

To better understand Ollama's internals and make code analysis and debugging easier, this article walks through the full process of building Ollama from source on Linux. The entire build runs on a machine without a GPU, so only the CPU-only version of Ollama is built.

Prerequisites

Building Ollama requires a few basic development tools and libraries:

```bash
$ go version
go version go1.23.4 linux/amd64
$ git --version
git version 2.25.1
$ make --version
GNU Make 4.2.1
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ gcc --version
gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
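If any of these are missing, install them first. Below is a minimal setup sketch for Ubuntu/Debian (the package names and the Go version are assumptions; adjust them to your distribution and to the toolchain version pinned in Ollama's go.mod):

```bash
# Install the basic build tools (Ubuntu/Debian; names differ on other distros)
sudo apt update
sudo apt install -y git make gcc g++

# Distro Go packages are often too old; the official tarball from go.dev is safer.
wget https://go.dev/dl/go1.23.4.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.23.4.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
```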

Building from Source

```bash
# Clone the source, checking out the v0.5.7 tag
$ git clone -b v0.5.7 https://github.com/ollama/ollama.git ollama-0.5.7
Cloning into 'ollama-0.5.7'...
remote: Enumerating objects: 30629, done.
remote: Counting objects: 100% (163/163), done.
remote: Compressing objects: 100% (106/106), done.
remote: Total 30629 (delta 111), reused 57 (delta 57), pack-reused 30466 (from 4)
Receiving objects: 100% (30629/30629), 33.54 MiB | 1.79 MiB/s, done.
Resolving deltas: 100% (19453/19453), done.
Note: switching to 'a420a453b4783841e3e79c248ef0fe9548df6914'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false
$ cd ollama-0.5.7
$ go mod tidy
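# CGO must be enabled: the runner links the vendored llama.cpp C/C++ sources via cgo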
$ export CGO_ENABLED=1
$ make help
The following make targets will help you build Ollama

    make all           # (default target) Build Ollama llm subprocess runners, and the primary ollama executable
    make runners        # Build Ollama llm subprocess runners; after you may use 'go build .' to build the primary ollama exectuable
    make <runner>        # Build specific runners. Enabled: 'cpu'
    make dist        # Build the runners and primary ollama executable for distribution
    make help-sync         # Help information on vendor update targets
    make help-runners     # Help information on runner targets

The following make targets will help you test Ollama

    make test           # Run unit tests
    make integration    # Run integration tests.  You must 'make all' first
    make lint           # Run lint and style tests

For more information see 'docs/development.md'
# Run the build (-j 5 allows up to five parallel jobs)
$ make -j 5
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-0-ga420a45\"  " -trimpath -tags "avx" -o llama/build/linux-amd64/runners/cpu_avx/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-0-ga420a45\"  " -trimpath -tags "avx,avx2" -o llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-0-ga420a45\"  " -trimpath  -o ollama .
# Build only the main binary (ollama)
$ go build .
```
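Note the `-X=github.com/ollama/ollama/version.Version=...` linker flag in the build output: it bakes the version string into the binary at link time. A quick way to confirm you are running your own build:

```bash
# The reported version should match the string injected by the -X linker flag
./ollama -v
```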

After the build completes, you get the runners (ollama_llama_server) and the main binary (ollama):

```bash
$ find . -type f -executable | grep -v ".sh" | grep -v ".sample"
./ollama
./llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server
./llama/build/linux-amd64/runners/cpu_avx/ollama_llama_server
```
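Two CPU runners are produced, one built with AVX and one with AVX2; at startup Ollama launches the best runner the host CPU supports. To check which of these SIMD extensions your CPU advertises (a quick sketch, assuming the usual Linux /proc layout):

```bash
# Print the AVX-related flags advertised by the CPU; Ollama picks the matching runner
grep -m1 -o -w 'avx2\|avx' /proc/cpuinfo | sort -u
```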

Running and Testing

```bash
$ ./ollama --help
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
# Start the server
$ ./ollama serve > ollama.log 2>&1 &
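# Server output goes to ollama.log; 'tail -f ollama.log' is handy while debugging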
# Run a model
$ ./ollama run deepseek-r1:1.5b "Who are you?"
<think>

</think>

Hello! I am DeepSeek-R1, an intelligent assistant developed by DeepSeek, a Chinese company. If you have any questions, I will do my best to help you.
```
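`ollama serve` also exposes an HTTP API, by default on 127.0.0.1:11434 (configurable via the OLLAMA_HOST environment variable), so you can exercise the freshly built server directly, which is useful when tracing the request path through the code:

```bash
# Call the generate endpoint directly; "stream": false returns a single JSON response
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Who are you?",
  "stream": false
}'
```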