Large Language Models: Building Ollama from Source on Linux

To dig deeper into Ollama's internals and make code analysis and debugging easier, this article walks through the full process of building Ollama from source on Linux. The entire build runs on a machine without a GPU, so it produces a CPU-only build of Ollama.

Prerequisites

Building Ollama requires a few basic development tools and libraries:

```shell
$ go version
go version go1.23.4 linux/amd64
$ git --version
git version 2.25.1
$ make --version
GNU Make 4.2.1
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ gcc --version
gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```

Building from Source

```shell
# Clone the source, checking out the v0.5.7 tag
$ git clone -b v0.5.7 https://github.com/ollama/ollama.git ollama-0.5.7
Cloning into 'ollama-0.5.7'...
remote: Enumerating objects: 30629, done.
remote: Counting objects: 100% (163/163), done.
remote: Compressing objects: 100% (106/106), done.
remote: Total 30629 (delta 111), reused 57 (delta 57), pack-reused 30466 (from 4)
Receiving objects: 100% (30629/30629), 33.54 MiB | 1.79 MiB/s, done.
Resolving deltas: 100% (19453/19453), done.
Note: switching to 'a420a453b4783841e3e79c248ef0fe9548df6914'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false
$ cd ollama-0.5.7
$ go mod tidy
$ export CGO_ENABLED=1
$ make help
The following make targets will help you build Ollama

    make all            # (default target) Build Ollama llm subprocess runners, and the primary ollama executable
    make runners        # Build Ollama llm subprocess runners; after you may use 'go build .' to build the primary ollama executable
    make <runner>       # Build specific runners. Enabled: 'cpu'
    make dist           # Build the runners and primary ollama executable for distribution
    make help-sync      # Help information on vendor update targets
    make help-runners   # Help information on runner targets

The following make targets will help you test Ollama

    make test           # Run unit tests
    make integration    # Run integration tests.  You must 'make all' first
    make lint           # Run lint and style tests

For more information see 'docs/development.md'
# Run the build
$ make -j 5
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-0-ga420a45\"  " -trimpath -tags "avx" -o llama/build/linux-amd64/runners/cpu_avx/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-0-ga420a45\"  " -trimpath -tags "avx,avx2" -o llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-0-ga420a45\"  " -trimpath  -o ollama .
# Build only the main program (ollama)
$ go build .
```
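Note the `-ldflags "-X=github.com/ollama/ollama/version.Version=..."` in the build commands above: the version string is injected at link time rather than hard-coded in the source. The same mechanism in miniature (the file and variable names here are illustrative, not Ollama's actual layout):

```go
package main

import "fmt"

// Version holds a fallback value; a Makefile-style build overrides it
// at link time, e.g.:
//
//	go build -ldflags "-X main.Version=0.5.7-0-ga420a45" .
var Version = "0.0.0-dev"

func main() {
	fmt.Println("version:", Version)
}
```

Running `go build .` without the flag prints the fallback, while the Makefile's `-X` flag stamps the binary with the exact tag and commit it was built from.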

When the build finishes, it produces the runners (ollama_llama_server) and the main program (ollama):

```shell
$ find . -type f -executable | grep -v ".sh" | grep -v ".sample"
./ollama
./llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server
./llama/build/linux-amd64/runners/cpu_avx/ollama_llama_server
```
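Two runner variants are built because the main ollama process launches the most capable runner the host CPU supports at runtime. The selection idea can be sketched like this (a simplified illustration that reads `/proc/cpuinfo` on Linux; it is not Ollama's actual detection code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pickRunner returns the name of the most capable CPU runner variant
// given the contents of /proc/cpuinfo. Checking "avx2" before "avx"
// matters, since the former string contains the latter.
func pickRunner(cpuinfo string) string {
	switch {
	case strings.Contains(cpuinfo, "avx2"):
		return "cpu_avx2"
	case strings.Contains(cpuinfo, "avx"):
		return "cpu_avx"
	default:
		return "cpu"
	}
}

func main() {
	data, err := os.ReadFile("/proc/cpuinfo")
	if err != nil {
		fmt.Println("cpu") // conservative fallback
		return
	}
	fmt.Println(pickRunner(string(data)))
}
```

On most modern x86-64 machines this selects cpu_avx2, which is why both variants above are worth building.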

Running and Testing

```shell
$ ./ollama --help
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
# Start the server
$ ./ollama serve > ollama.log 2>&1 &
# Run a model
$ ./ollama run deepseek-r1:1.5b "Who are you?"
<think>

</think>

Hello! I am DeepSeek-R1, an AI assistant developed by the Chinese company DeepSeek (深度求索). If you have any questions, I will do my best to help you.
```
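Besides the CLI, the server started above also listens on 127.0.0.1:11434 by default, so the build can be exercised over HTTP as well. A minimal Go client sketch (the endpoint and request fields follow Ollama's documented /api/generate API; error handling is abbreviated):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest mirrors the documented /api/generate request body.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// marshalRequest builds the JSON body for a non-streaming generation call.
func marshalRequest(model, prompt string) ([]byte, error) {
	return json.Marshal(generateRequest{Model: model, Prompt: prompt, Stream: false})
}

func main() {
	body, _ := marshalRequest("deepseek-r1:1.5b", "Who are you?")
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("server not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

This only requires that `./ollama serve` is still running in the background and the model has been pulled.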