Ollama is a local large-model runtime built with a mix of Go and C/C++. It focuses on simplifying the local deployment and execution of large language models so that developers can explore LLM capabilities at low cost. This article walks through how Ollama works, based on the v0.5.7 source code. For usage instructions see 《Ollama大语言模型运行器》; for building from source see 《Linux系统下源码编译Ollama指南》.
bash
# A CPU-only build of Ollama v0.5.7 produces the main program (ollama) and the runner (ollama_llama_server)
root@ubuntu:/download/ollama-0.5.7# find . -type f -executable | grep -v ".sh" | grep -v ".sample"
./ollama
./llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server
./llama/build/linux-amd64/runners/cpu_avx/ollama_llama_server
System Architecture
Ollama uses a client/server (C/S) architecture, with the client and the server sharing the same executable, ollama. The serve command starts the server; every other command talks to the server over HTTP, asking it to pull, push, or run models. A minimal client sketch follows the list below.
- Client
Command-line program
Integrated clients: see Community Integrations.
The executable is ollama; the logic lives in cmd/cmd.go, and the program entry point is main.go
- Server
(1) Model management layer (HTTP server)
Model management: downloading, uploading, and creating model files
Resource allocation: GPU preferred over CPU
Access control
Reverse proxying
The executable is ollama, started with ollama serve; the logic lives in server/routes.go
(2) Model serving layer (HTTP server)
Model loading
Model inference: text generation and embedding computation
The executable is ollama or ollama_llama_server, started with ollama runner or ollama_llama_server runner; the logic lives in llama/runner/runner.go and llama/llama.go, and the entry point of ollama_llama_server is cmd/runner/main.go
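The wire protocol between the two sides is plain HTTP. As a quick illustration, the hypothetical standalone client below performs the same heartbeat check (HEAD /) and version query (GET /api/version) that the CLI issues; the real CLI wraps these calls in its api client package, so this is only a sketch of the protocol.
go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	base := "http://127.0.0.1:11434"

	// Heartbeat: the same HEAD / request every CLI command sends first.
	head, err := http.Head(base + "/")
	if err != nil {
		panic("ollama server is not running: " + err.Error())
	}
	head.Body.Close()

	// GET /api/version returns {"version":"..."}.
	resp, err := http.Get(base + "/api/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var v struct {
		Version string `json:"version"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("server version:", v.Version)
}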
Interaction Flow
Starting the Server
Command
The ollama serve command starts the server, which listens on http://127.0.0.1:11434 by default. The listen address can be changed with the OLLAMA_HOST environment variable, in the format <scheme>://<host>:<port>, where scheme is http or https.
bash
root@ubuntu:/download/ollama-0.5.7# OLLAMA_HOST=0.0.0.0:11434 OLLAMA_DEBUG=1 ./ollama serve
2025/03/03 20:09:29 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-03T20:09:29.858+08:00 level=INFO source=images.go:432 msg="total blobs: 11"
time=2025-03-03T20:09:29.858+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-03-03T20:09:29.859+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.7-0-ga420a45)"
time=2025-03-03T20:09:29.859+08:00 level=DEBUG source=common.go:80 msg="runners located" dir=/download/ollama-0.5.7/llama/build/linux-amd64/runners
time=2025-03-03T20:09:29.859+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/download/ollama-0.5.7/llama/build/linux-amd64/runners/cpu_avx/ollama_llama_server
time=2025-03-03T20:09:29.859+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/download/ollama-0.5.7/llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server
time=2025-03-03T20:09:29.859+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2]"
time=2025-03-03T20:09:29.859+08:00 level=DEBUG source=routes.go:1268 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2025-03-03T20:09:29.859+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2025-03-03T20:09:29.859+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-03-03T20:09:29.859+08:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-03-03T20:09:29.859+08:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
time=2025-03-03T20:09:29.859+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/download/ollama-0.5.7/llama/build/linux-amd64/libcuda.so* /download/ollama-0.5.7/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-03-03T20:09:29.861+08:00 level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths=[]
time=2025-03-03T20:09:29.861+08:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcudart.so*
time=2025-03-03T20:09:29.862+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/download/ollama-0.5.7/llama/build/linux-amd64/libcudart.so* /download/ollama-0.5.7/libcudart.so* /download/ollama-0.5.7/llama/build/linux-amd64/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2025-03-03T20:09:29.862+08:00 level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths=[]
time=2025-03-03T20:09:29.863+08:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2025-03-03T20:09:29.863+08:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-03-03T20:09:29.863+08:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="3.8 GiB" available="2.2 GiB"
From another terminal, check the process and the listening port:
bash
root@ubuntu:/download# netstat -antp | grep 11434
tcp6 0 0 :::11434 :::* LISTEN 2745295/./ollama
root@ubuntu:/download# ps -ef | grep -v grep | grep 2745295
root 2745295 2407821 0 20:09 pts/1 00:00:00 ./ollama serve
From another Windows machine, check the version:
bat
c:\>set OLLAMA_HOST=192.168.23.209:11434
c:\>ollama -v
ollama version is 0.5.7-0-ga420a45
Warning: client version is 0.5.7
Flow
go
serveCmd := &cobra.Command{
Use: "serve",
Aliases: []string{"start"},
Short: "Start ollama",
Args: cobra.ExactArgs(0),
RunE: RunServer,
}
- Initialize the keypair (initializeKeypair); if it does not exist it is created automatically: ${HOME}/.ollama/id_ed25519 is the private key and ${HOME}/.ollama/id_ed25519.pub is the public key (see the sketch after this list)
- Listen on the TCP port (127.0.0.1:11434)
- Register the request routes (s.GenerateRoutes)
- Initialize the scheduler (InitScheduler)
- Locate the runners (cpu_avx2, cpu_avx, cpu)
- Collect GPU information (discover.GetGPUInfo)
- Start the HTTP server
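The keypair step can be pictured as the simplified sketch below. It only mirrors the idea (generate an ed25519 pair on first run and write it under $HOME/.ollama); the encoding details of the real initializeKeypair may differ, so the PEM format used here is an assumption for brevity.
go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"os"
	"path/filepath"
)

// initializeKeypair (sketch): create $HOME/.ollama/id_ed25519{,.pub} if missing.
func initializeKeypair() error {
	home, err := os.UserHomeDir()
	if err != nil {
		return err
	}
	privPath := filepath.Join(home, ".ollama", "id_ed25519")
	pubPath := privPath + ".pub"

	// Only create the keypair when the private key is missing.
	if _, err := os.Stat(privPath); err == nil {
		return nil
	}
	if err := os.MkdirAll(filepath.Dir(privPath), 0o700); err != nil {
		return err
	}

	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return err
	}

	// PKCS#8/PEM is used here for brevity; the real code may store SSH-style keys.
	der, err := x509.MarshalPKCS8PrivateKey(priv)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: der})
	if err := os.WriteFile(privPath, privPEM, 0o600); err != nil {
		return err
	}

	pubDER, err := x509.MarshalPKIXPublicKey(pub)
	if err != nil {
		return err
	}
	pubPEM := pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: pubDER})
	return os.WriteFile(pubPath, pubPEM, 0o644)
}

func main() { _ = initializeKeypair() }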
Pulling a Model
The ollama pull command talks to the server over HTTP to pull a model. Model files are stored under $HOME/.ollama/models by default; the location can be changed with the OLLAMA_MODELS environment variable. The Ollama server that OLLAMA_HOST points to must already be running, otherwise the command fails:
bat
c:\>ollama pull deepseek-r1:1.5b
Error: could not connect to ollama app, is it running?
Command
bat
c:\>set OLLAMA_HOST=192.168.23.209:11434
c:\>ollama pull deepseek-r1:1.5b
pulling manifest
pulling aabd4debf0c8... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 GB
pulling 369ca498f347... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 387 B
pulling 6e4c38e1172f... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 KB
pulling f4d24e9138dd... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 148 B
pulling a85fe2a2e58e... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
Flow
Client
go
pullCmd := &cobra.Command{
Use: "pull MODEL",
Short: "Pull a model from a registry",
Args: cobra.ExactArgs(1),
PreRunE: checkServerHeartbeat,
RunE: PullHandler,
}
pullCmd.Flags().Bool("insecure", false, "Use an insecure registry")
- Send HEAD / to check that the server is running
- Send POST /api/pull to ask the server to download the model from the registry; model names take the form [namespace/]model[:tag] (a hedged sketch of this request follows the list)
- Print the download progress
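For illustration, here is a hedged sketch of issuing that pull request directly, assuming the documented /api/pull request body and its streamed NDJSON progress lines; it is not the CLI's actual code path.
go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{"model": "deepseek-r1:1.5b"})
	resp, err := http.Post("http://127.0.0.1:11434/api/pull", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Each line is a JSON object such as {"status":"pulling manifest"} or
	// {"status":"pulling aabd4debf0c8...","total":...,"completed":...}.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		var p struct {
			Status    string `json:"status"`
			Total     int64  `json:"total"`
			Completed int64  `json:"completed"`
		}
		if err := json.Unmarshal(sc.Bytes(), &p); err != nil {
			continue
		}
		if p.Total > 0 {
			fmt.Printf("%s: %d/%d bytes\n", p.Status, p.Completed, p.Total)
		} else {
			fmt.Println(p.Status)
		}
	}
}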
Server
- Parse the request (api.PullRequest)
- Download the manifest (pullModelManifest)
- Parse the manifest; the digest fields in the manifest become the blob file names, and the manifest records the list of layers
- Download the config file referenced by the manifest (downloadBlob); blob = Binary Large Object
- Download each layer's data file referenced by the manifest (downloadBlob)
- Verify the data files (verifyBlob); a sketch follows this list
- Save the manifest
- Send the response
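The verification step amounts to hashing each downloaded blob and comparing the result with the digest recorded in the manifest. A minimal sketch of that idea (the digest value is the model layer from the pull output above):
go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyBlob (sketch): recompute the SHA-256 of a blob file and compare it
// with the manifest digest, which looks like "sha256:aabd4d...".
func verifyBlob(path, digest string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := "sha256:" + hex.EncodeToString(h.Sum(nil))
	if got != digest {
		return fmt.Errorf("digest mismatch: got %s, want %s", got, digest)
	}
	return nil
}

func main() {
	err := verifyBlob(
		"/root/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc",
		"sha256:aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc",
	)
	fmt.Println(err)
}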
Note: model-related files
- Manifest (JSON): $HOME/.ollama/models/manifests/{host}/{namespace}/{model}/{tag}, using the Docker image manifest format application/vnd.docker.distribution.manifest.v2+json
- Config file (JSON): $HOME/.ollama/models/blobs/{ReplaceAll(manifest.config.digest, ":", "-")}, using the Docker image config format application/vnd.docker.container.image.v1+json
- Model file (binary): $HOME/.ollama/models/blobs/{ReplaceAll(manifest.layers[0].digest, ":", "-")}
- Template file (text): $HOME/.ollama/models/blobs/{ReplaceAll(manifest.layers[1].digest, ":", "-")}
- License (text): $HOME/.ollama/models/blobs/{ReplaceAll(manifest.layers[2].digest, ":", "-")}
- Parameter file (JSON): $HOME/.ollama/models/blobs/{ReplaceAll(manifest.layers[3].digest, ":", "-")}
A small sketch that resolves these paths from the manifest appears after the listing below.
bash
[root@ubuntu:/]# tree /root/.ollama/
/root/.ollama/
├── history
├── id_ed25519
├── id_ed25519.pub
└── models
├── blobs
│ ├── sha256-369ca498f347f710d068cbb38bf0b8692dd3fa30f30ca2ff755e211c94768150
│ ├── sha256-6e4c38e1172f42fdbff13edf9a7a017679fb82b0fde415a3e8b3c31c6ed4a4e4
│ ├── sha256-74dc1847f8673a8a29dd52b053502bba6bfe6b29c8cc3516af4fb16ec63cd8d0
│ ├── sha256-832dd9e00a68dd83b3c3fb9f5588dad7dcf337a0db50f7d9483f310cd292e92e
│ ├── sha256-9ee36184e616dfc76df4f5dd66f908dbde6979524ae36e6cefb67f532f798cb8
│ ├── sha256-a85fe2a2e58e2426116d3686dfdc1a6ea58640c1e684069976aa730be6c1fa01
│ ├── sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc
│ ├── sha256-e94a8ecb9327ded799604a2e478659bc759230fe316c50d686358f932f52776c
│ └── sha256-f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588
└── manifests
├── hf-mirror.com
│ └── Qwen
│ └── Qwen2.5-0.5B-Instruct-GGUF
│ └── Q2_K
└── registry.ollama.ai
└── library
└── deepseek-r1
└── 1.5b
9 directories, 14 files
[root@ubuntu:/]# cat /root/.ollama/models/manifests/registry.ollama.ai/library/deepseek-r1/1.5b | jq
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"digest": "sha256:a85fe2a2e58e2426116d3686dfdc1a6ea58640c1e684069976aa730be6c1fa01",
"size": 487
},
"layers": [
{
"mediaType": "application/vnd.ollama.image.model",
"digest": "sha256:aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc",
"size": 1117320512
},
{
"mediaType": "application/vnd.ollama.image.template",
"digest": "sha256:369ca498f347f710d068cbb38bf0b8692dd3fa30f30ca2ff755e211c94768150",
"size": 387
},
{
"mediaType": "application/vnd.ollama.image.license",
"digest": "sha256:6e4c38e1172f42fdbff13edf9a7a017679fb82b0fde415a3e8b3c31c6ed4a4e4",
"size": 1065
},
{
"mediaType": "application/vnd.ollama.image.params",
"digest": "sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588",
"size": 148
}
]
}
[root@ubuntu:/]# cat /root/.ollama/models/blobs/sha256-a85fe2a2e58e2426116d3686dfdc1a6ea58640c1e684069976aa730be6c1fa01 | jq
{
"model_format": "gguf",
"model_family": "qwen2",
"model_families": [
"qwen2"
],
"model_type": "1.8B",
"file_type": "Q4_K_M",
"architecture": "amd64",
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc",
"sha256:369ca498f347f710d068cbb38bf0b8692dd3fa30f30ca2ff755e211c94768150",
"sha256:6e4c38e1172f42fdbff13edf9a7a017679fb82b0fde415a3e8b3c31c6ed4a4e4",
"sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588"
]
}
}
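Tying the pieces together, the sketch below reads the manifest shown above and resolves each layer to its on-disk blob path using the ReplaceAll(digest, ":", "-") rule. Paths assume the default $HOME/.ollama/models location; this is an illustration, not Ollama's actual loading code.
go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

type layer struct {
	MediaType string `json:"mediaType"`
	Digest    string `json:"digest"`
	Size      int64  `json:"size"`
}

type manifest struct {
	Config layer   `json:"config"`
	Layers []layer `json:"layers"`
}

func main() {
	home, _ := os.UserHomeDir()
	root := filepath.Join(home, ".ollama", "models")

	data, err := os.ReadFile(filepath.Join(root,
		"manifests", "registry.ollama.ai", "library", "deepseek-r1", "1.5b"))
	if err != nil {
		panic(err)
	}

	var m manifest
	if err := json.Unmarshal(data, &m); err != nil {
		panic(err)
	}

	// Digest "sha256:xxx" maps to blob file "blobs/sha256-xxx".
	blobPath := func(digest string) string {
		return filepath.Join(root, "blobs", strings.ReplaceAll(digest, ":", "-"))
	}
	fmt.Println("config:", blobPath(m.Config.Digest))
	for _, l := range m.Layers {
		fmt.Printf("%-45s %s\n", l.MediaType, blobPath(l.Digest))
	}
}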
Running a Model
Command
bat
# Non-interactive session: a prompt is supplied, the generate endpoint is called, and the command exits after the response; --verbose appends timing statistics after the response
c:\>ollama run deepseek-r1:1.5b "你是谁?" --insecure --verbose
<think>
</think>
您好!我是由中国的深度求索(DeepSeek)公司开发的智能助手DeepSeek-R1。如您有任何任何问题,我会尽我所能为您提供帮助。
total duration: 1.774597451s
load duration: 19.346014ms
prompt eval count: 6 token(s)
prompt eval duration: 51ms
prompt eval rate: 117.65 tokens/s
eval count: 40 token(s)
eval duration: 1.702s
eval rate: 23.50 tokens/s
Flow
Client
go
runCmd := &cobra.Command{
Use: "run MODEL [PROMPT]",
Short: "Run a model",
Args: cobra.MinimumNArgs(1),
PreRunE: checkServerHeartbeat,
RunE: RunHandler,
}
runCmd.Flags().String("keepalive", "", "Duration to keep a model loaded (e.g. 5m)")
runCmd.Flags().Bool("verbose", false, "Show timings for response")
runCmd.Flags().Bool("insecure", false, "Use an insecure registry")
runCmd.Flags().Bool("nowordwrap", false, "Don't wrap words to the next line automatically")
runCmd.Flags().String("format", "", "Response format (e.g. json)")
- Parse the command-line arguments
- Send HEAD / to check that the server is running
- Send POST /api/show to fetch the model information; if the model is missing, send POST /api/pull to download it, then POST /api/show again
- Send POST /api/generate (a hedged sketch follows this list)
- Print the response
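A hedged sketch of that last step: POST to /api/generate and print the streamed NDJSON fragments. Field names follow the public API documentation (streaming is the default), so this approximates what the CLI does rather than reproducing it.
go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "deepseek-r1:1.5b",
		"prompt": "你是谁?",
	})
	resp, err := http.Post("http://127.0.0.1:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Each streamed line carries a "response" fragment; the last one has "done": true.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		var chunk struct {
			Response string `json:"response"`
			Done     bool   `json:"done"`
		}
		if err := json.Unmarshal(sc.Bytes(), &chunk); err != nil {
			continue
		}
		fmt.Print(chunk.Response)
		if chunk.Done {
			fmt.Println()
			break
		}
	}
}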
Server
- Parse the request (api.GenerateRequest)
- Validate the model name
- Load the model information
- Validate the model's capabilities and options
- Obtain a runner; if none is available, start one
- Forward the request to the runner (POST http://127.0.0.1:{port}/completion), which performs the actual inference
- Forward the runner's response back to the client
bash
[root@ubuntu:/]# ps -ef | grep -v grep | grep ollama
root 2256836 2207750 0 Feb20 pts/0 00:00:02 ollama serve
root 2280075 2256836 8 08:41 pts/0 00:00:29 /usr/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --ctx-size 8192 --batch-size 512 --verbose --threads 4 --no-mmap --parallel 4 --port 46385
[root@ubuntu:/]# netstat -antp | grep -E '2256836|2280075'
tcp 0 0 127.0.0.1:46385 0.0.0.0:* LISTEN 2280075/ollama_llam
tcp6 0 0 :::11434 :::* LISTEN 2256836/ollama
The main flags passed to the runner:
- Model path (--model)
- Context size (--ctx-size)
- Batch size (--batch-size)
- Thread count (--threads)
- Parallelism (--parallel)
- Disable memory mapping (--no-mmap)
- Enable Flash Attention (--flash-attn)
- Listen port (--port)
- Verbose output (--verbose)
A sketch of spawning the runner with these flags follows this list.
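Conceptually, the scheduler starts the runner as a child process with these flags. The simplified sketch below uses the path and port from the ps output above purely as example values; the real llm server code also handles GPU layer counts, port selection, health checks, and retries.
go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths, flag values, and the port are illustrative only.
	cmd := exec.Command(
		"/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server",
		"runner",
		"--model", "/root/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc",
		"--ctx-size", "8192",
		"--batch-size", "512",
		"--threads", "4",
		"--parallel", "4",
		"--no-mmap",
		"--verbose",
		"--port", "46385",
	)
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("runner pid:", cmd.Process.Pid)
	_ = cmd.Wait()
}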
Note 1: the runner (ollama_llama_server) loads the model and serves inference over HTTP. It is reachable only from the local machine, but can be exposed externally through a reverse proxy such as Nginx (a minimal Go sketch follows the endpoint list). ollama_llama_server exposes the following five REST API endpoints:
text
GET http://127.0.0.1:{port}/health
POST http://127.0.0.1:{port}/completion
POST http://127.0.0.1:{port}/embedding
POST http://127.0.0.1:{port}/tokenize
POST http://127.0.0.1:{port}/detokenize
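Because the runner binds to 127.0.0.1 only, exposing it on another interface requires a reverse proxy. The note above mentions Nginx; as a minimal illustration, the same idea can be sketched in Go with httputil. The runner port 46385 is simply the example value from the ps output above.
go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Forward everything received on 0.0.0.0:8080 to the local runner.
	target, _ := url.Parse("http://127.0.0.1:46385")
	proxy := httputil.NewSingleHostReverseProxy(target)
	log.Fatal(http.ListenAndServe("0.0.0.0:8080", proxy))
}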
Note 2: ollama_llama_server is built automatically when compiling from source; its entry point is github.com/ollama/ollama/cmd/runner/main.go.
bash
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-0-ga420a45\" " -trimpath -tags "avx" -o llama/build/linux-amd64/runners/cpu_avx/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-0-ga420a45\" " -trimpath -tags "avx,avx2" -o llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server ./cmd/runner
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.7-0-ga420a45\" " -trimpath -o ollama .