Win11 local OpenClaw deployment hands-on, part 3: deploying via WSL

1. Install OpenClaw in WSL

I hit a pitfall here that cost a lot of time. One step is essential: stop the previously installed Windows-side service first, otherwise the installation fails.

1.1 Stop the service outside WSL

PS C:\Users\Administrator> openclaw gateway stop

🦞 OpenClaw 2026.2.22-2 (45febec) --- The UNIX philosophy meets your DMs.

Stopped Scheduled Task: OpenClaw Gateway
PS C:\Users\Administrator>
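Before installing inside WSL, it can be worth double-checking that nothing is still listening on the gateway's dashboard port (18789 in this setup). A minimal sketch, assuming `ss` is available in the WSL distro:

```shell
# Check whether anything still listens on the OpenClaw dashboard port
# (18789 here); with WSL mirrored networking a Windows-side listener can
# occupy the same port. A sketch, not an official OpenClaw check.
PORT=18789
if ss -tln 2>/dev/null | grep -q ":${PORT}\b"; then
  echo "port ${PORT} still in use - stop the old gateway first"
else
  echo "port ${PORT} looks free"
fi
```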

1.2 Install normally inside WSL

(base) gpu3090@DESKTOP-8IU6393:~/openclaw$ curl -fsSL https://openclaw.ai/install.sh | bash

  🦞 OpenClaw Installer
  Welcome to the command line: where dreams compile and confidence segfaults.

✓ Detected: linux
→ Detected a OpenClaw source checkout in: /home/gpu3090/openclaw
Choose install method:
  1) Update this checkout (git) and use it
  2) Install global via npm (migrate away from git)
Enter 1 or 2:
1

Install plan
OS: linux
Install method: git
Requested version: latest
Git directory: /home/gpu3090/openclaw
Git update: 1
Detected checkout: /home/gpu3090/openclaw
· Existing OpenClaw installation detected, upgrading

[1/3] Preparing environment
✓ Node.js v22.12.0 found
· Active Node.js: v22.12.0 (/home/gpu3090/.nvm/versions/node/v22.12.0/bin/node)
· Active npm: 10.9.0 (/home/gpu3090/.nvm/versions/node/v22.12.0/bin/npm)

[2/3] Installing OpenClaw
· Installing OpenClaw from git checkout: /home/gpu3090/openclaw
✓ Git already installed
✓ pnpm ready (pnpm)
· Repo has local changes; skipping git pull
✓ OpenClaw wrapper installed to $HOME/.local/bin/openclaw
· This checkout uses pnpm --- run pnpm install (or corepack pnpm install) for deps

[3/3] Finalizing setup

! PATH missing user-local bin dir (~/.local/bin): /home/gpu3090/.local/bin
  This can make openclaw show as "command not found" in new terminals.
  Fix (zsh: ~/.zshrc, bash: ~/.bashrc):
    export PATH="/home/gpu3090/.local/bin:$PATH"
· Running doctor to migrate settings
✓ Doctor complete

🦞 OpenClaw installed successfully (2026.2.23)!
Upgraded! Peter fixed stuff. Blame him if it breaks.


Source install details
Checkout: /home/gpu3090/openclaw
Wrapper: /home/gpu3090/.local/bin/openclaw
Update command: openclaw update --restart
Switch to npm: curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install.sh | bash -s -- --install-method npm

🦞 OpenClaw 2026.2.23 (c92c3ad)
   If it's repetitive, I'll automate it; if it's hard, I'll bring jokes and a rollback plan.

Dashboard URL: http://127.0.0.1:18789/
Copied to clipboard.
No GUI detected. Open from your computer:
ssh -N -L 18789:127.0.0.1:18789 gpu3090@<host>
Then open:
http://localhost:18789/
Docs:
https://docs.openclaw.ai/gateway/remote
https://docs.openclaw.ai/web/control-ui

FAQ: https://docs.openclaw.ai/start/faq
(base) gpu3090@DESKTOP-8IU6393:~/openclaw$
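The installer's PATH warning above can be resolved once and for all by appending the suggested export to the shell rc file. A sketch for bash (use `~/.zshrc` for zsh), written to be idempotent on re-runs:

```shell
# Add ~/.local/bin to PATH in ~/.bashrc only if it is not already there,
# then pick it up in the current shell as well.
BIN_DIR="$HOME/.local/bin"
RC="$HOME/.bashrc"
grep -qs "$BIN_DIR" "$RC" || echo "export PATH=\"$BIN_DIR:\$PATH\"" >> "$RC"
export PATH="$BIN_DIR:$PATH"
command -v openclaw >/dev/null || echo "openclaw still not on PATH - open a new terminal"
```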

2. Install Ollama inside WSL

(base) gpu3090@DESKTOP-8IU6393:~/openclaw$ curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
[sudo] password for gpu3090:
Sorry, try again.
[sudo] password for gpu3090:
Sorry, try again.
[sudo] password for gpu3090:
>>> Downloading ollama-linux-amd64.tar.zst
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> Nvidia GPU detected.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
(base) gpu3090@DESKTOP-8IU6393:~/openclaw$ sudo systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Mon 2026-02-23 22:14:32 CST; 37ms ago
    Process: 115390 ExecStart=/usr/local/bin/ollama serve (code=exited, status=1/FAILURE)
   Main PID: 115390 (code=exited, status=1/FAILURE)
        CPU: 26ms
(base) gpu3090@DESKTOP-8IU6393:~/openclaw$ sudo systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-02-23 22:15:00 CST; 520ms ago
   Main PID: 115525 (ollama)
      Tasks: 23 (limit: 77081)
     Memory: 144.8M (peak: 144.8M)
        CPU: 346ms
     CGroup: /system.slice/ollama.service
             ├─115525 /usr/local/bin/ollama serve
             └─115551 /usr/local/bin/ollama runner --ollama-engine --port 46275

Feb 23 22:15:00 DESKTOP-8IU6393 systemd[1]: ollama.service: Scheduled restart job, restart counter is at 11>
Feb 23 22:15:00 DESKTOP-8IU6393 systemd[1]: Started ollama.service - Ollama Service.
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.146+08:00 level=INFO source=routes>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.146+08:00 level=INFO source=routes>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.148+08:00 level=INFO source=images>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.148+08:00 level=INFO source=images>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.148+08:00 level=INFO source=routes>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.149+08:00 level=INFO source=runner>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.156+08:00 level=INFO source=server>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.420+08:00 level=INFO source=server>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.633+08:00 level=INFO source=runner>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.635+08:00 level=INFO source=server>
Feb 23 22:15:00 DESKTOP-8IU6393 ollama[115525]: time=2026-02-23T22:15:00.645+08:00 level=INFO source=server>

(base) gpu3090@DESKTOP-8IU6393:/mnt/e/AI/Ollama$ LLAMA_MODELS=/mnt/e/AI/Ollama OLLAMA_HOST=0.0.0.0:12346 OLLAMA_CONTEXT_LENGTH=64000  ollama serve
time=2026-02-23T23:42:56.837+08:00 level=INFO source=routes.go:1663 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:http://127.0.0.1:7897 HTTP_PROXY:http://127.0.0.1:7897 NO_PROXY:172.31.*,172.30.*,172.29.*,172.28.*,172.27.*,172.26.*,172.25.*,172.24.*,172.23.*,172.22.*,172.21.*,172.20.*,172.19.*,172.18.*,172.17.*,172.16.*,10.*,192.168.*,127.*,localhost,<local> OLLAMA_CONTEXT_LENGTH:64000 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:12346 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/gpu3090/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy:http://127.0.0.1:7897 https_proxy:http://127.0.0.1:7897 no_proxy:172.31.*,172.30.*,172.29.*,172.28.*,172.27.*,172.26.*,172.25.*,172.24.*,172.23.*,172.22.*,172.21.*,172.20.*,172.19.*,172.18.*,172.17.*,172.16.*,10.*,192.168.*,127.*,localhost,<local>]"
time=2026-02-23T23:42:56.838+08:00 level=INFO source=routes.go:1665 msg="Ollama cloud disabled: false"
time=2026-02-23T23:42:56.842+08:00 level=INFO source=images.go:473 msg="total blobs: 0"
time=2026-02-23T23:42:56.842+08:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-02-23T23:42:56.842+08:00 level=INFO source=routes.go:1718 msg="Listening on [::]:12346 (version 0.16.3)"
time=2026-02-23T23:42:56.849+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-02-23T23:42:56.850+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 45071"
time=2026-02-23T23:42:58.218+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 44649"
time=2026-02-23T23:42:58.425+08:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-02-23T23:42:58.425+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 46509"
time=2026-02-23T23:42:58.426+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 45479"
time=2026-02-23T23:42:58.661+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-67135303-3c02-f35c-3e58-dc2c1b4892fc filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4090 D" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:07:00.0 type=discrete total="24.0 GiB" available="20.7 GiB"
time=2026-02-23T23:42:58.661+08:00 level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="24.0 GiB" default_num_ctx=32768
Note a typo in the command above: it sets `LLAMA_MODELS`, which Ollama does not read, so the server log still reports the default `OLLAMA_MODELS:/home/gpu3090/.ollama/models`. The intended variable is `OLLAMA_MODELS=/mnt/e/AI/Ollama`.
(base) gpu3090@DESKTOP-8IU6393:~$ OLLAMA_HOST=http://localhost:12346 ollama list
NAME                                 ID              SIZE      MODIFIED
gpt-oss:20b                          17052f91a42e    13 GB     3 days ago
glm-4.7-flash:latest                 d1a8a26252f1    19 GB     4 days ago
qwen3:4b-instruct-2507-q8_0          aa7252f68dda    4.3 GB    3 months ago
security1:latest                     72a9312a47ab    4.3 GB    3 months ago
qwen3:8b                             500a1f067a9f    5.2 GB    3 months ago
dengcao/Qwen3-Reranker-8B:Q5_K_M     23d85d712bbc    5.8 GB    8 months ago
dengcao/Qwen3-Embedding-8B:Q5_K_M    6cd08f0d9bdb    5.4 GB    8 months ago
qwen3:14b                            bdbd181c33f2    9.3 GB    8 months ago
mxbai-embed-large:latest             468836162de7    669 MB    8 months ago
mistral-nemo:latest                  994f3b8b7801    7.1 GB    12 months ago
nomic-embed-text:latest              0a109f422b47    274 MB    12 months ago
deepseek-r1:7b                       0a8c26691023    4.7 GB    12 months ago
deepseek-r1:8b                       28f8fd6cdc67    4.9 GB    12 months ago
deepseek-r1:1.5b                     a42b25d8c10a    1.1 GB    12 months ago
deepseek-r1:14b                      ea35dfe18182    9.0 GB    12 months ago
deepseek-r1:32b                      38056bbcbb2d    19 GB     12 months ago
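The same listing can be fetched over Ollama's HTTP API. A jq-free sketch, assuming the server from the previous step is listening on port 12346 and returns the standard `/api/tags` response shape:

```shell
# Query /api/tags and pull out just the model names. The grep/cut pipeline
# assumes the standard response shape {"models":[{"name":"..."}, ...]}.
curl -s http://localhost:12346/api/tags \
  | grep -o '"name":"[^"]*"' \
  | cut -d'"' -f4
```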

3. Problem: Ollama is also installed outside WSL, and I want to share the downloaded models rather than download them again

Solution:

The fix is to symlink the Windows-side model directory into WSL:

mkdir -p ~/.ollama && ln -s /mnt/e/AI/Ollama ~/.ollama/models

This maps the Windows E:\AI\Ollama directory to the model directory of the WSL Ollama, which covers the core need of sharing one model store between WSL and Windows. Watch out for symlink permissions, the path format, and whether the Ollama process actually picks the link up. The steps below verify the link and fix the usual problems.
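One caveat with the one-liner: if `~/.ollama/models` already exists as a real directory, `ln -s` silently creates the link *inside* it (as `~/.ollama/models/Ollama`) instead of replacing it. A more defensive sketch:

```shell
# Create ~/.ollama/models as a symlink to the Windows model dir, refusing
# to touch an existing real directory and staying idempotent on re-runs.
TARGET="/mnt/e/AI/Ollama"
LINK="$HOME/.ollama/models"
mkdir -p "$HOME/.ollama"
if [ -e "$LINK" ] && [ ! -L "$LINK" ]; then
  echo "refusing to replace real directory $LINK" >&2
else
  ln -sfn "$TARGET" "$LINK"   # -n: replace the link itself, don't descend into it
fi
ls -ld "$LINK"
```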
Step 1: verify that the symlink was created (critical)

# 1. Check the symlink status (confirm it points at the right path)
ls -la ~/.ollama/models
# Expected output (an arrow pointing at the Windows path):
# lrwxrwxrwx 1 gpu3090 gpu3090 16 Feb 24 00:00 ~/.ollama/models -> /mnt/e/AI/Ollama

# 2. Verify the Windows path is accessible from WSL
ls -la /mnt/e/AI/Ollama
# Expected output: the blobs, manifests, etc. subdirectories (the model data)
Step 2: fix possible symlink problems (permissions / path)

Problem 1: insufficient permissions (Ollama cannot read the link target)

# Make the Windows-mounted directory readable and executable
sudo chmod -R 755 /mnt/e/AI/Ollama

# Make sure the user running Ollama can access it
sudo chown -R gpu3090:gpu3090 ~/.ollama/models

Note that on a default WSL DrvFs mount, chmod/chown under /mnt/e are ignored unless the drive is mounted with the metadata option (configurable in /etc/wsl.conf); by default Windows files already appear with permissive modes, so this step usually only matters on metadata-enabled mounts.

Problem 2: the Windows path contains spaces or Chinese characters (breaking the link)

If the path contained spaces or Chinese characters, it would need quoting or escaping:

# Example: the path is /mnt/e/AI/Ollama Models → quote the spaces
ln -s "/mnt/e/AI/Ollama Models" ~/.ollama/models

# Or recreate the link (removing the old one first)
rm ~/.ollama/models
mkdir -p ~/.ollama
ln -s "/mnt/e/AI/Ollama" ~/.ollama/models
Step 3: restart Ollama so it loads the models from the Windows directory

# 1. Kill all Ollama processes
sudo killall -9 ollama

# 2. Start Ollama (OLLAMA_MODELS pointing at the symlinked directory, port 12346)
OLLAMA_HOST=0.0.0.0:12346 OLLAMA_MODELS=/home/gpu3090/.ollama/models ollama serve &

# 3. Verify the models are loaded (gpt-oss:20b from the Windows directory should now appear)
OLLAMA_HOST=http://localhost:12346 ollama list
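Because `ollama serve &` returns immediately, running `ollama list` right after it can race the restart. A small helper to wait until the server answers (a sketch; `/api/version` is Ollama's version endpoint):

```shell
# wait_up CMD...: run CMD every second until it succeeds (max 30 tries).
wait_up() {
  for _ in $(seq 1 30); do
    "$@" >/dev/null 2>&1 && return 0
    sleep 1
  done
  return 1
}
# After restarting, wait for the API before listing models, e.g.:
# wait_up curl -fs http://localhost:12346/api/version \
#   && OLLAMA_HOST=http://localhost:12346 ollama list
```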
Step 4: verify that OpenClaw can call the models in the Windows directory

# 1. Make sure the OpenClaw config points at port 12346
nano ~/.openclaw/config.json
# Confirm: "baseUrl": "http://localhost:12346", "model": "gpt-oss:20b"

# 2. Start an OpenClaw chat
openclaw chat --model ollama:gpt-oss:20b
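Instead of opening the config in nano, the two settings mentioned above can be checked non-interactively. A sketch, assuming the config path and key names stated in this article:

```shell
# Print just the baseUrl/model lines from the OpenClaw config, with a
# fallback message if the file does not exist yet.
grep -E '"(baseUrl|model)"' ~/.openclaw/config.json 2>/dev/null \
  || echo "~/.openclaw/config.json not found"
```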