AgenticSeek: an open-source, fully local Manus AI alternative. No APIs needed; enjoy an autonomous agent that thinks, browses the web, and writes code, for no more than the cost of electricity.

1. Software Introduction

Download links for the program and source code are provided at the end of this article.

AgenticSeek is an open-source, fully local Manus AI alternative. No APIs required: enjoy an autonomous agent that thinks, browses the web, and writes code, for no more than the cost of electricity. This voice-enabled AI assistant is a 100% local alternative to Manus AI that autonomously browses the web, writes code, and plans tasks while keeping all data on your device. It is tailored for local reasoning models and runs entirely on your hardware, ensuring complete privacy and zero cloud dependency.

2. Why AgenticSeek?

  • 🔒 Fully Local & Private - Everything runs on your machine: no cloud, no data sharing. Your files, conversations, and searches stay private.

  • 🌐 Smart Web Browsing - AgenticSeek can browse the internet by itself: search, read, extract info, fill web forms, all hands-free.

  • 💻 Autonomous Coding Assistant - Need code? It can write, debug, and run programs in Python, C, Go, Java, and more, all without supervision.

  • 🧠 Smart Agent Selection - You ask, it figures out the best agent for the job automatically. Like having a team of experts ready to help.

  • 📋 Plans & Executes Complex Tasks - From trip planning to complex projects, it can split big tasks into steps and get things done using multiple AI agents.

  • 🎙️ Voice-Enabled - Clean, fast, futuristic speech-to-text and text-to-speech let you talk to it like it's your personal AI from a sci-fi movie.

3. Installation

Make sure you have ChromeDriver, Docker, and Python 3.10 installed.

We strongly advise using exactly Python 3.10 for the setup; dependency errors may occur otherwise.

For issues related to ChromeDriver, see the Chromedriver Issues section.
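
As a quick, purely illustrative sanity check of these prerequisites (assuming python3 and docker are on your PATH), a short Python snippet can confirm the interpreter version and that Docker is available:

import shutil
import sys

# The setup is tested against Python 3.10; other versions may hit dependency errors.
if sys.version_info[:2] == (3, 10):
    print("Python 3.10 detected")
else:
    print(f"Warning: running Python {sys.version.split()[0]}; 3.10 is recommended")

# Docker (with Docker Compose) is needed for the bundled services.
print("docker found" if shutil.which("docker") else "docker NOT found on PATH")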

1️⃣ Clone the repository and setup

git clone https://github.com/Fosowl/agenticSeek.git
cd agenticSeek
mv .env.example .env

2️⃣ Create a virtual env

python3 -m venv agentic_seek_env
source agentic_seek_env/bin/activate
# On Windows: agentic_seek_env\Scripts\activate

3️⃣ Install package


Ensure Python, Docker with Docker Compose, and Google Chrome are installed.

We recommend Python 3.10.0.

Automatic Installation (Recommended):

For Linux/macOS:

./install.sh

For Windows:

./install.bat

Manually:

Note: For any OS, ensure the ChromeDriver you install matches your installed Chrome version. Run google-chrome --version. See Known Issues if you have Chrome > 135.

  • Linux:

Update package list: sudo apt update

Install dependencies: sudo apt install -y alsa-utils portaudio19-dev python3-pyaudio libgtk-3-dev libnotify-dev libgconf-2-4 libnss3 libxss1

Install ChromeDriver matching your Chrome browser version: sudo apt install -y chromium-chromedriver

Install requirements: pip3 install -r requirements.txt

  • macOS:

Update brew: brew update

Install chromedriver: brew install --cask chromedriver

Install portaudio: brew install portaudio

Upgrade pip: python3 -m pip install --upgrade pip

Upgrade setuptools and wheel: pip3 install --upgrade setuptools wheel

Install requirements: pip3 install -r requirements.txt

  • Windows:

Install pyreadline3: pip install pyreadline3

Install portaudio manually (e.g., via vcpkg or prebuilt binaries) and then run: pip install pyaudio

Download and install ChromeDriver manually from: https://sites.google.com/chromium.org/driver/getting-started

Place chromedriver in a directory included in your PATH.

Install requirements: pip3 install -r requirements.txt


4. Setup to Run the LLM Locally on Your Machine

Hardware Requirements:

To run LLMs locally, you'll need sufficient hardware. At a minimum, a GPU capable of running Qwen/Deepseek 14B is required. See the FAQ for detailed model/performance recommendations.

Set up your local provider

Start your local provider, for example with ollama:

ollama serve

See below for a list of supported local providers.

Update the config.ini

Change the config.ini file to set provider_name to a supported provider and provider_model to an LLM supported by your provider. We recommend a reasoning model such as Qwen or Deepseek.

See the FAQ at the end of this article for required hardware.

[MAIN]
is_local = True # Whether you are running locally or with a remote provider.
provider_name = ollama # or lm-studio, openai, etc.
provider_model = deepseek-r1:14b # choose a model that fits your hardware
provider_server_address = 127.0.0.1:11434
agent_name = Jarvis # name of your AI
recover_last_session = True # whether to recover the previous session
save_session = True # whether to remember the current session
speak = True # text to speech
listen = False # speech to text, only for CLI
work_dir =  /Users/mlg/Documents/workspace # The workspace for AgenticSeek.
jarvis_personality = False # whether to use a more "Jarvis"-like personality (experimental)
languages = en zh # The list of languages; text to speech defaults to the first language on the list
[BROWSER]
headless_browser = True # whether to use a headless browser, recommended only if you use the web interface.
stealth_mode = True # Use undetected selenium to reduce browser detection

Warning: Do NOT set provider_name to openai if using LM Studio for running LLMs. Set it to lm-studio.

Note: Some providers (e.g. lm-studio) require http:// in front of the IP address. For example: http://127.0.0.1:1234
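
If you are unsure whether your local provider is reachable at the address you put in config.ini, a minimal sketch like this (assuming ollama's default address http://127.0.0.1:11434 and the requests package; adjust both to your setup) can confirm the endpoint answers before you launch AgenticSeek:

import requests

# Assumed address; change it to match provider_server_address in config.ini.
provider_url = "http://127.0.0.1:11434"

try:
    response = requests.get(provider_url, timeout=5)
    print(f"Provider reachable at {provider_url} (HTTP {response.status_code})")
except requests.exceptions.RequestException as exc:
    print(f"Provider not reachable at {provider_url}: {exc}")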

List of local providers

Provider | Local? | Description
--- | --- | ---
ollama | Yes | Run LLMs locally with ease using ollama as an LLM provider
lm-studio | Yes | Run LLMs locally with LM Studio (set provider_name to lm-studio)
openai | Yes | Use an OpenAI-compatible API (e.g. a llama.cpp server)

Next step: Start services and run AgenticSeek

See the Known Issues section if you are having issues.

See the Setup to Run with an API section if your hardware can't run Deepseek locally.

See the Config section for a detailed explanation of the config file.


5. Setup to Run with an API

Set the desired provider in config.ini. See below for a list of API providers.

[MAIN]
is_local = False
provider_name = google
provider_model = gemini-2.0-flash
provider_server_address = 127.0.0.1:5000 # doesn't matter

Warning: Make sure there are no trailing spaces in the config.

Export your API key: export <<PROVIDER>>_API_KEY="xxx"

Example: export TOGETHER_API_KEY="xxxxx"
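
To confirm the key is actually visible to new processes before starting AgenticSeek, a tiny illustrative check like this can help (TOGETHER_API_KEY is just the example above; substitute your provider's variable name):

import os

# Hypothetical example: the variable name depends on your chosen provider.
key_name = "TOGETHER_API_KEY"
value = os.environ.get(key_name)

if value:
    print(f"{key_name} is set ({len(value)} characters)")
else:
    print(f'{key_name} is NOT set; run: export {key_name}="xxxxx"')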

List of API providers

Provider | Local? | Description
--- | --- | ---
openai | Depends | Use the ChatGPT API
deepseek | No | Deepseek API (non-private)
huggingface | No | Hugging Face API (non-private)
togetherAI | No | Use the Together AI API (non-private)
google | No | Use the Google Gemini API (non-private)

We advise against using gpt-4o or other closedAI models; performance is poor for web browsing and task planning.

Please also note that coding/bash tasks might fail with Gemini; it seems to ignore our prompts specifying the format to respect, which are optimized for deepseek r1.

Next step: Start services and run AgenticSeek

See the Known Issues section if you are having issues.

See the Config section for a detailed explanation of the config file.


Start Services and Run

Activate your Python environment if needed.

source agentic_seek_env/bin/activate

Start the required services. This will start all services from docker-compose.yml, including: searxng, redis (required by searxng), and the frontend.

sudo ./start_services.sh # macOS
start ./start_services.cmd # Windows

Option 1: Run with the CLI interface.

python3 cli.py

We advise setting headless_browser to False in config.ini for CLI mode.

Option 2: Run with the web interface.

Start the backend.

python3 api.py

Go to http://localhost:3000/ and you should see the web interface.


Usage

Make sure the services are up and running with ./start_services.sh, then run AgenticSeek with python3 cli.py for CLI mode, or run python3 api.py and go to localhost:3000 for the web interface.

You can also use speech to text by setting listen = True in the config. This only works in CLI mode.

To exit, simply say/type goodbye.

Here are some usage examples:

Make a snake game in python!

Search the web for top cafes in Rennes, France, and save a list of three with their addresses in rennes_cafes.txt.

Write a Go program to calculate the factorial of a number, save it as factorial.go in your workspace

Search my summer_pictures folder for all JPG files, rename them with today's date, and save a list of renamed files in photos_list.txt

Search online for popular sci-fi movies from 2024 and pick three to watch tonight. Save the list in movie_night.txt.

Search the web for the latest AI news articles from 2025, select three, and write a Python script to scrape their titles and summaries. Save the script as news_scraper.py and the summaries in ai_news.txt in /home/projects

Friday, search the web for a free stock price API, register with [email protected] then write a Python script to fetch using the API daily prices for Tesla, and save the results in stock_prices.csv

Note that form-filling capabilities are still experimental and might fail.

After you type your query, AgenticSeek will allocate the best agent for the task.

Because this is an early prototype, the agent routing system might not always allocate the right agent based on your query.

Therefore, you should be very explicit about what you want and how the AI should proceed. For example, if you want it to conduct a web search, do not say:

Do you know some good countries for solo-travel?

Instead, ask:

Do a web search and find out which are the best countries for solo-travel


Setup to Run the LLM on Your Own Server

If you have a powerful computer or a server that you can use, but you want to use it from your laptop, you have the option of running the LLM on a remote machine using our custom LLM server.

On the "server" that will run the AI model, get its IP address:

ip a | grep "inet " | grep -v 127.0.0.1 | awk '{print $2}' | cut -d/ -f1 # local ip
curl https://ipinfo.io/ip # public ip

Note: On Windows or macOS, use ipconfig or ifconfig respectively to find the IP address.

Clone the repository and enter the llm_server/ folder.

git clone --depth 1 https://github.com/Fosowl/agenticSeek.git
cd agenticSeek/llm_server/

Install the server-specific requirements:

pip3 install -r requirements.txt

Run the server script:

python3 app.py --provider ollama --port 3333

You can choose between ollama and llamacpp as the LLM service.

Now, on your personal computer:

Change the config.ini file to set provider_name to server and provider_model to deepseek-r1:xxb. Set provider_server_address to the IP address of the machine that will run the model.

[MAIN]
is_local = False
provider_name = server
provider_model = deepseek-r1:70b
provider_server_address = x.x.x.x:3333

Next step: Start services and run AgenticSeek


Speech to Text

Please note that speech to text currently only works in English.

The speech-to-text functionality is disabled by default. To enable it, set the listen option to True in the config.ini file:

listen = True

When enabled, the speech-to-text feature listens for a trigger keyword, which is the agent's name, before it begins processing your input. You can customize the agent's name by updating the agent_name value in the config.ini file:

agent_name = Friday

For optimal recognition, we recommend using a common English name like "John" or "Emma" as the agent name.

Once you see the transcript start to appear, say the agent's name aloud to wake it up (e.g., "Friday").

Speak your query clearly.

End your request with a confirmation phrase to signal the system to proceed. Examples of confirmation phrases include:

"do it", "go ahead", "execute", "run", "start", "thanks", "would ya", "please", "okay?", "proceed", "continue", "go on", "do that", "go it", "do you understand?"

Config

Example config:

[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
provider_server_address = 127.0.0.1:11434
agent_name = Friday
recover_last_session = False
save_session = False
speak = False
listen = False
work_dir =  /Users/mlg/Documents/ai_folder
jarvis_personality = False
languages = en zh
[BROWSER]
headless_browser = False
stealth_mode = False

Explanation (a quick validation sketch follows this list):

  • is_local -> Runs the agent locally (True) or on a remote server (False).

  • provider_name -> The provider to use (one of: ollama, server, lm-studio, deepseek-api).

  • provider_model -> The model used, e.g., deepseek-r1:32b.

  • provider_server_address -> Server address, e.g., 127.0.0.1:11434 for local. Set to anything for a non-local API.

  • agent_name -> Name of the agent, e.g., Friday. Used as the trigger word for speech to text.

  • recover_last_session -> Restarts from the last session (True) or not (False).

  • save_session -> Saves session data (True) or not (False).

  • speak -> Enables voice output (True) or not (False).

  • listen -> Listens to voice input (True) or not (False).

  • work_dir -> Folder the AI will have access to, e.g., /Users/user/Documents/.

  • jarvis_personality -> Uses a JARVIS-like personality (True) or not (False). This simply changes the prompt file.

  • languages -> The list of supported languages, needed for the LLM router to work properly. Avoid listing too many or overly similar languages; the longer the list, the more models will be downloaded.

  • headless_browser -> Runs the browser without a visible window (True) or not (False).

  • stealth_mode -> Makes bot detection harder. The only downside is that you have to manually install the anticaptcha extension.
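
As a rough illustration of how these settings fit together, here is a minimal sketch (using Python's standard configparser; the file name config.ini and the field names are taken from the example above) that loads the config and flags a couple of the common mistakes mentioned in this article, such as an unknown provider name or a missing http:// prefix for lm-studio:

import configparser

# Providers listed in this article.
KNOWN_PROVIDERS = {"ollama", "server", "lm-studio", "openai", "deepseek-api",
                   "huggingface", "togetherAI", "google"}

# inline_comment_prefixes lets the parser ignore trailing "# ..." comments.
config = configparser.ConfigParser(inline_comment_prefixes=("#",))
config.read("config.ini")

main = config["MAIN"]
provider = main.get("provider_name", "").strip()
address = main.get("provider_server_address", "").strip()

if provider not in KNOWN_PROVIDERS:
    print(f"Unknown provider_name: {provider!r}")
if provider == "lm-studio" and not address.startswith("http://"):
    print("lm-studio usually needs http:// in front of the address")
print(f"is_local={main.get('is_local')}, model={main.get('provider_model')}")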

Providers

The table below shows the available providers:

Provider | Local? | Description
--- | --- | ---
ollama | Yes | Run LLMs locally with ease using ollama as an LLM provider
server | Yes | Host the model on another machine, run it from your local machine
lm-studio | Yes | Run LLMs locally with LM Studio (lm-studio)
openai | Depends | Use the ChatGPT API (non-private) or an OpenAI-compatible API
deepseek-api | No | Deepseek API (non-private)
huggingface | No | Hugging Face API (non-private)
togetherAI | No | Use the Together AI API (non-private)
google | No | Use the Google Gemini API (non-private)

To select a provider, change config.ini:

is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
provider_server_address = 127.0.0.1:5000

is_local: should be True for any locally running LLM, otherwise False.

provider_name: select the provider to use by its name; see the provider list above.

provider_model: set the model used by the agent.

provider_server_address: can be set to anything if you are not using the server provider.

Known Issues

Chromedriver Issues

Known error #1: chromedriver mismatch

Exception: Failed to initialize browser: Message: session not created: This version of ChromeDriver only supports Chrome version 113 Current browser version is 134.0.6998.89 with binary path

This happens if there is a mismatch between your browser and chromedriver versions.

You need to download the latest version from:

https://developer.chrome.com/docs/chromedriver/downloads

If you're using Chrome version 115 or newer, go to:

Chrome for Testing availability

and download the ChromeDriver version matching your OS.
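
To see at a glance whether the two versions match, a rough sketch like the following can print both version strings and compare their major numbers (it assumes google-chrome and chromedriver are on your PATH; on macOS and Windows the Chrome binary has a different name and location):

import re
import shutil
import subprocess

def version_of(binary: str) -> str:
    # Return the --version output of a binary, or an empty string if it is missing.
    path = shutil.which(binary)
    if path is None:
        return ""
    result = subprocess.run([path, "--version"], capture_output=True, text=True)
    return (result.stdout or result.stderr).strip()

chrome = version_of("google-chrome")   # e.g. "Google Chrome 134.0.6998.89"
driver = version_of("chromedriver")    # e.g. "ChromeDriver 134.0.6998.88 (...)"
print("Chrome:      ", chrome or "not found on PATH")
print("ChromeDriver:", driver or "not found on PATH")

majors = [re.search(r"(\d+)\.", text) for text in (chrome, driver)]
if all(majors) and majors[0].group(1) != majors[1].group(1):
    print("Major versions differ: download a matching ChromeDriver.")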

6. Connection Adapter Issues

Exception: Provider lm-studio failed: HTTP request failed: No connection adapters were found for '127.0.0.1:11434/v1/chat/completions'

Make sure you have http:// in front of the provider IP address:

provider_server_address = http://127.0.0.1:11434

SearxNG base URL must be provided


raise ValueError("SearxNG base URL must be provided either as an argument or via the SEARXNG_BASE_URL environment variable.")
ValueError: SearxNG base URL must be provided either as an argument or via the SEARXNG_BASE_URL environment variable.

Maybe you didn't rename .env.example to .env? You can also export SEARXNG_BASE_URL:

export SEARXNG_BASE_URL="http://127.0.0.1:8080"
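
If the error persists, a small illustrative check run from the repository root can confirm both that .env exists (the setup step renames .env.example) and that SEARXNG_BASE_URL is visible to the process:

import os
import pathlib

# The setup step renames .env.example to .env in the repository root.
if pathlib.Path(".env").exists():
    print(".env present")
else:
    print(".env missing: run mv .env.example .env")

base_url = os.environ.get("SEARXNG_BASE_URL")
print(f"SEARXNG_BASE_URL = {base_url}" if base_url else "SEARXNG_BASE_URL is not set")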

FAQ

Q: What hardware do I need?

Model Size | GPU | Comment
--- | --- | ---
7B | 8 GB VRAM | ⚠️ Not recommended. Performance is poor, frequent hallucinations, and planner agents will likely fail.
14B | 12 GB VRAM (e.g. RTX 3060) | ✅ Usable for simple tasks. May struggle with web browsing and planning tasks.
32B | 24+ GB VRAM (e.g. RTX 4090) | 🚀 Succeeds at most tasks, might still struggle with task planning.
70B+ | 48+ GB VRAM (e.g. Mac Studio) | 💪 Excellent. Recommended for advanced use cases.
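
As a rough rule of thumb (our own estimate, not from the project docs): with ~4-bit quantization, weights take on the order of 0.5 to 0.7 GB of VRAM per billion parameters, plus a few GB for the KV cache and runtime overhead, which is roughly how the tiers above line up:

# Back-of-the-envelope VRAM estimate, assuming ~4-bit quantized weights
# (~0.6 bytes per parameter) plus a fixed allowance for KV cache and overhead.
def estimate_vram_gb(params_billion: float,
                     bytes_per_param: float = 0.6,
                     overhead_gb: float = 3.0) -> float:
    return params_billion * bytes_per_param + overhead_gb

for size in (7, 14, 32, 70):
    print(f"{size}B -> roughly {estimate_vram_gb(size):.0f} GB of VRAM")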

Q: Why Deepseek R1 over other models?

Deepseek R1 excels at reasoning and tool use for its size. We think it's a solid fit for our needs; other models work fine, but Deepseek is our primary pick.

Q: I get an error running cli.py. What do I do?

Ensure your local provider is running (ollama serve), your config.ini matches your provider, and dependencies are installed. If none of that works, feel free to raise an issue.

Q: Can it really run 100% locally?

Yes. With the ollama, lm-studio, or server providers, all speech-to-text, LLM, and text-to-speech models run locally. Non-local options (OpenAI or other APIs) are optional.

7. Software Download

Quark Cloud Drive share

The information in this article comes from the original author's GitHub repository: [GitHub - Fosowl/agenticSeek: Fully Local Manus AI. No APIs, No 200 monthly bills. Enjoy an autonomous agent that thinks, browses the web, and code for the sole cost of electricity.](https://github.com/Fosowl/agenticSeek)
