llama-factory || AutoDL platform || launching the web UI

The error is as follows:

```bash
root@autodl-container-d83e478b47-3def8c49:~/LLaMA-Factory# llamafactory-cli webui
* Running on local URL:  http://0.0.0.0:7860

Could not create share link. Missing file: /root/miniconda3/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.3.

Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps:

1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.3/frpc_linux_amd64
2. Rename the downloaded file to: frpc_linux_amd64_v0.3
3. Move the file to this location: /root/miniconda3/lib/python3.10/site-packages/gradio
```
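The three manual steps from the error message can be wrapped in a small helper. This is only a sketch: the URL and the destination directory are taken verbatim from the error output above and apply to this specific container; adjust them for your own environment.

```shell
# Sketch: download the frpc binary, rename it, and make it executable,
# mirroring steps 1-3 from the Gradio error message.
install_frpc() {
  url="$1"       # download URL (from the error message)
  dest_dir="$2"  # gradio package directory (from the error message)
  curl -fsSL -o "$dest_dir/frpc_linux_amd64_v0.3" "$url"
  chmod +x "$dest_dir/frpc_linux_amd64_v0.3"
}

# Usage with the values printed by Gradio on this machine:
# install_frpc https://cdn-media.huggingface.co/frpc-gradio-0.3/frpc_linux_amd64 \
#              /root/miniconda3/lib/python3.10/site-packages/gradio
```

If the container cannot reach the CDN directly, download the file on a machine that can and upload it to the instance instead.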

Solution: copy the downloaded frpc file into the gradio package directory, then change into that directory:

```bash
cp frpc_linux_amd64_v0.3 /root/miniconda3/lib/python3.10/site-packages/gradio
cd /root/miniconda3/lib/python3.10/site-packages/gradio
```
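The site-packages path above is specific to this container's miniconda Python 3.10; on a different image or Python version the directory will differ. A quick way to locate yours, using only the standard library:

```shell
# Print the site-packages directory of the current Python interpreter;
# the gradio package (and the frpc destination) lives under this path.
python3 -c "import sysconfig; print(sysconfig.get_paths()['purelib'])"
```

Append `/gradio` to the printed path to get the destination for the frpc binary.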

Make the file executable:

```bash
chmod +x frpc_linux_amd64_v0.3
```
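To confirm the fix before relaunching, you can check that the binary is in place and executable. The helper name and the hard-coded path are illustrative; the path matches this container's error message.

```shell
# Sketch: verify the frpc binary exists with the execute bit set.
check_frpc() {
  f="$1/frpc_linux_amd64_v0.3"
  if [ -x "$f" ]; then
    echo "frpc binary ready"
  else
    echo "frpc binary missing or not executable"
  fi
}

check_frpc /root/miniconda3/lib/python3.10/site-packages/gradio
```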

Run the command again:

```bash
llamafactory-cli webui
```

Alternatively, launch a chat demo directly against a local model:

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat \
    --model_name_or_path /root/autodl-fs/Qwen2.5-1.5B-Instruct \
    --template qwen
```

Result:

```bash
[INFO|2025-03-02 23:15:05] llamafactory.model.model_utils.attention:157 >> Using torch SDPA for faster training and inference.
[INFO|2025-03-02 23:15:05] llamafactory.model.loader:157 >> all params: 1,543,714,304
* Running on local URL:  http://0.0.0.0:7860
* Running on public URL: https://35d22b023607f1702a.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)
```

The `gradio.live` public URL in the output is the externally accessible share link (valid for 72 hours).
