6. Installing FinGPT Locally

Contents

1. Dependency Environment Setup

Local environment

  • Python 3.10

  • CUDA (optional)

  • PyTorch

Cloud environment: ModelScope (魔搭)

2. Deploying the DeepSeek R1 Model

DeepSeek R1

  • 1.5B
    • CPU: 4 cores
    • RAM: 8 GB
    • GPU: not required
    • Use cases: chatbots, simple dialogue
  • 7B/8B
    • CPU: 6-8 cores
    • RAM: 16 GB
    • GPU: 8 GB VRAM recommended, e.g. RTX 3060/4060
    • Use cases: text summarization, translation, lightweight multi-turn dialogue
  • 14B
    • CPU: 12 cores
    • RAM: 64 GB
    • GPU: 16 GB VRAM recommended, e.g. RTX 3080/4080
    • Use cases: complex tasks, long-text understanding and generation
  • 32B
    • CPU: 16 cores
    • RAM: 128 GB
    • GPU: 24 GB VRAM recommended, e.g. RTX 3090/4090, A100 or V100
    • Use cases: high-precision professional tasks such as medical/financial/legal consulting
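The VRAM recommendations above track a common rule of thumb: fp16 weights cost about 2 bytes per parameter, while the 4-bit quantized builds that Ollama ships by default cost roughly 0.5 bytes per parameter (which is why a 7B model fits in an 8 GB card). A minimal sketch of that estimate; the function name and byte factors are illustrative, and activations/KV cache add further overhead on top:

```python
def estimated_vram_gb(params_billions: float, bytes_per_param: float = 2.0) -> float:
    """Rough weight-only memory estimate: parameter count * bytes per parameter.

    bytes_per_param: 2.0 for fp16/bf16 weights, ~0.5 for 4-bit quantization.
    Activations and KV cache are not included.
    """
    return params_billions * bytes_per_param

# Weight memory alone: fp16 vs. 4-bit quantized
for size in (1.5, 7, 14, 32):
    print(f"{size}B  fp16: ~{estimated_vram_gb(size):.0f} GB"
          f"  4-bit: ~{estimated_vram_gb(size, 0.5):.0f} GB")
```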
shell
# Linux environment (official install script)
curl -fsSL https://ollama.com/install.sh | sh

# ModelScope environment
modelscope download --model=modelscope/ollama-linux --local_dir ./ollama
cd ollama-linux
sudo chmod +x ./ollama-modelscope-install.sh
./ollama-modelscope-install.sh
ollama serve

# Environment variable: custom model storage path
OLLAMA_MODELS=<custom path>
# Multi-GPU support
CUDA_VISIBLE_DEVICES=0,1

# Pull and run the model through Ollama
ollama run deepseek-r1:7b
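Besides the interactive `ollama run` shell, a running `ollama serve` also exposes a local HTTP API (port 11434 by default), which is handy for scripting checks against the deployed model. A minimal standard-library sketch; it assumes the server is up and `deepseek-r1:7b` has already been pulled:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "deepseek-r1:7b") -> dict:
    """Request body for Ollama's /api/generate endpoint (streaming disabled)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """POST a single generate request to a local Ollama server, return the text."""
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the server to be running):
# print(ollama_generate("What does HDI mean for printed circuit boards?"))
```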

3. Deploying the FinGPT Model

Install the FinGPT Python dependencies

shell
pip install -U transformers==4.40.1 peft==0.5.0
pip install -U sentencepiece
pip install -U accelerate
pip install -U datasets
pip install -U bitsandbytes

# If the model runs on CPU
pip3 install torch torchvision torchaudio

# If the model runs on GPU, check the CUDA version first
nvcc --version
# Output: Cuda compilation tools, release 12.1, V12.1.66
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

Verify the PyTorch installation

python
import torch
print(torch.__version__)          # PyTorch version
print(torch.cuda.is_available())  # True if this build can see a CUDA GPU

Run FinGPT

python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizerFast
from peft import PeftModel  # 0.5.0


base_model = "meta-llama/Meta-Llama-3-8B"
peft_model = "FinGPT/fingpt-mt_llama3-8b_lora"
prompt = [
    """Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
    Input: FINANCING OF ASPOCOMP 'S GROWTH Aspocomp is aggressively pursuing its growth strategy by increasingly focusing on technologically more demanding HDI printed circuit boards PCBs.
    Answer:""",
    """Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
    Input: According to Gran , the company has no plans to move all production to Russia , although that is where the company is growing.
    Answer:"""
]

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")



# device_map places the weights on the detected device (the original hardcoded
# "cuda", which breaks on CPU-only machines); with device_map set, no extra
# .to(device) call is needed
model = LlamaForCausalLM.from_pretrained(
    base_model, trust_remote_code=True,
    device_map=str(device), torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, peft_model)
model = model.eval()

tokenizer = LlamaTokenizerFast.from_pretrained(base_model, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokens = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(device)

res = model.generate(**tokens, max_new_tokens=512)
res_sentences = [tokenizer.decode(i) for i in res]
out_text = [o.split("Answer:")[1] for o in res_sentences]


for sentiment in out_text:
    print(sentiment)
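Because `tokenizer.decode` is called without `skip_special_tokens=True`, the printed answers can still carry padding and end-of-text markers. A small cleanup helper; the particular special tokens listed (Llama-3's `<|eot_id|>`/`<|end_of_text|>` and Llama-2's `</s>`) are assumptions about the tokenizer in use:

```python
def extract_answer(decoded: str) -> str:
    """Keep only the text after the last 'Answer:' and drop special-token residue."""
    answer = decoded.rsplit("Answer:", 1)[-1]
    for tok in ("<|begin_of_text|>", "<|end_of_text|>", "<|eot_id|>", "</s>"):
        answer = answer.replace(tok, "")
    return answer.strip()

print(extract_answer("...news text...\nAnswer: positive<|end_of_text|>"))  # -> positive
```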

4. Summary and Questions

  1. How do you deploy the DeepSeek R1 model with Ollama?
  2. What are the differences between the FinGPT model variants?
  3. Which technique does FinGPT use to fine-tune its base model?
  4. How do you design a prompt that guides FinGPT through a sentiment-analysis task?
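On question 4: the sentiment prompts in the run example above all follow a single Instruction/Input/Answer template, so they can be generated from one small helper rather than written by hand. A sketch using the template text from the example; the function name is illustrative:

```python
def make_sentiment_prompt(news: str) -> str:
    """Build a FinGPT-style sentiment prompt: Instruction / Input / Answer."""
    return (
        "Instruction: What is the sentiment of this news? "
        "Please choose an answer from {negative/neutral/positive}\n"
        f"Input: {news}\n"
        "Answer:"
    )

print(make_sentiment_prompt("Company X beats quarterly earnings expectations."))
```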