Integrating a Local Large Language Model into LangChain

Prerequisites: a network proxy may be needed to reach Hugging Face. Install the items below as needed; skip any that are already installed. Following the steps in order is recommended.

1. Configure the Python environment variables

2. Install transformers

3. Install datasets

4. Install tokenizers

5. Install PyTorch

6. Switch to the conda environment in PyCharm

7. Call a Hugging Face online model
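Before moving on, it can save time to confirm that each of the packages above is actually importable in the active environment. A minimal stdlib-only sketch (the `check_packages` helper and the package list are illustrative, not part of the original tutorial):

```python
import importlib.util

# Packages the tutorial above expects to be installed
packages = ["transformers", "datasets", "tokenizers", "torch",
            "langchain", "langchain_community"]

def check_packages(names):
    """Return a dict mapping package name -> whether it is importable."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    for name, ok in check_packages(packages).items():
        print(f"{name}: {'installed' if ok else 'MISSING'}")
```

Note that PyTorch installs under the module name `torch`, not `pytorch`, which is why the list checks for `torch`.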

一、Install LangChain

pip install langchain_community
pip install langchain

二、Integrate into LangChain

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_community.llms import HuggingFacePipeline

# Point to the snapshot directory that actually contains config.json
model_dir = r"/Users/taowuhua/Desktop/AI/demo_4/trasnFormers_test/model/uer/gpt2-chinese-cluecorpussmall/models--uer--gpt2-chinese-cluecorpussmall/snapshots/c2c0249d8a2731f269414cc3b22dff021f8e07a3"

# Load the model and tokenizer from the local directory
model = AutoModelForCausalLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# Build a text-generation pipeline from the loaded model and tokenizer
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device="cpu")

# Wrap the pipeline so LangChain can use it as an LLM
llm = HuggingFacePipeline(pipeline=generator)
response = llm.invoke("请介绍一下人工智能。")  # "Please introduce artificial intelligence."
print(response)

Note: importing `HuggingFacePipeline` from `langchain.llms` still works in older releases but is deprecated; since the tutorial installs `langchain_community`, import it from `langchain_community.llms` instead. Likewise, calling `llm(...)` directly is deprecated in favor of `llm.invoke(...)`.
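What `HuggingFacePipeline` does with the wrapped object is simple: a transformers text-generation pipeline returns a list of dicts, each carrying a `generated_text` key, and the wrapper pulls that string out for LangChain. A stdlib-only sketch of that unwrapping, where `fake_pipeline` is a hypothetical stand-in for the real `pipeline(...)` call (not the transformers API itself):

```python
def fake_pipeline(prompt):
    # A real text-generation pipeline returns a list of dicts; each
    # "generated_text" echoes the prompt followed by the continuation.
    return [{"generated_text": prompt + " ...generated continuation"}]

def extract_text(pipeline_output):
    """Pull the generated string out of a text-generation pipeline result."""
    return pipeline_output[0]["generated_text"]

if __name__ == "__main__":
    print(extract_text(fake_pipeline("请介绍一下人工智能。")))
```

Knowing this shape helps when debugging: if `llm.invoke(...)` raises a key or index error, print the raw `generator(...)` output first and check that it matches this list-of-dicts structure.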