LangChain Demo | How to Query Stack Overflow and Combine It with ReAct to Answer Code Questions

Background

I wanted to improve the quality of my interactions with the LLM. Previously I used a plain prompt -> answer pattern; now I want to apply the ReAct strategy and add Stack Overflow retrieval, so that the same LLM can deliver more value.

Challenges

1. How to query Stack Overflow

Step 1: `pip install stackapi`

Step 2: load the StackExchange tool

```python
from langchain.agents import load_tools

tools = load_tools(
    ["stackexchange"],
    llm=llm
)
```

Note: Stack Overflow is one of the sites in the Stack Exchange network, so the stackexchange tool covers it.
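
To sanity-check the retrieval on its own, you can call the loaded tool directly before wiring it into an agent. A minimal sketch, assuming an OpenAI key is available in the environment (the query string is just an example):

```python
from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = load_tools(["stackexchange"], llm=llm)
stackexchange = tools[0]

# Run the tool with a free-text query; it returns a string of excerpts
# from matching Stack Exchange questions and answers.
print(stackexchange.run("JUnit4 @Before vs @BeforeClass"))
```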

2. Too many interaction rounds push the token input past the LLM's limit

Approach 1: use ConversationSummaryBufferMemory

This kind of memory summarizes the earlier conversation so that the stored history stays within the configured number of tokens.

```python
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(
    llm = llm, # this llm is the one that writes the summary
    max_token_limit=4097,
    memory_key="chat_history"
)
```
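
Note that a max_token_limit close to the model's whole context window leaves little room for the prompt and the answer, so a lower value is usually safer. A small sketch of how this memory behaves on its own (the tiny limit is only there to force summarization quickly; `llm` is assumed to be the same ChatOpenAI instance as in the full script below):

```python
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(
    llm=llm,             # the model that writes the summary
    max_token_limit=40,  # deliberately tiny, just to trigger summarization
    memory_key="chat_history"
)

# Record a couple of turns; once the buffer exceeds the limit, the oldest
# turns are condensed into an LLM-written summary.
memory.save_context({"input": "What does @Test do in JUnit4?"},
                    {"output": "It marks a method as a test method."})
memory.save_context({"input": "And @Before?"},
                    {"output": "It runs setup code before each test method."})

# The agent sees whatever comes back under the "chat_history" key.
print(memory.load_memory_variables({})["chat_history"])
```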

Approach 2: set the max_iterations parameter

max_iterations is a parameter of the AgentExecutor (not of ZeroShotAgent itself), so it is passed when the executor is built:

```python
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=4, # cap the number of thought/action rounds so the tokens stay within the limit
    verbose=True
)
```
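
When the cap is hit, the executor stops with a stock "stopped due to iteration limit" style message rather than a real answer. If you still want a usable reply, one option is the executor's early_stopping_method setting, which asks the LLM to write a final answer from whatever it has gathered so far; a sketch, reusing the agent and tools from above:

```python
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=4,
    early_stopping_method="generate",  # on the last step, have the LLM produce a final answer instead of bailing out
    verbose=True
)
```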

3. The LLM always replies that it cannot answer

Many tutorials set the temperature to 0, claiming this yields the most accurate answers, but I found that with this setting the agent becomes overly cautious and simply says it does not know. Raising the temperature solved the problem.

Test question

What parts does a JUnit4 unit test case consist of?

Code

```python
from constants import PROXY_URL, KEY

import warnings
warnings.filterwarnings("ignore")

import langchain
langchain.debug = True

from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI

from langchain.agents import AgentExecutor, ZeroShotAgent
from langchain.chains import LLMChain
from langchain.memory import ConversationSummaryBufferMemory

llm = ChatOpenAI(
    temperature=0.7, # if the temperature is set very low, the LLM becomes overly cautious and ends up not giving an answer
    model_name="gpt-3.5-turbo-0613", 
    openai_api_key=KEY,
    openai_api_base=PROXY_URL
)

memory = ConversationSummaryBufferMemory(
    llm = llm, # this llm is the one that writes the summary
    max_token_limit=4097,
    memory_key="chat_history"
)

prefix = """You should be a proficient and helpful assistant in java unit testing with JUnit4 framework. You have access to the following tools:"""
suffix = """Begin!"

{chat_history}
Question: {input}
{agent_scratchpad}"""

tools = load_tools(
    ["stackexchange"],
    llm=llm
)

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
) # create_prompt assembles the ReAct-style prompt (Thought/Action/Observation) around the tools

llm_chain = LLMChain(llm=llm, prompt=prompt)

agent = ZeroShotAgent(
    llm_chain=llm_chain,
    tools=tools,
    verbose=True
)

agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=4, # cap the number of agent iterations so the tokens stay within the limit
    verbose=True,
    memory=memory
)

def ask_agent(question):
    answer = agent_chain.run(input=question)
    return answer

def main():
    test_question = "What parts does a JUnit4 unit test case consist of?"
    test_answer = ask_agent(test_question)
    return test_answer

if __name__ == "__main__":
    main()
```
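
Besides reading the debug log, one way to check in code which tools the agent actually called is a separate executor built with return_intermediate_steps=True. A sketch reusing the agent and tools above (memory is left off here, because with intermediate steps the chain returns more than one output key and the memory would not know which one to store):

```python
inspect_chain = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    return_intermediate_steps=True,  # expose the (action, observation) pairs
    verbose=True
)

result = inspect_chain({"input": "What parts does a JUnit4 unit test case consist of?"})
for action, observation in result["intermediate_steps"]:
    # which tool was called, and the first part of what it returned
    print(action.tool, "->", str(observation)[:200])
```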

Final output

```
[chain/end] [1:chain:AgentExecutor] [75.12s] Exiting Chain run with output:
{
  "output": "A JUnit4 unit test case consists of the following parts:\n1. Test class: This is a class that contains the test methods.\n2. Test methods: These are the methods that contain the actual test code. They are annotated with the @Test annotation.\n3. Assertions: These are used to verify the expected behavior of the code being tested. JUnit provides various assertion methods for this purpose.\n4. Annotations: JUnit provides several annotations that can be used to configure the test case, such as @Before, @After, @BeforeClass, and @AfterClass.\n\nOverall, a JUnit4 unit test case is a class that contains test methods with assertions, and can be configured using annotations."
}
```
