LangChain Demo | How to Query StackOverflow and Combine It with ReAct to Answer Code Questions

Background

I decided to improve the quality of my interactions with the LLM. Previously I used a direct prompt -> answer pattern; now I want to apply the ReAct strategy and add the ability to search StackOverflow, so that the same LLM can deliver more value.

Challenges

1. How to call StackOverflow

Step 1: pip install stackapi

Step 2: load the stackexchange tool

python
from langchain.agents import load_tools

tools = load_tools(
    ["stackexchange"],
    llm=llm
)

Note: StackOverflow is one of the subsites of StackExchange.
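As a quick sanity check (a small illustrative sketch, assuming llm has already been constructed as in the full code further down), the loaded tool can be run directly, without an agent, to confirm the StackExchange API is reachable; the query string is just an example:

python
# assumes `tools` was loaded as shown above; tools[0] is the stackexchange tool
result = tools[0].run("JUnit4 @Before vs @BeforeClass")
print(result)  # prints the top matching questions and answer excerpts as plain text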

2. Too many interaction rounds push the token input past the LLM's limit

Approach 1: use ConversationSummaryBufferMemory

This type of memory summarizes the earlier conversation so that it stays within the configured token budget.

python
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(
    llm=llm,  # this llm is only used to write the running summary
    max_token_limit=4097,
    memory_key="chat_history"
)
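To see what this memory actually feeds back into the prompt (a small illustrative sketch; the conversation turns below are made up), you can save a couple of turns manually and inspect the stored history:

python
# hypothetical turns, only to illustrate the summarizing behaviour
memory.save_context(
    {"input": "What does @Before do in JUnit4?"},
    {"output": "It marks a method that runs before every test method."}
)
memory.save_context(
    {"input": "And @BeforeClass?"},
    {"output": "It runs once before all tests in the class and must be static."}
)
# once the buffered turns exceed max_token_limit, older turns are replaced by an LLM-written summary
print(memory.load_memory_variables({})["chat_history"])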

Approach 2: set the max_iterations parameter (it belongs to the AgentExecutor, not to ZeroShotAgent itself)

python
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, 
    tools=tools, 
    max_iterations=4, # cap the number of reasoning/tool rounds so the accumulated prompt stays under the token limit
    verbose=True
)
3. The LLM keeps replying that it cannot answer

Many tutorials set the temperature to 0, claiming this gives the most accurate answers, but I found that with this setting the agent becomes extremely cautious and simply says it doesn't know. Raising the temperature solved the problem.

Test question

What parts does a JUnit4 unit test case consist of?

Code

python
from constants import PROXY_URL,KEY

import warnings
warnings.filterwarnings("ignore")

import langchain
langchain.debug = True # print detailed traces of every chain/agent/tool call

from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI

from langchain.agents import AgentExecutor, ZeroShotAgent
from langchain.chains import LLMChain
from langchain.memory import ConversationSummaryBufferMemory

llm = ChatOpenAI(
    temperature=0.7, # if this is set very low, the LLM becomes overly cautious and may refuse to answer
    model_name="gpt-3.5-turbo-0613", 
    openai_api_key=KEY,
    openai_api_base=PROXY_URL
)

memory = ConversationSummaryBufferMemory(
    llm=llm, # this llm is only used to write the running summary
    max_token_limit=4097,
    memory_key="chat_history"
)

prefix = """You are a proficient and helpful assistant for Java unit testing with the JUnit4 framework. You have access to the following tools:"""
suffix = """Begin!

{chat_history}
Question: {input}
{agent_scratchpad}"""

tools = load_tools(
    ["stackexchange"],
    llm=llm
)

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
) # this is where the ReAct prompt is assembled (tool descriptions plus the Thought/Action/Observation instructions)
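# (optional, illustrative) uncomment to inspect the assembled ReAct prompt template:
# print(prompt.template)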

llm_chain = LLMChain(llm=llm, prompt=prompt)

agent = ZeroShotAgent(
    llm_chain=llm_chain, 
    tools=tools, 
    verbose=True
)

agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, 
    tools=tools, 
    max_iterations=4, # cap the number of reasoning/tool rounds so the prompt stays under the token limit
    verbose=True, 
    memory=memory
)

def ask_agent(question):
    answer = agent_chain.run(input=question)
    return answer

def main():
    test_question = "What parts does a JUnit4 unit test case consist of?"
    test_answer = ask_agent(test_question)
    return test_answer

if __name__ == "__main__":
    main()
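The script imports PROXY_URL and KEY from a local constants.py that is not included in the post; a minimal placeholder (with hypothetical values that you must replace) could look like this:

python
# constants.py -- placeholder values, substitute your own
KEY = "sk-..."                             # your OpenAI API key
PROXY_URL = "https://api.openai.com/v1"    # base URL of the endpoint or proxy you use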

Final output

[chain/end] [1:chain:AgentExecutor] [75.12s] Exiting Chain run with output:

{
  "output": "A JUnit4 unit test case consists of the following parts:\n1. Test class: This is a class that contains the test methods.\n2. Test methods: These are the methods that contain the actual test code. They are annotated with the @Test annotation.\n3. Assertions: These are used to verify the expected behavior of the code being tested. JUnit provides various assertion methods for this purpose.\n4. Annotations: JUnit provides several annotations that can be used to configure the test case, such as @Before, @After, @BeforeClass, and @AfterClass.\n\nOverall, a JUnit4 unit test case is a class that contains test methods with assertions, and can be configured using annotations."
}
