How Can We Ensure That Information Generated by LLM-Based RAG Comes from Reliable Sources?

In the rapidly evolving field of artificial intelligence (AI), retrieval-augmented generation (RAG) has emerged as a powerful technique.

RAG bridges the gap between large language models (LLMs) and external knowledge sources, enabling AI systems to deliver more comprehensive and informative responses. One critical ingredient, however, is sometimes missing: transparency.

How can we be sure that the information a RAG system presents is actually grounded in reliable sources?

This article presents a compelling solution: RAG with source highlighting, powered by structured generation. This approach not only leverages RAG's ability to retrieve relevant information, but also highlights the specific sources that support each generated answer.

Understanding the Building Blocks

Before diving in, let's establish the core concept underpinning this approach:

Structured generation: this technique guides an LLM's output to follow a predefined structure. Think of it as handing the LLM a roadmap, ensuring the generated text conforms to a specific format.
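
To make this concrete, here is a minimal, purely illustrative sketch (it uses Pydantic, which the walkthrough below relies on as well) of what a "predefined structure" looks like: a schema that any valid output must satisfy. Structured generation goes a step beyond after-the-fact validation by steering decoding so that only conforming text can be produced in the first place.

python
from pydantic import BaseModel, ValidationError

# The "roadmap": every output must be a JSON object with exactly these fields
class Answer(BaseModel):
    answer: str
    confidence_score: float

# Conforms to the structure: parses fine
print(Answer.model_validate_json('{"answer": "Paris", "confidence_score": 0.9}'))

try:
    Answer.model_validate_json('{"answer": "Paris"}')  # missing a required field
except ValidationError as err:
    print(err)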

Advantages of RAG with Source Highlighting

Combining RAG with structured generation offers several benefits:

  • Enhanced trust and transparency: highlighted sources let users assess the credibility of the information presented. This fosters trust in the system and lets users examine the supporting evidence for themselves.
  • Improved explainability: by explicitly pointing to the sources behind an answer, the system becomes more transparent. Users can trace the reasoning, which aids debugging and further exploration of the knowledge base.
  • Broader applicability: the approach suits scenarios where users need not just an answer but also a justification and a clear audit trail. It is especially valuable in education, research, and legal settings.

Code Implementation

Let's walk through an implementation of RAG with source highlighting using structured generation.

Step I: Install the Libraries

python
!pip install pandas huggingface_hub pydantic outlines accelerate -q  # json is in the standard library, no install needed

Step II: Import the Libraries

python
import pandas as pd
import json
from huggingface_hub import InferenceClient

pd.set_option("display.max_colwidth", None)  # show full column contents when displaying DataFrames

repo_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Client for the Hugging Face Inference API serving the model above
llm_client = InferenceClient(model=repo_id, timeout=120)
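
Note that Meta-Llama-3-8B-Instruct is a gated model, so the client above assumes your environment is already authenticated with a Hugging Face token that has been granted access to it. If it is not, you can log in first; a minimal sketch (the token string is a placeholder):

python
from huggingface_hub import login

login(token="hf_...")  # replace with your own access token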

Step III: Prompt the Model

python
RELEVANT_CONTEXT = """
Document:

The weather is really nice in Paris today.
To define a stop sequence in Transformers, you should pass the stop_sequence argument in your pipeline or model.
"""

RAG_PROMPT_TEMPLATE_JSON = """
Answer the user query based on the source documents.

Here are the source documents: {context}

You should provide your answer as a JSON blob, and also provide all relevant short source snippets from the documents on which you directly based your answer, and a confidence score as a float between 0 and 1.
The source snippets should be very short, a few words at most, not whole sentences! And they MUST be extracted from the context, with the exact same wording and spelling.

Your answer should be built as follows, it must contain the "Answer:" and "End of answer." sequences.

Answer:
{{
  "answer": your_answer,
  "confidence_score": your_confidence_score,
  "source_snippets": ["snippet_1", "snippet_2", ...]
}}
End of answer.

Now begin!
Here is the user question: {user_query}.
Answer:
"""

USER_QUERY = "How can I define a stop sequence in Transformers?" 

prompt = RAG_PROMPT_TEMPLATE_JSON.format(context=RELEVANT_CONTEXT, user_query=USER_QUERY)

print(prompt)
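
One detail worth noting: the doubled braces {{ and }} in RAG_PROMPT_TEMPLATE_JSON are escapes for literal braces, since the template is rendered with Python's str.format; only {context} and {user_query} are actually substituted.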

Output:

text
Answer the user query based on the source documents.

Here are the source documents: 
Document:

The weather is really nice in Paris today.
To define a stop sequence in Transformers, you should pass the stop_sequence argument in your pipeline or model.

You should provide your answer as a JSON blob, and also provide all relevant short source snippets from the documents on which you directly based your answer, and a confidence score as a float between 0 and 1.
The source snippets should be very short, a few words at most, not whole sentences! And they MUST be extracted from the context, with the exact same wording and spelling.

Your answer should be built as follows, it must contain the "Answer:" and "End of answer." sequences.

Answer:
{
  "answer": your_answer,
  "confidence_score": your_confidence_score,
  "source_snippets": ["snippet_1", "snippet_2", ...]
}
End of answer.

Now begin!
Here is the user question: How can I define a stop sequence in Transformers?.
Answer:

Continuing with the code:

python
answer = llm_client.text_generation(
    prompt,
    max_new_tokens=1000,
)

# Keep only the JSON blob preceding the "End of answer." marker requested in the prompt
answer = answer.split("End of answer.")[0]
print(answer)

Output:

json
{
  "answer": "You should pass the stop_sequence argument in your pipeline or model.",
  "confidence_score": 0.9,
  "source_snippets": ["stop_sequence", "pipeline or model"]
}
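
With the JSON blob in hand, the source highlighting itself takes only a few lines of plain Python. The sketch below is an illustrative addition, not part of the original snippet: it parses the answer and wraps each returned snippet inside the retrieved context with ** markers.

python
parsed = json.loads(answer)

highlighted_context = RELEVANT_CONTEXT
for snippet in parsed["source_snippets"]:
    # The prompt requires snippets to be exact substrings of the context,
    # so a simple string replace is enough to mark them
    highlighted_context = highlighted_context.replace(snippet, f"**{snippet}**")

print(highlighted_context)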

Step IV: Constrained Decoding
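
The unconstrained generation above happened to produce valid JSON, but nothing guarantees that it always will. Constrained decoding closes this gap: at each generation step, the inference server only allows tokens that keep the output consistent with a target grammar, here a JSON schema derived from a Pydantic model, so the result is structurally valid by construction.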

python
from pydantic import BaseModel, Field, StringConstraints
from typing import List, Annotated

class AnswerWithSnippets(BaseModel):
    answer: Annotated[str, StringConstraints(min_length=10, max_length=100)]
    # Note: this field is named "confidence" while the prompt says "confidence_score";
    # under constrained decoding, the schema defined here is what the output must follow.
    confidence: Annotated[float, Field(ge=0.0, le=1.0)]
    source_snippets: List[Annotated[str, StringConstraints(max_length=30)]]

# Using text_generation
answer = llm_client.text_generation(
    prompt,
    grammar={"type": "json", "value": AnswerWithSnippets.model_json_schema()},
    max_new_tokens=250,
    temperature=1.6,  # deliberately high, to stress-test the grammar constraint
    return_full_text=False,
)
print(answer)

# Using the lower-level post call against the same endpoint
data = {
    "inputs": prompt,
    "parameters": {
        "temperature": 1.6,
        "return_full_text": False,
        "grammar": {"type": "json", "value": AnswerWithSnippets.schema()},
        "max_new_tokens": 250,
    },
}
answer = json.loads(llm_client.post(json=data))[0]["generated_text"]
print(answer)

Output:

json
{
  "answer": "You should pass the stop_sequence argument in your modemÏallerbate hassceneable measles updatedAt原因",
  "confidence": 0.9,
  "source_snippets": ["in Transformers", "stop_sequence argument in your"]
}

{
  "answer": "To define a stop sequence in Transformers, you should pass the stop-sequence argument in your...giÃ",
  "confidence": 1,
  "source_snippets": ["seq이야", "stration nhiên thị ji是什么hpeldo"]
}
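
Even at temperature 1.6, the grammar constraint guarantees syntactically valid JSON with the expected keys, which is why both responses parse cleanly; the high temperature instead shows up as degraded content: truncated answers and garbled snippets. Note also that the second response's snippets do not occur in the context at all. The grammar can enforce structure, but not verbatim extraction, so checking that each snippet is actually a substring of the retrieved documents remains a worthwhile post-hoc validation. At a normal temperature, the content itself would be coherent as well.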

Conclusion

RAG with source highlighting via structured generation represents a significant step forward in AI-driven information retrieval.

By giving users transparent, well-sourced answers, this technique builds trust, improves explainability, and broadens the applicability of RAG systems across domains.

As AI continues to evolve, this approach lays the groundwork for users to rely on AI-generated information with confidence, understanding the reasoning and evidence behind it.

