
🤖 Introduction
In the first two parts, we covered how to set up a local AI assistant using Ollama, Next.js, and various package integrations. In this article, we take a deeper look at building a knowledge-base-backed AI assistant using RAG (Retrieval-Augmented Generation) with LangChain, Ollama, and Pinecone.
We will cover in detail:
- Loading and preprocessing documents
- Splitting documents and embedding them into a vector space
- Storing the embeddings in Pinecone
- Querying those vectors for intelligent retrieval
🔧 Tools Used
📘 What is RAG?
RAG stands for Retrieval-Augmented Generation. It is a hybrid AI approach that combines two steps to improve response accuracy:
- **Retrieval**: search the knowledge base for relevant documents or snippets.
- **Generation**: use a language model (such as Gemma or LLaMA) to generate a response based on the retrieved content.
🔁 Process Overview
- **Load** the files (PDF, DOCX, TXT)
- **Split** them into readable chunks
- **Embed** the chunks as vector representations
- **Store** them in Pinecone
- **Query** Pinecone and generate context-aware answers based on the user's input
You can read more about this in the LangChain docs: js.langchain.com/docs/tutori...
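In code, the whole flow boils down to a retrieve-then-generate loop. Here is a minimal, self-contained sketch; the two function parameters are hypothetical stand-ins for the concrete implementations shown later in this post:

```typescript
// Minimal retrieve-then-generate sketch. The two function parameters are
// hypothetical stand-ins for the concrete implementations shown later.
async function answerWithRag(
  question: string,
  searchKnowledgeBase: (q: string) => Promise<string[]>, // retrieval step
  generateAnswer: (prompt: string) => Promise<string>    // generation step
): Promise<string> {
  const chunks = await searchKnowledgeBase(question);    // 1. retrieve relevant chunks
  const context = chunks.join('\n\n');                   // 2. assemble them into context
  return generateAnswer(                                 // 3. generate a grounded answer
    `Context:\n${context}\n\nQuestion: ${question}`
  );
}
```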
🧩 Key Packages and Documentation
| Package | Purpose | Docs |
| --- | --- | --- |
| `langchain` | Framework for chaining LLMs with tools | Docs |
| `@pinecone-database/pinecone` | Pinecone client | Docs |
| `@langchain/pinecone` | LangChain-Pinecone integration | Docs |
| `@langchain/community/embeddings/ollama` | Ollama embeddings for LangChain | Docs |
| `pdf-parse`, `mammoth` | Loading and reading PDF, DOCX, and TXT files | pdf-parse, mammoth |
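If you are starting from a fresh Next.js project, the packages above (plus the Vercel AI SDK and Ollama provider used later) can be installed in one go. This assumes npm; adjust for your package manager:

```bash
npm install langchain @langchain/core @langchain/community @langchain/pinecone @pinecone-database/pinecone pdf-parse mammoth ai ollama-ai-provider
```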
🧰 Tool Setup Overview
🔧 1. Set Up Pinecone
- Create an account on Pinecone: app.pinecone.io/?sessionTyp...
- Create an **index** with the following settings:
  - **Name**: e.g. `database_name`
  - **Vector type**: Dense
  - **Dimension**: 1024 (must match `mxbai-embed-large`)
  - **Metric**: Cosine
  - **Environment**: `us-east-1-aws`

You can pick one of the existing model presets, or use custom settings that match the model used in your project; I chose `mxbai-embed-large`.
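If you prefer code over the dashboard, recent versions of the Pinecone JS client can also create the index programmatically. A sketch with the settings above (a serverless index on AWS us-east-1 is assumed; check your SDK version, and run this in an async/ES-module context):

```typescript
import { Pinecone } from '@pinecone-database/pinecone';

// Sketch: create the index from code instead of the dashboard.
// Assumes a serverless index on AWS us-east-1 (recent SDK versions).
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
await pc.createIndex({
  name: 'database_name',
  dimension: 1024,   // must match mxbai-embed-large's output size
  metric: 'cosine',
  spec: { serverless: { cloud: 'aws', region: 'us-east-1' } },
});
```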
🛠 2. Configure .env
Add the following to `.env.local`:
```ini
PINECONE_API_KEY=your-api-key
PINECONE_INDEX_NAME=database_name
PINECONE_ENVIRONMENT=us-east-1-aws
OLLAMA_MODEL=gemma3:1b
```
🚀 3. Start Ollama and the Models
Make sure Ollama is installed, then start the chat model from your terminal:

```bash
ollama run gemma3:1b
```

Install the embedding model with:

```bash
ollama pull mxbai-embed-large
```

LangChain will reference the model locally like this:
```typescript
new OllamaEmbeddings({
  model: 'mxbai-embed-large',
  baseUrl: 'http://localhost:11434'
});
```
**Note**: you can find more models at js.langchain.com/docs/integr... and ollama.com/search, and explore other embedding models at js.langchain.com/docs/integr...
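A quick sanity check before wiring everything up: embed a test string and confirm the vector length matches the index dimension. A small sketch (run in an async context):

```typescript
import { OllamaEmbeddings } from '@langchain/community/embeddings/ollama';

// Sanity check: the embedding size must equal the Pinecone index dimension.
const embeddings = new OllamaEmbeddings({
  model: 'mxbai-embed-large',
  baseUrl: 'http://localhost:11434',
});
const vector = await embeddings.embedQuery('hello world');
console.log(vector.length); // expect 1024 for mxbai-embed-large
```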
🧪 How It Works: Step by Step
Below, I will walk through what we are building, along with the corresponding code snippets.
Step 1: Upload and Process Documents
- The user uploads a .pdf, .docx, or .txt file.
- The file is loaded with a langchain loader.
- The text is split into chunks with RecursiveCharacterTextSplitter (see the short sketch after this list).
- An array of LangChain Document objects is returned.
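To see what splitting actually produces, here is a small standalone sketch using the same settings as the pipeline below (chunkSize 1000, chunkOverlap 200); the sample text is made up for illustration:

```typescript
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';

// Standalone sketch with the same settings as the pipeline below.
// Consecutive chunks share up to 200 characters of overlap, so context
// that straddles a chunk boundary is not lost.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const sample = 'RAG pipelines split long documents into overlapping chunks. '.repeat(60);
const chunks = await splitter.splitText(sample); // string[] of <=1000-char pieces
console.log(chunks.length, chunks[0].length);
```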
Step 2: Embed and Store in Pinecone
- Embed the chunks with OllamaEmbeddings using `mxbai-embed-large`.
- Store the vectors in a namespace under the Pinecone index.
Step 3: Query for Context
- When the user asks a question, run a similarity search (a scored variant is sketched after this list).
- Retrieve the relevant chunks from Pinecone.
- Combine the chunks into a context block.
- Inject the context into the LLM's prompt as a system message.
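Before trusting the retrieval step, it can help to look at the actual similarity scores. LangChain's Pinecone store exposes a scored variant of the search; this debugging sketch mirrors the `queryDocuments` setup shown below:

```typescript
import { OllamaEmbeddings } from '@langchain/community/embeddings/ollama';
import { PineconeStore } from '@langchain/pinecone';
import { Pinecone } from '@pinecone-database/pinecone';

// Debugging sketch: inspect retrieval quality before injecting context.
// Mirrors the queryDocuments() setup in documentProcessing.ts below.
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const embeddings = new OllamaEmbeddings({ model: 'mxbai-embed-large', baseUrl: 'http://localhost:11434' });
const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
  pineconeIndex: pinecone.Index(process.env.PINECONE_INDEX_NAME!),
});
// Returns [Document, score] pairs instead of bare documents.
const scored = await vectorStore.similaritySearchWithScore('What is RAG?', 4);
for (const [doc, score] of scored) {
  console.log(score.toFixed(3), doc.pageContent.slice(0, 80));
}
```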
Here is the full implementation:

```typescript
// utils/documentProcessing.ts
import { OllamaEmbeddings } from '@langchain/community/embeddings/ollama';
import { Document } from '@langchain/core/documents';
import { PineconeStore } from '@langchain/pinecone';
import { Pinecone } from '@pinecone-database/pinecone';
import { DocxLoader } from 'langchain/document_loaders/fs/docx';
import { PDFLoader } from 'langchain/document_loaders/fs/pdf';
import { TextLoader } from 'langchain/document_loaders/fs/text';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
// Shared clients: Pinecone connection, local Ollama embeddings, and the text splitter
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const embeddings = new OllamaEmbeddings({ model: 'mxbai-embed-large', baseUrl: 'http://localhost:11434' });
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
// Load an uploaded file with the matching loader and split it into chunks
export async function processDocument(file: File | Blob, fileName: string): Promise<Document[]> {
let documents: Document[];
if (fileName.endsWith('.pdf')) documents = await new PDFLoader(file).load();
else if (fileName.endsWith('.docx')) documents = await new DocxLoader(file).load();
else if (fileName.endsWith('.txt')) documents = await new TextLoader(file).load();
else throw new Error('Unsupported file type');
return await textSplitter.splitDocuments(documents);
}
// Embed the chunks and upsert the resulting vectors into the Pinecone index
export async function storeDocuments(documents: Document[]): Promise<void> {
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX_NAME!);
await PineconeStore.fromDocuments(documents, embeddings, {
pineconeIndex,
maxConcurrency: 5,
namespace: 'your_namespace', // optional
});
}
// Similarity-search the index and return the top 4 matching chunks
export async function queryDocuments(query: string): Promise<Document[]> {
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX_NAME!);
const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
pineconeIndex,
maxConcurrency: 5,
namespace: 'your_namespace', // optional
});
return await vectorStore.similaritySearch(query, 4);
}
```
```typescript
// api/chat/upload/route.ts
import { processDocument, storeDocuments } from '@/utils/documentProcessing';
import { NextResponse } from 'next/server';
// Accept a multipart form upload, process the document, and store its embeddings
export async function POST(req: Request) {
const formData = await req.formData();
const file = formData.get('file') as File;
if (!file) return NextResponse.json({ error: 'No file provided' }, { status: 400 });
const documents = await processDocument(file, file.name);
await storeDocuments(documents);
return NextResponse.json({
message: 'Document processed and stored successfully',
fileName: file.name,
documentCount: documents.length
});
}
```
```typescript
// api/chat/route.ts
import { queryDocuments } from '@/utils/documentProcessing';
import { Message, streamText } from 'ai';
import { NextRequest } from 'next/server';
import { createOllama } from 'ollama-ai-provider';
const ollama = createOllama();
const MODEL_NAME = process.env.OLLAMA_MODEL || 'gemma3:1b';
// Retrieve context for the latest user message and stream a grounded answer
export async function POST(req: NextRequest) {
const { messages } = await req.json();
const lastMessage = messages[messages.length - 1];
const relevantDocs = await queryDocuments(lastMessage.content);
const context = relevantDocs.map((doc) => doc.pageContent).join('\n\n');
const systemMessage: Message = {
id: 'system',
role: 'system',
content: `You are a helpful AI assistant with access to a knowledge base.
Use the following context to answer the user's questions:\n\n${context}`,
};
const promptMessages = [systemMessage, ...messages];
const result = await streamText({
model: ollama(MODEL_NAME),
messages: promptMessages
});
return result.toDataStreamResponse();
}
```
Below are the code snippets for the UI.
```tsx
// ChatInput.tsx
'use client'
interface ChatInputProps {
input: string;
handleInputChange: (e: React.ChangeEvent<HTMLTextAreaElement>) => void;
handleSubmit: (e: React.FormEvent<HTMLFormElement>) => void;
isLoading: boolean;
}
export default function ChatInput({ input, handleInputChange, handleSubmit, isLoading }: ChatInputProps) {
return (
<form onSubmit={handleSubmit} className="flex gap-4">
<textarea
value={input}
onChange={handleInputChange}
placeholder="Ask a question about the documents..."
className="flex-1 p-4 border border-gray-200 dark:border-gray-700 rounded-xl
bg-white dark:bg-gray-800
placeholder-gray-400 dark:placeholder-gray-500
focus:outline-none focus:ring-2 focus:ring-blue-500 dark:focus:ring-blue-400
resize-none min-h-[50px] max-h-32
text-gray-700 dark:text-gray-200"
rows={1}
required
disabled={isLoading}
/>
<button
type="submit"
disabled={isLoading}
className={`px-6 py-2 rounded-xl font-medium transition-all duration-200
${isLoading
? 'bg-gray-100 dark:bg-gray-700 text-gray-400 dark:text-gray-500 cursor-not-allowed'
: 'bg-blue-500 hover:bg-blue-600 active:bg-blue-700 text-white shadow-sm hover:shadow'
}`}
>
{isLoading ? (
<span className="flex items-center gap-2">
<svg className="animate-spin h-4 w-4" viewBox="0 0 24 24">
<circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4" fill="none"/>
<path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"/>
</svg>
Processing
</span>
) : 'Send'}
</button>
</form>
);
}
```
```tsx
// ChatMessage.tsx
'use client'
import { Message } from 'ai';
import ReactMarkdown from 'react-markdown';
interface ChatMessageProps {
message: Message;
}
export default function ChatMessage({ message }: ChatMessageProps) {
return (
<div
className={`flex items-start gap-4 p-6 rounded-2xl shadow-sm transition-colors ${
message.role === 'assistant'
? 'bg-white dark:bg-gray-800 border border-gray-100 dark:border-gray-700'
: 'bg-blue-50 dark:bg-blue-900/30 border border-blue-100 dark:border-blue-800'
}`}
>
<div className={`w-8 h-8 rounded-full flex items-center justify-center flex-shrink-0 ${
message.role === 'assistant'
? 'bg-purple-100 text-purple-600 dark:bg-purple-900 dark:text-purple-300'
: 'bg-blue-100 text-blue-600 dark:bg-blue-900 dark:text-blue-300'
}`}>
{message.role === 'assistant' ? '🤖' : '👤'}
</div>
<div className="flex-1 min-w-0">
<div className="font-medium text-sm mb-2 text-gray-700 dark:text-gray-300">
{message.role === 'assistant' ? 'AI Assistant' : 'You'}
</div>
<div className="prose dark:prose-invert prose-sm max-w-none">
<ReactMarkdown>{message.content}</ReactMarkdown>
</div>
</div>
</div>
);
}
```
```tsx
// FileUpload.tsx
"use client"
import React, { useState } from 'react';
export default function FileUpload() {
const [isUploading, setIsUploading] = useState(false);
const [message, setMessage] = useState('');
const [error, setError] = useState('');
const handleFileUpload = async (e: React.ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0];
if (!file) return;
// Reset states
setMessage('');
setError('');
setIsUploading(true);
try {
const formData = new FormData();
formData.append('file', file);
const response = await fetch('/api/chat/upload', {
method: 'POST',
body: formData,
});
const data = await response.json();
if (!response.ok) {
throw new Error(data.error || 'Error uploading file');
}
setMessage(`Successfully uploaded ${file.name}`);
} catch (err) {
setError(err instanceof Error ? err.message : 'Error uploading file');
} finally {
setIsUploading(false);
}
};
return (
<div className="mb-6">
<div className="flex flex-col sm:flex-row items-center gap-4">
<label
className={`flex items-center gap-2 px-6 py-3 rounded-xl border-2 border-dashed
transition-all duration-200 cursor-pointer
${isUploading
? 'border-gray-300 bg-gray-50 dark:border-gray-700 dark:bg-gray-800/50'
: 'border-blue-300 hover:border-blue-400 hover:bg-blue-50 dark:border-blue-700 dark:hover:border-blue-600 dark:hover:bg-blue-900/30'
}`}
>
<svg
className={`w-5 h-5 ${isUploading ? 'text-gray-400' : 'text-blue-500'}`}
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
>
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M4 16v1a3 3 0 003 3h10a3 3 0 003-3v-1m-4-8l-4-4m0 0L8 8m4-4v12" />
</svg>
<span className={`font-medium ${isUploading ? 'text-gray-400' : 'text-blue-500'}`}>
{isUploading ? 'Uploading...' : 'Upload Document'}
</span>
<input
type="file"
className="hidden"
accept=".pdf,.docx"
onChange={handleFileUpload}
disabled={isUploading}
/>
</label>
<span className="text-sm text-gray-500 dark:text-gray-400 flex items-center gap-2">
<svg className="w-4 h-4" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
Supported: PDF, DOCX
</span>
</div>
{message && (
<div className="mt-4 p-4 bg-green-50 dark:bg-green-900/30 rounded-xl border border-green-100 dark:border-green-800">
<p className="text-sm text-green-600 dark:text-green-400 flex items-center gap-2">
<svg className="w-4 h-4" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M5 13l4 4L19 7" />
</svg>
{message}
</p>
</div>
)}
{error && (
<div className="mt-4 p-4 bg-red-50 dark:bg-red-900/30 rounded-xl border border-red-100 dark:border-red-800">
<p className="text-sm text-red-600 dark:text-red-400 flex items-center gap-2">
<svg className="w-4 h-4" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 8v4m0 4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
{error}
</p>
</div>
)}
</div>
);
}
```
```tsx
// ChatPage.tsx
"use client"
import { useChat } from 'ai/react';
import ChatInput from './ChatInput';
import ChatMessage from './ChatMessage';
import FileUpload from './FileUpload';
export default function ChatPage() {
const { input, messages, handleInputChange, handleSubmit, isLoading } = useChat({
api: '/api/chat',
onError: (error) => {
console.error('Chat error:', error);
alert('Error: ' + error.message);
}
});
return (
<div className="flex flex-col h-screen bg-gray-50 dark:bg-gray-900">
<div className="flex-1 max-w-5xl mx-auto w-full p-4 md:p-6 lg:p-8">
<div className="flex-1 overflow-y-auto mb-4 space-y-6">
<h1 className="text-3xl font-bold text-gray-900 dark:text-white text-center mb-8">
RAG-Powered Knowledge Base Chat
</h1>
<div className="bg-white dark:bg-gray-800 rounded-xl shadow-lg p-6">
<FileUpload />
</div>
<div className="space-y-6">
{messages.map((message) => (
<ChatMessage key={message.id} message={message} />
))}
</div>
</div>
<div className="sticky bottom-0 bg-white dark:bg-gray-800 rounded-xl shadow-lg p-4">
<ChatInput
input={input}
handleInputChange={handleInputChange}
handleSubmit={handleSubmit}
isLoading={isLoading}
/>
</div>
</div>
</div>
);
}
```
That's it! You can now run the app:

```bash
npm run dev
```

Click the "Upload Document" button and upload a document you want to store. After a successful upload, your Pinecone dashboard will look like this:

Once your documents are loaded, you can ask the AI assistant questions about their content and get accurate answers. Here is a screenshot from my testing:
