LangChain: Elasticsearch-based vector storage and retrieval

Candidate embedding models for Chinese text:

1. sentence-transformers/all-MiniLM-L6-v2 — produces 384-dimensional vectors; primarily an English model, with only limited coverage of other languages.

2. BAAI/bge-m3 — multilingual model; supports input lengths up to 8192 tokens.
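Because bge-m3 caps input at 8192 tokens, longer passages need to be split before embedding. A minimal stdlib sketch of such a guard, assuming a rough 4-characters-per-token heuristic (a real pipeline should count with the model's own tokenizer instead):

```python
def split_for_embedding(text: str, max_tokens: int = 8192,
                        chars_per_token: int = 4) -> list[str]:
    """Split text into chunks that stay under an assumed token budget.

    chars_per_token is a crude heuristic, not a real token count;
    swap in the model tokenizer for exact limits.
    """
    max_chars = max_tokens * chars_per_token
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)] or [""]
```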

from langchain_community.embeddings import HuggingFaceBgeEmbeddings

model_name = "sentence-transformers/all-MiniLM-L6-v2"
model_kwargs = {"device": "cpu"}
encode_kwargs = {"normalize_embeddings": True}  # unit-length vectors: cosine == dot product

# Note: HuggingFaceBgeEmbeddings prepends a BGE-specific query instruction;
# for a non-BGE model such as all-MiniLM-L6-v2, the plain HuggingFaceEmbeddings
# class from langchain_community is the more natural fit.
embeddings = HuggingFaceBgeEmbeddings(
    model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
)
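The point of `normalize_embeddings=True` is that unit-length vectors make cosine similarity collapse into a plain dot product, which is cheaper to compute at search time. A stdlib sketch of that equivalence (illustrative vectors, not real model output):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a = normalize([1.0, 2.0, 3.0])
b = normalize([2.0, 1.0, 0.5])
# on unit vectors the two scores coincide
assert abs(dot(a, b) - cosine(a, b)) < 1e-9
```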

1. Storing data in Elasticsearch

from typing import Dict, Iterable

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk
from langchain_core.embeddings import Embeddings
from langchain_elasticsearch import ElasticsearchRetriever

es_url = "http://user:password@localhost:9200"
es_client = Elasticsearch(hosts=[es_url])
es_client.info()

index_name = "test-langchain-retriever"
text_field = "text"
dense_vector_field = "fake_embedding"
num_characters_field = "num_characters"

texts = [
    "foo",
    "bar",
    "world",
    "hello world",
    "hello",
    "foo bar",
    "bla bla foo",
]

def create_index(
    es_client: Elasticsearch,
    index_name: str,
    text_field: str,
    dense_vector_field: str,
    num_characters_field: str,
):
    es_client.indices.create(
        index=index_name,
        mappings={
            "properties": {
                text_field: {"type": "text"},
                dense_vector_field: {"type": "dense_vector"},
                num_characters_field: {"type": "integer"},
            }
        },
    )

def index_data(
    es_client: Elasticsearch,
    index_name: str,
    text_field: str,
    dense_vector_field: str,
    num_characters_field: str,
    embeddings: Embeddings,
    texts: Iterable[str],
    refresh: bool = True,
) -> None:
    create_index(
        es_client, index_name, text_field, dense_vector_field, num_characters_field
    )

    vectors = embeddings.embed_documents(list(texts))
    requests = [
        {
            "_op_type": "index",
            "_index": index_name,
            "_id": i,
            text_field: text,
            dense_vector_field: vector,
            num_characters_field: len(text),
        }
        for i, (text, vector) in enumerate(zip(texts, vectors))
    ]
    bulk(es_client, requests)
    if refresh:
        es_client.indices.refresh(index=index_name)

index_data(
    es_client, index_name, text_field, dense_vector_field,
    num_characters_field, embeddings, texts,
)

2. Vector retrieval from Elasticsearch

es_url = "http://user:password@localhost:9200"

index_name = "test-langchain-retriever"
text_field = "text"
dense_vector_field = "fake_embedding"
num_characters_field = "num_characters"

def vector_query(search_query: str) -> Dict:
    # must use the same embeddings object as at indexing time
    vector = embeddings.embed_query(search_query)
    return {
        "knn": {
            "field": dense_vector_field,
            "query_vector": vector,
            "k": 5,
            "num_candidates": 10,
        }
    }

vector_retriever = ElasticsearchRetriever.from_es_params(
    index_name=index_name,
    body_func=vector_query,
    content_field=text_field,
    url=es_url,
)

vector_retriever.invoke("foo")

Note: this plain vector search is relatively slow. Two reasons:

1. Cosine similarity is computed directly against every vector in the index (a brute-force cos comparison), with no indexing optimization applied.
2. The full vector content is returned with every hit, which inflates the response size.
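Both problems can be addressed in the index mapping: enabling `index: true` on the `dense_vector` field builds an HNSW graph so kNN search becomes approximate rather than brute-force, and a `_source` exclude keeps the raw vector out of search responses. A hedged sketch of such a mapping (field names follow the example above; `dims: 384` assumes the MiniLM model; applying it requires a live cluster):

```python
index_body = {
    "mappings": {
        # don't return the stored vector with each hit
        "_source": {"excludes": ["fake_embedding"]},
        "properties": {
            "text": {"type": "text"},
            "fake_embedding": {
                "type": "dense_vector",
                "dims": 384,             # must match the embedding model's output size
                "index": True,           # build an HNSW graph for approximate kNN
                "similarity": "cosine",
            },
            "num_characters": {"type": "integer"},
        },
    }
}
# es_client.indices.create(index=index_name, **index_body)  # needs a running cluster
```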
