LangChain: vector storage and retrieval with Elasticsearch

Candidate Chinese embedding models:

1. sentence-transformers/all-MiniLM-L6-v2: produces 384-dimensional vectors and supports multiple languages.

2. BAAI/bge-m3: a multilingual model that accepts inputs of up to 8192 tokens.

from langchain_community.embeddings import HuggingFaceBgeEmbeddings

# Load the embedding model on CPU and normalize the output vectors
model_name = "sentence-transformers/all-MiniLM-L6-v2"
model_kwargs = {"device": "cpu"}
encode_kwargs = {"normalize_embeddings": True}

embeddings = HuggingFaceBgeEmbeddings(
    model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
)
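
As a quick sanity check (a small usage sketch, not part of the original post), you can embed a query and confirm the vector has the expected 384 dimensions:

query_vector = embeddings.embed_query("hello world")
print(len(query_vector))  # expected: 384 for all-MiniLM-L6-v2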

1. Storage backend: Elasticsearch

from typing import Dict, Iterable

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk
from langchain_core.embeddings import Embeddings
from langchain_elasticsearch import ElasticsearchRetriever

# Connect to Elasticsearch and verify the connection
es_url = "http://user:password@localhost:9200"
es_client = Elasticsearch(hosts=[es_url])
es_client.info()

index_name = "test-langchain-retriever"
text_field = "text"
dense_vector_field = "fake_embedding"
num_characters_field = "num_characters"

texts = [
    "foo",
    "bar",
    "world",
    "hello world",
    "hello",
    "foo bar",
    "bla bla foo",
]


def create_index(
    es_client: Elasticsearch,
    index_name: str,
    text_field: str,
    dense_vector_field: str,
    num_characters_field: str,
):
    # Index with a text field, a dense_vector field, and an integer field
    es_client.indices.create(
        index=index_name,
        mappings={
            "properties": {
                text_field: {"type": "text"},
                dense_vector_field: {"type": "dense_vector"},
                num_characters_field: {"type": "integer"},
            }
        },
    )


def index_data(
    es_client: Elasticsearch,
    index_name: str,
    text_field: str,
    dense_vector_field: str,
    num_characters_field: str,
    embeddings: Embeddings,
    texts: Iterable[str],
    refresh: bool = True,
) -> None:
    create_index(
        es_client, index_name, text_field, dense_vector_field, num_characters_field
    )

    # Embed all texts and bulk-index one document per text
    vectors = embeddings.embed_documents(list(texts))
    requests = [
        {
            "_op_type": "index",
            "_index": index_name,
            "_id": i,
            text_field: text,
            dense_vector_field: vector,
            num_characters_field: len(text),
        }
        for i, (text, vector) in enumerate(zip(texts, vectors))
    ]
    bulk(es_client, requests)

    if refresh:
        es_client.indices.refresh(index=index_name)


index_data(
    es_client,
    index_name,
    text_field,
    dense_vector_field,
    num_characters_field,
    embeddings,
    texts,
)
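
The mapping above relies on Elasticsearch's defaults for the dense_vector field. As an optional refinement (a sketch based on standard Elasticsearch 8.x mapping options, not part of the original post), the field can be declared with explicit dimensions and similarity so it matches the 384-dimensional, normalized MiniLM embeddings:

es_client.indices.create(
    index="test-langchain-retriever-explicit",  # hypothetical index name
    mappings={
        "properties": {
            "text": {"type": "text"},
            "fake_embedding": {
                "type": "dense_vector",
                "dims": 384,             # all-MiniLM-L6-v2 output size
                "index": True,           # build an HNSW index for approximate kNN
                "similarity": "cosine",  # matches normalize_embeddings=True
            },
            "num_characters": {"type": "integer"},
        }
    },
)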

2. Vector retrieval with Elasticsearch:

es_url = "http://user:password@localhost:9200"

index_name = "test-langchain-retriever"
text_field = "text"
dense_vector_field = "fake_embedding"
num_characters_field = "num_characters"


def vector_query(search_query: str) -> Dict:
    # Build a kNN search body, using the same embeddings as at indexing time
    vector = embeddings.embed_query(search_query)
    return {
        "knn": {
            "field": dense_vector_field,
            "query_vector": vector,
            "k": 5,
            "num_candidates": 10,
        }
    }


vector_retriever = ElasticsearchRetriever.from_es_params(
    index_name=index_name,
    body_func=vector_query,
    content_field=text_field,
    url=es_url,
)

vector_retriever.invoke("foo")
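
For reference (a small usage sketch, not in the original post), the retriever returns LangChain Document objects whose page_content comes from content_field, while metadata carries the raw Elasticsearch hit, including the stored vector in _source:

docs = vector_retriever.invoke("foo")
for doc in docs:
    # metadata holds the raw hit: _index, _id, _score and the full _source
    print(doc.page_content, doc.metadata)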

Note: this plain vector search is relatively slow, for two reasons:

1. Cosine similarity is computed against the whole index, with no optimization applied.

2. The response returns the full document source, including the stored vectors.
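
The second point can be addressed in the search body itself. A minimal sketch (assuming ElasticsearchRetriever passes the body returned by body_func straight to the Elasticsearch search API, as in the example above; lean_vector_query is a hypothetical variant of vector_query) excludes the dense-vector field from the returned _source:

def lean_vector_query(search_query: str) -> Dict:
    # Same kNN clause as before, but do not return the stored vectors
    vector = embeddings.embed_query(search_query)
    return {
        "knn": {
            "field": dense_vector_field,
            "query_vector": vector,
            "k": 5,
            "num_candidates": 10,
        },
        "_source": {"excludes": [dense_vector_field]},
    }


lean_retriever = ElasticsearchRetriever.from_es_params(
    index_name=index_name,
    body_func=lean_vector_query,
    content_field=text_field,
    url=es_url,
)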
