Inconsistent Query Results Based on Output Fields Selection in Milvus Dashboard

**Question:** Inconsistent query results in the Milvus dashboard depending on the selected output fields

Problem background:

I'm experiencing an issue with the Milvus dashboard where the search results change based on the selected output fields.

I'm working on a RAG project using text data converted into embeddings, stored in a Milvus collection with around 8000 elements. Last week, my retrieval results matched my expectations ("good" results); this week, however, the results have degraded ("bad" results).

I found that when I exclude the embeddings_vector field from the output fields in the Milvus dashboard, I get the "good" results; including the embeddings_vector field in the output changes the results to "bad".

I've attached two screenshots showing the difference in the results based on the selected output fields.

Any ideas on what's causing this or how to fix it?

Environment:

  • Python 3.11
  • pymilvus 2.3.2
  • llama_index 0.8.64

Thanks in advance!

```python
from llama_index.vector_stores import MilvusVectorStore
from llama_index import ServiceContext, VectorStoreIndex

# Some other lines..

# Setup for MilvusVectorStore and query execution
vector_store = MilvusVectorStore(uri=MILVUS_URI,
                                 token=MILVUS_API_KEY,
                                 collection_name=collection_name,
                                 embedding_field='embeddings_vector',
                                 doc_id_field='chunk_id',
                                 similarity_metric='IP',
                                 text_key='chunk_text')

embed_model = get_embeddings()
service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=llm)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store, service_context=service_context)
query_engine = index.as_query_engine(similarity_top_k=5, streaming=True)

rag_result = query_engine.query(prompt)

```

Here is the "good" result: [screenshot: "good" result]. And here is the "bad" result: [screenshot: "bad" result].

Solution:

I would suggest checking the following points; short code sketches for several of them follow after this list.

  • Ensure that your Milvus collection is correctly indexed. Indexing plays a crucial role in how search results are retrieved and ordered; if the index configuration has changed or is not optimized, it can affect retrieval quality (see the index-inspection sketch below).
  • In your screenshots, the consistency level is set to "Bounded". Try different consistency levels (e.g., "Strong" or "Eventually") to see whether they impact the results; the consistency level controls how up to date the data visible to a search is.
  • Review the query parameters, especially the similarity_metric. Since you are using IP (Inner Product) as the similarity metric, ensure that your embedding vectors are normalized; Inner Product search only behaves as expected when both the stored and the query vectors are unit-length (see the normalization sketch below).
  • Verify that the embedding vectors are of consistent quality and scale. If the embedding model or preprocessing steps changed between last week and this week, that alone could explain the variation in search results.
  • Including the embeddings_vector field in the output might affect the way Milvus returns and ranks results; it is possible that requesting the raw embeddings interacts with the ranking logic. Verify that selecting this field does not inadvertently alter other search settings in the dashboard.
  • Check the Milvus server logs and performance metrics for anomalies or changes in search behavior; they may show why the results differ when the embeddings_vector field is included.
  • Ensure that there is no version mismatch between the client (pymilvus) and the Milvus server; discrepancies between versions can cause unexpected behavior (see the version-check sketch below).
  • As a last resort, exclude the embeddings_vector field programmatically during retrieval and compare the results. This helps isolate whether the issue is really caused by including the embeddings in the output; the search sketch at the end of this answer shows one way to do that, with an explicit consistency level.
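A minimal sketch for the indexing point, using pymilvus to inspect the index on the collection. The connection values (MILVUS_URI, MILVUS_API_KEY, collection_name) are the placeholders from the question's code, not verified settings.

```python
from pymilvus import Collection, connections, utility

# Reuse the connection details from the question's setup (placeholders).
connections.connect(uri=MILVUS_URI, token=MILVUS_API_KEY)

collection = Collection(collection_name)

# List every index on the collection with its field and parameters
# (index type, metric type, build params). A missing index on
# embeddings_vector, or a metric other than IP, would be a likely
# cause of degraded retrieval quality.
for idx in collection.indexes:
    print(idx.field_name, idx.params)

# Check that index building has finished (indexed_rows should equal total_rows).
print(utility.index_building_progress(collection_name))
```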
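For the Inner Product point, a small sketch of L2-normalizing vectors with numpy; both the stored embeddings and the query embedding need the same treatment, otherwise IP scores are dominated by vector magnitude rather than direction.

```python
import numpy as np

def l2_normalize(vec):
    """Scale a vector to unit length so IP search behaves like cosine similarity."""
    arr = np.asarray(vec, dtype=np.float32)
    norm = np.linalg.norm(arr)
    return arr if norm == 0.0 else arr / norm

# Example: normalize a query embedding before searching with metric_type="IP".
query_embedding = l2_normalize([0.12, -0.34, 0.56, 0.78])
print(np.linalg.norm(query_embedding))  # ~1.0
```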
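For the version-mismatch point, a quick way to compare the pymilvus client version with the server version over an open connection (same placeholder credentials as above).

```python
import pymilvus
from pymilvus import connections, utility

connections.connect(uri=MILVUS_URI, token=MILVUS_API_KEY)  # placeholders from the question

print("pymilvus client:", pymilvus.__version__)
print("Milvus server:", utility.get_server_version())
```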
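Finally, a hedged sketch of a direct pymilvus search that omits embeddings_vector from output_fields and pins the consistency level, so the results can be compared with what the dashboard returns. The field names (embeddings_vector, chunk_id, chunk_text) come from the question's code; query_embedding is assumed to be an already normalized query vector, for example from the sketch above.

```python
from pymilvus import Collection, connections

connections.connect(uri=MILVUS_URI, token=MILVUS_API_KEY)  # placeholders from the question
collection = Collection(collection_name)
collection.load()

results = collection.search(
    data=[query_embedding],                    # normalized query vector
    anns_field="embeddings_vector",            # vector field from the question's schema
    param={"metric_type": "IP", "params": {}},
    limit=5,                                   # matches similarity_top_k=5
    output_fields=["chunk_id", "chunk_text"],  # deliberately exclude embeddings_vector
    consistency_level="Strong",                # compare against the dashboard's "Bounded"
)

for hit in results[0]:
    print(hit.id, hit.distance, str(hit.entity.get("chunk_text"))[:80])
```

If the ranking from this direct search stays the same regardless of which output fields are requested, the discrepancy is specific to how the dashboard builds its query rather than to the collection or the data itself.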