Inconsistent Query Results Based on Output Fields Selection in Milvus Dashboard


Problem background:

I'm experiencing an issue with the Milvus dashboard where the search results change based on the selected output fields.

I'm working on a RAG project using text data converted into embeddings, stored in a Milvus collection with around 8000 elements. Last week my retrieval results matched my expectations ("good" results); this week, however, the results have degraded ("bad" results).

I found that when I exclude the embeddings_vector field from the output fields in the Milvus dashboard, I get the "good" results; including the embeddings_vector field in the output changes the results to "bad".

I've attached two screenshots showing the difference in the results based on the selected output fields.

Any ideas on what's causing this or how to fix it?

Environment:

  • Python 3.11
  • pymilvus 2.3.2
  • llama_index 0.8.64

Thanks in advance!

```python
from llama_index.vector_stores import MilvusVectorStore
from llama_index import ServiceContext, VectorStoreIndex

# Some other lines..

# Setup for MilvusVectorStore and query execution
vector_store = MilvusVectorStore(uri=MILVUS_URI,
                                 token=MILVUS_API_KEY,
                                 collection_name=collection_name,
                                 embedding_field='embeddings_vector',
                                 doc_id_field='chunk_id',
                                 similarity_metric='IP',
                                 text_key='chunk_text')

embed_model = get_embeddings()
service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=llm)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store, service_context=service_context)
query_engine = index.as_query_engine(similarity_top_k=5, streaming=True)

rag_result = query_engine.query(prompt)
```

Here is the "good" result (screenshot), and here is the "bad" result (screenshot).

Solution:

I would suggest working through the following considerations.

  • Ensure that your Milvus collection is correctly indexed. Indexing plays a crucial role in how search results are retrieved and ordered. If the index configuration has changed or is not optimized, it might affect the retrieval quality.
  • In your screenshots, the consistency level is set to "Bounded". Try experimenting with different consistency levels (e.g., "Strong" or "Eventually") to see if it impacts the results. Consistency settings can influence the real-time availability of the indexed data.
  • Review the query parameters, especially the similarity_metric. Since you're using IP (Inner Product) as the similarity metric, ensure that your embedding vectors are normalized correctly. Inner Product search works best with normalized vectors.
  • Verify that the embedding vectors are of consistent quality and scale. If there were changes in the embedding model or preprocessing steps, it could lead to variations in the search results.
  • The inclusion of the embeddings_vector field in the output might affect the way Milvus scores and ranks the results. It's possible that returning the raw embeddings affects the internal ranking logic. Ensure that including this field does not inadvertently alter the search behavior.
  • Check the Milvus server logs and performance metrics to identify any anomalies or changes in the search behavior. This might provide insights into why the results differ when the embeddings_vector field is included.
  • Ensure that there are no version mismatches between the client (pymilvus) and the Milvus server. Sometimes, discrepancies between versions can cause unexpected behavior.
  • As a last resort, try modifying your code to exclude the embeddings_vector field programmatically during retrieval and compare the results. This can help isolate whether the issue is indeed caused by including the embeddings in the output.
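To isolate the output-fields effect (last bullet) outside the dashboard, one approach is to build a single set of search parameters and run it twice, once with and once without the vector field in `output_fields`, so everything else is held constant. This is a hedged sketch of the keyword arguments you would pass to pymilvus's `Collection.search()`; the field names (`embeddings_vector`, `chunk_id`, `chunk_text`) come from the question's schema, and `nprobe=16` is an arbitrary placeholder you should replace with your index's actual search params:

```python
def build_search_kwargs(query_vec, include_vector=False,
                        consistency_level="Strong", top_k=5):
    """Build kwargs for pymilvus Collection.search(), optionally adding
    the raw vector field to output_fields. Field names are assumptions
    based on the question's schema."""
    output_fields = ["chunk_id", "chunk_text"]
    if include_vector:
        output_fields.append("embeddings_vector")
    return {
        "data": [query_vec],
        "anns_field": "embeddings_vector",
        "param": {"metric_type": "IP", "params": {"nprobe": 16}},
        "limit": top_k,
        "output_fields": output_fields,
        "consistency_level": consistency_level,
    }
```

Calling `collection.search(**build_search_kwargs(vec))` and then `collection.search(**build_search_kwargs(vec, include_vector=True))` and comparing the returned hit IDs and distances tells you whether the difference reproduces outside the dashboard at all.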
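On the IP normalization point: with `metric_type='IP'`, un-normalized vectors are ranked by magnitude as well as direction, so a change in the embedding pipeline that quietly stops normalizing can reorder results from one week to the next. A minimal pure-Python check and fix, assuming embeddings arrive as plain float lists:

```python
import math

def l2_normalize(vec):
    """Return vec scaled to unit L2 norm (zero vectors pass through)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else vec

def is_normalized(vec, tol=1e-6):
    """True if vec already has unit L2 norm within tol."""
    return abs(math.sqrt(sum(x * x for x in vec)) - 1.0) <= tol
```

If `is_normalized` returns False for a sample of your stored vectors, either re-ingest normalized embeddings or, if your Milvus version supports it, use a metric that is magnitude-invariant (e.g. COSINE, available in Milvus 2.3+) instead of IP.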
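Finally, when comparing the "good" and "bad" runs, it helps to pin down exactly where the two rankings diverge rather than eyeballing screenshots. A small helper (hypothetical, not part of pymilvus) that takes the ranked hit IDs from each run:

```python
def compare_rankings(ids_a, ids_b):
    """Return the 1-based rank at which two ranked hit-ID lists first
    differ, or 0 if they are identical."""
    for rank, (a, b) in enumerate(zip(ids_a, ids_b), start=1):
        if a != b:
            return rank
    if len(ids_a) != len(ids_b):
        return min(len(ids_a), len(ids_b)) + 1
    return 0
```

If the lists diverge only at the tail, you are likely looking at near-tied scores (where consistency level or index recall can matter); a divergence at rank 1 points to something more fundamental, such as the normalization or metric issues above.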