ThreadDao opened this issue 4 months ago
/assign @congqixia /unassign
@XuanYang-cn @congqixia
```python
c.query('id >=0', output_fields=["count(*)"])
# data: ["{'count(*)': 9910041}"] ..., extra_info: {'cost': 0}

# 9910041 entities * 128 dims * 4 bytes, in GiB:
9910041*128*4/1024/1024/1024
# 4.725475788116455
```
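For clarity, the arithmetic above is a back-of-envelope estimate of the raw vector footprint, assuming a 128-dim float32 vector field (which is what the 128 and 4 in the calculation imply):

```python
# Back-of-envelope raw vector memory (assumption: 128-dim float32 vectors).
entities = 9910041
dim = 128
bytes_per_float32 = 4
gib = entities * dim * bytes_per_float32 / 1024**3
print(round(gib, 2))  # ~4.73 GiB of raw vector data, before any index overhead
```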
```python
c.query('id >=0', output_fields=["count(*)"])
# data: ["{'count(*)': 9910041}"] ..., extra_info: {'cost': 0}
c.release()
186/24
# 7.75
```
```python
c.query('id >=0', output_fields=["count(*)"])
# data: ["{'count(*)': 10292193}"] ..., extra_info: {'cost': 0}
c.query('id >=0', output_fields=["count(*)"])
# data: ["{'count(*)': 10292193}"] ..., extra_info: {'cost': 0}
c.query('id >=0', output_fields=["count(*)"], consistency_level="Strong")
# data: ["{'count(*)': 10292193}"] ..., extra_info: {'cost': 0}
```
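For reference, a tiny helper (hypothetical, not a pymilvus API) quantifies the gap between the count before release (9910041) and after reload (10292193):

```python
# Hypothetical helper: measure the count(*) discrepancy between two runs.
def count_discrepancy(before: int, after: int) -> tuple[int, float]:
    """Return (absolute difference, relative difference vs. 'before')."""
    diff = after - before
    return diff, diff / before

diff, rel = count_discrepancy(9910041, 10292193)
print(diff, f"{rel:.2%}")  # 382152 3.86%
```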
Time-range reference Grafana links: metrics of compact-opt-mem second-reload
The count(*) part might be related to #33955.
@XuanYang-cn @congqixia 2. release-load can fix it; however, reloading 9910041 entities takes almost 3 hours.
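A rough throughput estimate for that reload (assuming the ~3-hour figure reported above):

```python
# Rough reload throughput (assumption: 9910041 entities in ~3 hours).
entities = 9910041
seconds = 3 * 3600
rate = entities / seconds
print(round(rate))  # roughly 918 entities/s
```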
This seems like a known issue: when the target changes quickly and the load keeps moving forward, it will wait for a long time.
@XuanYang-cn Any updates on this? Is there a fix for the 2.4 branch?
Is this still reproducing? /assign @ThreadDao /unassign @XuanYang-cn @congqixia
Is there an existing issue for this?
Environment
Current Behavior
1. deploy Milvus with image: milvus-io-2.4-eeba851-20240612
2. test steps (fouram_qb77Q7fh):
   -> index
   -> upgrade image to 2.4-20240614-5fc1370f-amd64
   -> query failed
Expected Behavior
No response
Steps To Reproduce
Milvus Log
pods:
Anything else?
No response