Open yesyue opened 3 weeks ago
The title and description of this issue contains Chinese. Please use English to describe your issue.
Following the Sizing Tools, I allocated Data Nodes as 2 cores / 8 GB x 2 pods. However, during actual operation the Data Nodes hit an OOM, and after scaling up, memory usage reached 40 GB.
datanode log:
@yesyue please share more info about how you are using Milvus, e.g. what kinds of requests you send to Milvus, how many, and how frequently. Also please share the logs of all Milvus pods for investigation.
/assign @yesyue /unassign
100 million entities/day written to Milvus.
After I inserted 10M entities in total, the Milvus Docker container stopped and crashed. I use an IVF_SQ8 index and installed Milvus with GPU. I insert in batches of 10,000 (only inserting once 10,000 entities have accumulated).
After the crash I can't connect again and can't do anything. Any solution?
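The batch-insert pattern described above (buffer entities, insert only once a full batch of 10,000 is ready) can be sketched roughly as follows. This is a minimal illustration, not the reporter's actual code: `BatchInserter` is a hypothetical name, and `insert_fn` stands in for whatever performs the real insert (e.g. a pymilvus `Collection.insert` call).

```python
# Hypothetical sketch of the batching pattern: accumulate entities and
# call the insert function only when a full batch has been collected.
BATCH_SIZE = 10_000

class BatchInserter:
    def __init__(self, insert_fn, batch_size=BATCH_SIZE):
        self.insert_fn = insert_fn  # e.g. collection.insert in pymilvus
        self.batch_size = batch_size
        self.buffer = []
        self.inserted = 0

    def add(self, entity):
        """Buffer one entity; flush automatically when the batch is full."""
        self.buffer.append(entity)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Insert any buffered entities and clear the buffer."""
        if self.buffer:
            self.insert_fn(self.buffer)
            self.inserted += len(self.buffer)
            self.buffer = []
```

Note that with "only insert if enough 10,000 entities" semantics, a trailing partial batch stays in the buffer until `flush()` is called explicitly, so a final flush is needed at shutdown to avoid losing data.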
How much GPU memory do you have? Please open another issue with detailed logs so we can help.
1. Could you provide the log for the datanode?
I've seen you in many issues and we'd like to offer help. Feel free to contact me at xiaofan.luan@zilliz.com if necessary.
Is there an existing issue for this?
Environment
Current Behavior
Following the sizing tools, I allocated Data Nodes as 2 cores / 8 GB x 2 pods. An OOM occurred during actual operation, and after scaling up, memory usage reached 40 GB.
Expected Behavior
Following the sizing tools, I allocated Data Nodes as 2 cores / 8 GB x 2 pods. An OOM occurred during actual operation, and after scaling up, memory usage reached 40 GB.
Steps To Reproduce
Milvus Log
No response
Anything else?
No response