Open · NicoYuan1986 opened 12 hours ago
Is there an existing issue for this?

Environment

- Milvus version: 19572f5
- Deployment mode (standalone or cluster): cluster
- MQ type (rocksmq, pulsar or kafka): kafka
- SDK version (e.g. pymilvus v2.0.0rc2):
- OS (Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:

Current Behavior

Bulk insert fails frequently with a timeout when all fields are inserted from numpy files.
Link: https://jenkins.milvus.io:18080/blue/organizations/jenkins/Milvus%20Nightly%20CI(new)/detail/master/189/pipeline/138

From the pytest log:

[pytest : test]     # import data
[pytest : test]     t0 = time.time()
[pytest : test]     task_id, _ = self.utility_wrap.do_bulk_insert(
[pytest : test]         collection_name=c_name, files=files
[pytest : test]     )
[pytest : test]     logging.info(f"bulk insert task ids:{task_id}")
[pytest : test] >   success, states = self.utility_wrap.wait_for_bulk_insert_tasks_completed(
[pytest : test]         task_ids=[task_id], timeout=300
[pytest : test]     )

The timeout is 300 s, and the import is only 2000 entities.
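For context, the `self.utility_wrap.do_bulk_insert` and `wait_for_bulk_insert_tasks_completed` calls in the log are test-framework wrappers around the public pymilvus bulk-insert API. Below is a minimal sketch of the same flow with plain pymilvus; the host/port, collection name, and .npy file names are placeholders, and the polling loop is an assumption standing in for the test helper's 300 s wait.

```python
import time
from pymilvus import connections, utility, BulkInsertState

connections.connect(host="localhost", port="19530")

# Hypothetical input: one .npy file per field of an existing collection.
files = ["id.npy", "float_vector.npy"]

# Start the bulk insert; utility.do_bulk_insert returns a task id.
task_id = utility.do_bulk_insert(collection_name="bulk_insert_demo", files=files)

# Poll the task state, mirroring the test's 300 s timeout.
deadline = time.time() + 300
while time.time() < deadline:
    state = utility.get_bulk_insert_state(task_id=task_id)
    if state.state == BulkInsertState.ImportCompleted:
        print(f"bulk insert finished, imported {state.row_count} rows")
        break
    if state.state in (BulkInsertState.ImportFailed,
                       BulkInsertState.ImportFailedAndCleaned):
        raise RuntimeError(f"bulk insert failed: {state.failed_reason}")
    time.sleep(2)
else:
    raise TimeoutError("bulk insert task did not complete within 300 s")
```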
Expected Behavior
pass
Steps To Reproduce
No response
Milvus Log
No response
Anything else?

No response

/assign @xiaocai2333
/unassign