wangting0128 closed this issue 1 year ago
/assign @longjiquan
standalone:
cluster:
argo task:fouram-9txhk
test yaml: client-configmap:client-random-locust-hnsw-search-filter-100m-ddl server-configmap:server-single-32c128m
server:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
fouram-9txhk-1-etcd-0 1/1 Running 0 102s 10.104.9.100 4am-node14 <none> <none>
fouram-9txhk-1-milvus-standalone-59456db974-ztczl 1/1 Running 0 102s 10.104.5.88 4am-node12 <none> <none>
fouram-9txhk-1-minio-6cd598d9f9-t4qvn 1/1 Running 0 102s 10.104.6.43 4am-node13 <none> <none>
monitor:
tests only with load/query:
tests only with scene_test:
In both cases, the memory eventually becomes stable.
server-instance fouram-tag-no-clean-ksrxv-1 server-configmap server-single-4c8m client-configmap client-random-locust-search-filter-10w-onddl
master-20221019-52cd40fb 2.2.0.dev42
fouram-tag-no-clean-ksrxv-1-etcd-0 1/1 Running 0 116s 10.104.6.187 4am-node13 <none> <none>
fouram-tag-no-clean-ksrxv-1-milvus-standalone-dd96f57cc-mcknk 1/1 Running 0 116s 10.104.6.186 4am-node13 <none> <none>
fouram-tag-no-clean-ksrxv-1-minio-69888ddbd-vqz2j 1/1 Running 0 116s 10.104.6.185 4am-node13 <none> <none>
memory:
data:
  config.yaml: |
    locust_random_performance:
      collections:
        -
          collection_name: sift_10w_128_l2
          other_fields: float1
          ni_per: 50000
          build_index: true
          index_type: ivf_sq8
          index_param:
            nlist: 2048
      task:
        types:
          -
            type: scene_test
            weight: 10
        connection_num: 1
        clients_num: 20
        spawn_rate: 2
        during_time: 24h
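The `task` section above drives weighted task selection: each client repeatedly picks a task type with probability proportional to its weight. A minimal Python sketch of that idea (the names `TASK_TYPES` and `pick_task` are illustrative, not the fouram client's actual code):

```python
import random

# Hypothetical task table mirroring the config above: (task_type, weight).
# With a single type of weight 10 it is always chosen; adding more types
# would split traffic proportionally to their weights.
TASK_TYPES = [("scene_test", 10)]

def pick_task(rng: random.Random) -> str:
    """Pick one task type, with probability proportional to its weight."""
    names = [name for name, _ in TASK_TYPES]
    weights = [w for _, w in TASK_TYPES]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
picks = [pick_task(rng) for _ in range(5)]
print(picks)  # every pick is "scene_test", the only configured type
```

With more entries in `TASK_TYPES`, the same `pick_task` would mix task types in proportion to their weights, which is how a single `weight: 10` entry ends up running `scene_test` exclusively.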
However, if we disable the rocksdb block cache via SetNoBlockCache(true), heaptrack reports no memory leak.
This might not be an issue, since the rocksdb block cache takes 1 GB by default?
Yeah, we'll also set the cache size according to the total memory size & memory usage.
server-instance fouram-vn7sw-1 server-configmap server-single-4c8m client-configmap client-random-locust-search-filter-10w-onddl master-20221025-ec83bbf7 2.2.0.dev63
It has been verified that this memory-growth problem still exists; please fix it. @longjiquan
Already verified on branch test-no-block-cache, commit e39befc.
/assign @jingkl
After verification, it is confirmed that the rocksdb block cache is holding the memory. This behavior matches expectations and should not be treated as a memory leak, nor even as an issue.
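The conclusion above can be illustrated with a toy bounded LRU cache (a sketch of the general block-cache idea, not RocksDB's implementation): memory grows while the cache fills its capacity, then plateaus instead of leaking, which is exactly the "rise then stabilize" curve observed in the tests.

```python
from collections import OrderedDict

class BoundedLRUCache:
    """Toy LRU cache with a byte capacity, standing in for a block cache."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self._blocks = OrderedDict()  # block_id -> size in bytes

    def insert(self, block_id: int, size: int) -> None:
        if block_id in self._blocks:
            self._blocks.move_to_end(block_id)  # refresh recency
            return
        # Evict least-recently-used blocks until the new one fits.
        while self.used + size > self.capacity and self._blocks:
            _, evicted = self._blocks.popitem(last=False)
            self.used -= evicted
        self._blocks[block_id] = size
        self.used += size

cache = BoundedLRUCache(capacity_bytes=1 << 20)  # 1 MiB cap
for block_id in range(10_000):                   # ~40 MiB of distinct reads
    cache.insert(block_id, 4096)                 # 4 KiB blocks
print(cache.used)  # stays pinned at the 1 MiB cap: 1048576
```

A heap profiler sees the cached blocks as live allocations, so the plateau can look like a leak even though the cache would evict or release them under pressure.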
This issue has been verified and the behaviour is as expected, so closing the issue.
Is there an existing issue for this?
Environment
Current Behavior
argo task: fouram-2bn8t
client yaml: client-configmap: client-random-locust-search-filter-100m-ddl-6d server-configmap: server-single-32c128m
client pod: fouram-2bn8t-3147776694
server:
Memory increased by 3 GB in an hour.
Expected Behavior
Memory usage remains stable without significant fluctuations
Steps To Reproduce
Milvus Log
No response
Anything else?
client-random-locust-search-filter-100m-ddl-6d: