wangting0128 closed this issue 2 years ago
Duplicate of #16041
It might not be a duplicate, but since it concerns the reason for the growing segment, keep it open.
/assign @longjiquan
@wangting0128 please retry with the latest master
argo task: benchmark-backup-jqlbs
test yaml: client-configmap:client-random-locust-1m server-configmap:server-cluster-8c32m
server:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
benchmark-backup-jqlbs-1-etcd-0 1/1 Running 0 17m 10.97.16.116 qa-node013.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-etcd-1 1/1 Running 0 17m 10.97.17.47 qa-node014.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-etcd-2 1/1 Running 0 17m 10.97.16.118 qa-node013.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-milvus-datacoord-5fbc47f5df-nv86g 1/1 Running 0 17m 10.97.3.79 qa-node001.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-milvus-datanode-779ff54b6d-2z2tc 1/1 Running 0 17m 10.97.17.44 qa-node014.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-milvus-indexcoord-65c59c474b-wxjfq 1/1 Running 0 17m 10.97.10.155 qa-node008.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-milvus-indexnode-7d9c9bcd59-gwlzq 1/1 Running 0 17m 10.97.16.114 qa-node013.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-milvus-proxy-5f7d55dc56-p2lm8 1/1 Running 0 17m 10.97.10.154 qa-node008.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-milvus-querycoord-869f5c44b9-ptjbb 1/1 Running 0 17m 10.97.10.153 qa-node008.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-milvus-querynode-66f99d565c-ksktr 1/1 Running 0 17m 10.97.20.154 qa-node018.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-milvus-rootcoord-589ffffffc-5277j 1/1 Running 0 17m 10.97.10.152 qa-node008.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-minio-0 1/1 Running 0 17m 10.97.12.62 qa-node015.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-minio-1 1/1 Running 0 17m 10.97.19.64 qa-node016.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-minio-2 1/1 Running 0 17m 10.97.19.62 qa-node016.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-minio-3 1/1 Running 0 17m 10.97.12.64 qa-node015.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-bookie-0 1/1 Running 0 17m 10.97.5.64 qa-node003.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-bookie-1 1/1 Running 0 17m 10.97.19.68 qa-node016.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-bookie-2 1/1 Running 0 17m 10.97.18.220 qa-node017.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-bookie-init-dw65t 0/1 Completed 0 17m 10.97.3.78 qa-node001.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-broker-0 1/1 Running 0 17m 10.97.3.77 qa-node001.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-proxy-0 1/1 Running 0 17m 10.97.9.32 qa-node007.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-pulsar-init-h4tdf 0/1 Completed 0 17m 10.97.9.31 qa-node007.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-recovery-0 1/1 Running 0 17m 10.97.12.60 qa-node015.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-zookeeper-0 1/1 Running 0 17m 10.97.9.34 qa-node007.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-zookeeper-1 1/1 Running 0 16m 10.97.11.232 qa-node009.zilliz.local <none> <none>
benchmark-backup-jqlbs-1-pulsar-zookeeper-2 1/1 Running 0 16m 10.97.16.120 qa-node013.zilliz.local <none> <none>
client pod: benchmark-backup-jqlbs-22284265
client data: [1653037552]
@wangting0128 Multithreading is not well supported by Python gRPC; maybe you can try using multiprocessing instead.
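For illustration, a minimal multiprocessing sketch, assuming a pymilvus-style client where each worker process opens its own connection; the collection name, schema, and vector dimensions below are hypothetical placeholders, not the actual benchmark code:

```python
import multiprocessing as mp
import random

from pymilvus import Collection, connections

def worker(proc_id: int, n_batches: int) -> None:
    # Each process has its own interpreter and its own gRPC connection,
    # so inserts are not serialized behind other calls sharing one GIL.
    alias = f"proc-{proc_id}"
    connections.connect(alias=alias, host="127.0.0.1", port="19530")
    coll = Collection("random_1m", using=alias)  # hypothetical collection
    for _ in range(n_batches):
        # 100 random 128-dim float vectors per batch (assumed schema).
        vectors = [[random.random() for _ in range(128)] for _ in range(100)]
        coll.insert([vectors])

if __name__ == "__main__":
    procs = [mp.Process(target=worker, args=(i, 10)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```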
The insert latency on the Proxy side was stable, as shown below:
I also tested this case using the Go SDK; the latency was stable there as well.
According to https://github.com/grpc/grpc/issues/20985, operations on the same gRPC connection share the GIL and therefore influence each other. So the latency of other APIs here, such as insert and load_collection, will track the search latency.
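To make that coupling concrete, here is a hedged sketch (the random_1m collection, vector field name, and search params are assumptions) that keeps a search loop running in one thread while timing inserts in another; under the GIL-sharing behavior described in grpc/grpc#20985, the insert timings would be expected to track the search latency:

```python
import random
import threading
import time

from pymilvus import Collection, connections

connections.connect(host="127.0.0.1", port="19530")
coll = Collection("random_1m")  # hypothetical collection
stop = threading.Event()

def search_loop() -> None:
    # Keep searches running on the shared connection in this process.
    while not stop.is_set():
        coll.search(data=[[0.0] * 128], anns_field="vector",
                    param={"metric_type": "L2", "params": {"nprobe": 16}},
                    limit=10)

def timed_inserts(n: int) -> None:
    # Time each insert while the search thread contends for the GIL.
    for _ in range(n):
        batch = [[[random.random() for _ in range(128)] for _ in range(10)]]
        t0 = time.perf_counter()
        coll.insert(batch)
        print(f"insert took {time.perf_counter() - t0:.3f}s")

threading.Thread(target=search_loop, daemon=True).start()
timed_inserts(20)
stop.set()
```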
So this might not be an issue for the Python SDK? Close for now? @longjiquan
Yes, not an issue for the Python SDK. Could you help check this? @wangting0128
/assign @wangting0128
Is there an existing issue for this?
Environment
Current Behavior
client pod: benchmark-tag-xwtx2-71121370
client data: [1644808606]
Expected Behavior
argo task: benchmark-tag-xwtx2
test yaml: client-configmap:client-random-locust-1m server-configmap:server-cluster-8c32m
server:
Steps To Reproduce
Anything else?
client-random-locust-1m: