ThreadDao opened this issue 2 days ago
/assign @XuanYang-cn Please help investigate
Actually, L0 compaction executes so fast that the compaction task num metric increments and decrements within 30s. The spike is therefore not visible in the compaction task num metric, but the latency metrics and the logs can prove it.
It triggers and executes quickly, but the number of L0 segments cannot be controlled.
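The sampling gap described above can be illustrated with a minimal sketch (plain Go, not the actual Prometheus metric code in Milvus): a gauge that is incremented and then decremented between two 30s scrapes reads 0 at both scrape points, so a short-lived L0 compaction task never shows up in the metric.

```go
package main

import "fmt"

// simulateScrapes models a task-count gauge that a metrics scraper
// reads every 30s, while an L0 compaction task starts and finishes
// entirely between two scrapes.
func simulateScrapes() []int {
	gauge := 0
	readings := []int{}

	readings = append(readings, gauge) // t=0s: first scrape
	gauge++                            // t=5s: L0 compaction task starts
	gauge--                            // t=10s: task finishes, well under the 30s interval
	readings = append(readings, gauge) // t=30s: next scrape

	return readings
}

func main() {
	// Both scrapes read 0: the short-lived task is invisible to the gauge,
	// which is why the latency histogram and logs are needed as evidence.
	fmt.Println(simulateScrapes())
}
```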
Picked 2 segments out of 37 segments. Some config changes might be needed for varchar.
For a UUID string (36 characters), the actual size of the PrimaryKey is 7 times the expected size.
```
=== RUN   TestVarCharPrimaryKey/size
    primary_key_test.go:19:
        Error Trace: /home/yangxuan/Github/milvus/internal/storage/primary_key_test.go:19
        Error:       Not equal:
                     expected: int(44)
                     actual  : int64(296)
        Test:        TestVarCharPrimaryKey/size
        Messages:    uuid: f99f07ce-b546-4639-a24a-013929475a99
```
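As a rough illustration of why an in-memory varchar key can weigh far more than its raw byte length, here is a minimal Go sketch. The `varCharPK` struct and `estimateSize` function are hypothetical stand-ins, not the actual Milvus `storage` types, and they do not reproduce the exact 296-byte figure from the failing test: the point is only that a Go string field alone carries a 16-byte header on 64-bit platforms before any wrapper structs or interface boxing are counted.

```go
package main

import (
	"fmt"
	"unsafe"
)

// varCharPK is a hypothetical mirror of a varchar primary-key wrapper;
// the real Milvus type carries additional fields and overhead.
type varCharPK struct {
	Value string
}

// estimateSize counts the struct itself (which includes the string
// header: pointer + length, 16 bytes on 64-bit) plus the payload bytes
// that the string points at.
func estimateSize(pk varCharPK) int {
	return int(unsafe.Sizeof(pk)) + len(pk.Value)
}

func main() {
	uuid := "f99f07ce-b546-4639-a24a-013929475a99" // 36 bytes of payload
	fmt.Println(len(uuid), estimateSize(varCharPK{Value: uuid}))
}
```

Even this minimal wrapper is already larger than the payload; any extra fields, interface values, or per-key allocations in the real type widen the gap further, which is consistent with the measured size being a multiple of the expected one.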
### Is there an existing issue for this?

### Environment

### Current Behavior

**server config**

qn: 5*8c32g

**test steps**

**test results**
**querynode memory usage** During the target update, the querynode memory fluctuated by 30%, about 10 GiB. Please help confirm whether this is in line with expectations, and whether it can be optimized. FYI, the levelZeroForwardPolicy is RemoteLoad and the segment maxSize is 2048.
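For reference, the two settings mentioned above are configured in `milvus.yaml`. The key paths below are my best guess at the layout in a recent 2.4.x config and should be verified against the deployed version:

```yaml
queryNode:
  # How L0 deletes are forwarded to sealed segments; RemoteLoad uploads
  # the delete data to remote storage instead of filtering and forwarding.
  levelZeroForwardPolicy: RemoteLoad
dataCoord:
  segment:
    maxSize: 2048   # max sealed-segment size in MB
```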
**compaction trigger** The compaction was triggered 18 minutes after the deletion started. Why does it take so long?
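If the 18-minute delay comes from trigger thresholds rather than a bug, the L0 compaction trigger knobs in `milvus.yaml` may be worth checking. The key names and defaults below are assumptions based on a recent `milvus.yaml` and may differ in your version:

```yaml
dataCoord:
  compaction:
    levelzero:
      forceTrigger:
        minSize: 8388608     # accumulated delta-log bytes that force an L0 trigger
        deltalogMinNum: 10   # min number of delta logs that force an L0 trigger
```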
### Expected Behavior

No response

### Steps To Reproduce

No response

### Milvus Log

pods:

### Anything else?

No response