Open agandra30 opened 2 weeks ago
@Sheharyar570 Could you please share any experiences or advice you have regarding the build performance of PGVectorScale's DiskANN? We would greatly appreciate it!
Maybe we just need to give a user a timeout config?
I ran multiple tests with PGVectorScale using DiskANN, and one of the biggest problems is that VectorDBBench simply exits the run with an optimize timeout. This is especially true for PGVectorScale and PgVectoRS.
I know we can increase the timeout in the scripts, but has anyone observed or can anyone recommend configuration settings that complete the run within the 5-hour default timeout for the 10M Cohere 768-dimension dataset? We want a cross-comparison without editing the default timeouts for large datasets. Did any database complete within that timeout? (Milvus: yes, but the others?)
Error message:

```
2024-09-17 22:14:02,230 | WARNING: VectorDB optimize timeout in 18000 (task_runner.py:249) (3816719)
2024-09-17 22:14:02,274 | WARNING: Failed to run performance case, reason = Performance case optimize timeout (task_runner.py:191) (3816719)
Traceback (most recent call last):
  File "/root/vectordbbench_runs/lib/python3.12/site-packages/vectordb_bench/backend/task_runner.py", line 247, in _optimize
    return future.result(timeout=self.ca.optimize_timeout)[1]
```
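The traceback shows the mechanism: the optimize step runs in a worker, and the runner blocks on `future.result(timeout=...)`; when index creation outlasts the limit, a `TimeoutError` fires and the case is aborted. A minimal, self-contained reproduction of that pattern (the function and variable names here are illustrative, not VectorDBBench's own):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def build_index():
    # Stand-in for the long-running CREATE INDEX statement.
    time.sleep(2)
    return "index ready"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(build_index)
    try:
        # VectorDBBench's task runner does essentially this, with
        # timeout=self.ca.optimize_timeout (18000 s in the log above).
        outcome = future.result(timeout=0.1)
    except TimeoutError:
        outcome = "optimize timeout"

print(outcome)  # -> optimize timeout
```

Note that the worker itself keeps running after the timeout; only the benchmark gives up waiting, which is why the `CREATE INDEX` can still be seen in `pg_stat_activity` after the run aborts.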
Query:
```sql
CREATE INDEX IF NOT EXISTS "pgvectorscale_index"
ON public."pg_vectorscale_collection"
USING "diskann" (embedding "vector_cosine_ops")
WITH (
    "storage_layout" = "memory_optimized",
    "num_neighbors" = "50",
    "search_list_size" = "100",
    "max_alpha" = "1.2",
    "num_bits_per_dimension" = "2"
);
```

(pgvectorscale.py:200) (3935818)
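For context, the statement above is assembled by the benchmark from the case's index parameters. A hypothetical sketch of that kind of DDL templating (this is not VectorDBBench's actual `pgvectorscale.py` code, just an illustration of where those `WITH (...)` values come from):

```python
# Hypothetical sketch: build a DiskANN CREATE INDEX statement from a
# parameter dict (illustrative only, not VectorDBBench's implementation).
def diskann_index_ddl(table: str, column: str, params: dict) -> str:
    with_clause = ", ".join(f"{key} = {value}" for key, value in params.items())
    return (
        f'CREATE INDEX IF NOT EXISTS pgvectorscale_index '
        f'ON public."{table}" '
        f'USING diskann ({column} vector_cosine_ops) '
        f'WITH ({with_clause});'
    )

ddl = diskann_index_ddl(
    "pg_vectorscale_collection",
    "embedding",
    {
        "storage_layout": "memory_optimized",
        "num_neighbors": 50,
        "search_list_size": 100,
        "max_alpha": 1.2,
        "num_bits_per_dimension": 2,
    },
)
print(ddl)
```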
My Postgres server infra configuration:
- Installed on bare-metal Ubuntu 22
- PostgreSQL 16.4 (untuned)
- Memory available:

```
# free -mh
               total        used        free      shared  buff/cache   available
Mem:           1.0Ti        13Gi       986Gi       152Mi       7.3Gi       988Gi
Swap:             0B          0B          0B
```
- Extensions used:

```
pgdiskann=# \dx
                               List of installed extensions
    Name     | Version |   Schema   |                     Description
-------------+---------+------------+------------------------------------------------------
 plpgsql     | 1.0     | pg_catalog | PL/pgSQL procedural language
 vector      | 0.7.4   | public     | vector data type and ivfflat and hnsw access methods
 vectorscale | 0.3.0   | public     | pgvectorscale: Advanced indexing for vector data
(3 rows)
```
BTW, I don't think it's a wise choice to run 10M or more vectors on pgvector; it is simply too slow.
A timeout config is exactly useful for me too!
@KendrickChou @xiaofan-luan I will consider adding a timeout setting in the UI.
We have set different default timeouts based on the size of the dataset. Currently, the timeout config can be modified through code:
https://github.com/zilliztech/VectorDBBench/blob/b364fe316f72c86809d3203dc2b75437e9eabc90/vectordb_bench/__init__.py#L40-L56
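The size-keyed defaults described above can be sketched as follows. The thresholds and most values here are placeholders of my own, not VectorDBBench's actual table (which lives in the linked `__init__.py`); the only grounded number is 18000 s, the 5-hour limit visible in the log:

```python
# Illustrative sketch of size-keyed optimize timeouts. All values are
# assumptions except 18000 s (= 5 h), which appears in the log above;
# the real mapping is in vectordb_bench/__init__.py (linked above).
OPTIMIZE_TIMEOUTS = [
    (1_000_000, 15 * 60),         # up to 1M vectors: 15 min (assumed)
    (10_000_000, 5 * 60 * 60),    # up to 10M vectors: 18000 s = 5 h
    (100_000_000, 24 * 60 * 60),  # larger datasets: 24 h (assumed)
]

def optimize_timeout(num_vectors: int) -> int:
    """Return the timeout bucket for a dataset of num_vectors rows."""
    for limit, seconds in OPTIMIZE_TIMEOUTS:
        if num_vectors <= limit:
            return seconds
    return OPTIMIZE_TIMEOUTS[-1][1]

print(optimize_timeout(10_000_000))  # -> 18000
```

Editing the constants in `__init__.py` (or overriding them from a driver script before the runner starts) is currently the only way to change the limit.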
@agandra30 Well, I've not tried running 10M Cohere on PGVectorScale, so I won't be able to suggest any specific configuration.
I would, however, suggest making `maintenance_work_mem` larger than the index size; the catch is that you first need to create the index to learn the index size.
Even then, you may still need to raise the default timeout.
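Before the first build, a rough floor for that setting can be estimated from the case parameters. A back-of-envelope sketch (my own arithmetic and byte-size assumptions, not pgvectorscale's actual on-disk layout, so treat the result as an order of magnitude only):

```python
# Back-of-envelope size estimate for the 10M x 768-dim case (my own
# arithmetic, not pgvectorscale's real layout -- use it only as an
# order-of-magnitude floor when sizing maintenance_work_mem).
N = 10_000_000   # vectors (10M Cohere)
D = 768          # dimensions
FLOAT4 = 4       # bytes per float4 component
NEIGHBORS = 50   # num_neighbors from the index DDL above
EDGE_BYTES = 8   # assumed bytes per graph edge (tuple pointer)

vectors_gib = N * D * FLOAT4 / 1024**3
graph_gib = N * NEIGHBORS * EDGE_BYTES / 1024**3
print(f"vectors ~{vectors_gib:.1f} GiB, graph ~{graph_gib:.1f} GiB, "
      f"total ~{vectors_gib + graph_gib:.1f} GiB")
```

On the machine described above (1 TiB RAM), a `maintenance_work_mem` comfortably above an estimate of this size is feasible.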