harsha-simhadri / big-ann-benchmarks

Framework for evaluating ANNS algorithms on billion scale datasets.
https://big-ann-benchmarks.com
MIT License

Add support for pgvector's hnsw (0.7.4) and generic support for Postgres (16) indexes #309

Closed onurctirtir closed 1 week ago

onurctirtir commented 2 months ago

Closes https://github.com/harsha-simhadri/big-ann-benchmarks/issues/293.


This PR adds support for benchmarking pgvector's hnsw index-access-method with the runbooks and the datasets supported by big-ann-benchmarks.

To do that, this PR adds a base docker image that will also help us test other Postgres index-access-methods in the future. To make use of that image, some changes were needed in install.py so that other Postgres-based indexes can depend on a common docker image that already has Postgres installed. Note that install.py builds that base docker image only if the algorithm name starts with "postgres-". Once this PR is no longer a draft, this convention will be documented in the docs.
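The naming convention above can be sketched as follows. This is only an illustration of the "postgres-" prefix check; the function names and the exact install.py logic here are hypothetical, not the PR's actual code:

```python
# Illustrative sketch (NOT the PR's actual install.py code): algorithms whose
# name starts with "postgres-" trigger a build of the shared Postgres base image
# before their own docker image is built.
POSTGRES_ALGO_PREFIX = "postgres-"

def needs_postgres_base_image(algorithm_name: str) -> bool:
    """True when the shared Dockerfile.BasePostgres image must be built first."""
    return algorithm_name.startswith(POSTGRES_ALGO_PREFIX)

def plan_builds(algorithm_name: str) -> list:
    """Order of docker builds for a given streaming algorithm (hypothetical helper)."""
    builds = []
    if needs_postgres_base_image(algorithm_name):
        builds.append("Dockerfile.BasePostgres")  # common image with Postgres installed
    builds.append(f"neurips23/streaming/{algorithm_name}/Dockerfile")
    return builds
```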

This PR also adds BaseStreamingANNPostgres, which can be used to easily add support for other Postgres-based indexes in the future. One would simply need to define a new python wrapper that implements:

- determine_index_op_class(metric)
- determine_query_op(metric)

and that properly sets the following attributes in its __init__ method before calling super().__init__:

- name
- pg_index_method
- guc_prefix

Given that pgvector's hnsw is the first Postgres-based index that benefits from this infra (via this PR), neurips23/streaming/postgres-pgvector-hnsw/ can be seen as an example of how to make use of Dockerfile.BasePostgres and BaseStreamingANNPostgres in general to add support for more Postgres-based indexes.

Unlike the other algorithms under streaming, the time it takes to complete a runbook can be several times longer here. This is not because Postgres-based indexes are bad, but because SQL is the only interface to such indexes. All those insert / delete / search operations first have to build SQL queries, and, specifically for inserts, transferring the data to the Postgres server adds significant overhead. Unless we make some major changes in this repo to re-design "insert" so that it can benefit from Postgres's server-side copy functionality, we cannot do much about it. Other than that, please feel free to drop comments if you see any inefficiencies that I can quickly fix in my code. Note that I'm not a python expert, hence sincerely requesting this :)
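To illustrate where the insert overhead comes from, the sketch below renders a batch of vectors into a multi-row INSERT statement as text, which is then what travels over the wire. The function name and table/column names are hypothetical; the base class's actual SQL construction may differ:

```python
# Illustrative sketch of the per-batch insert cost (names hypothetical):
# every batch must be serialized into SQL text and shipped to the server,
# unlike a server-side COPY which streams raw data.
def build_insert_sql(table, ids, vectors):
    """Render one multi-row INSERT; each vector is serialized as a text literal."""
    rows = ", ".join(
        f"({i}, '[{', '.join(str(x) for x in vec)}]')"
        for i, vec in zip(ids, vectors)
    )
    return f"INSERT INTO {table} (id, v) VALUES {rows};"

sql = build_insert_sql("points", [0, 1], [[0.1, 0.2], [0.3, 0.4]])
# For large batches, this string building + text transfer dominates insert time.
```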

And, to explain the build & query time params that have to be provided in such a Postgres-based indexing algorithm's config.yaml file, let's take a look at the following snippet from pgvector's hnsw config.yaml file:

random-xs:
    postgres-pgvector-hnsw:
      docker-tag: neurips23-streaming-postgres-pgvector-hnsw
      module: neurips23.streaming.postgres-pgvector-hnsw.wrapper
      constructor: PostgresPgvectorHnsw
      base-args: ["@metric"]
      run-groups:
        base:
          args: |
            [{"m":16, "ef_construction":64, "insert_conns":16}]
          query-args: |
            [{"ef_search":50, "query_conns":8}]

The presence of insert_conns & query_conns is enforced by BaseStreamingANNPostgres, and any Postgres-based index implementation that we add to this repo in the future must also provide values for them in its config.yaml file.

Other than those two params, any other parameters that need to be specified when building the index or when performing an index-scan (read as the "search" step) must be provided via config.yaml too.
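A hedged sketch of how these params might translate into SQL, assuming the base class maps build-time args (minus insert_conns) to index storage parameters and query-time args (minus query_conns) to GUCs under guc_prefix. The exact translation lives in BaseStreamingANNPostgres; the function names here are illustrative:

```python
# Assumed mapping (illustrative, not the actual BaseStreamingANNPostgres code):
# build args -> CREATE INDEX ... WITH (...), query args -> SET <guc_prefix>.<param>.
def build_index_sql(table, method, op_class, build_args):
    # insert_conns controls client-side parallelism, not the index itself.
    opts = {k: v for k, v in build_args.items() if k != "insert_conns"}
    with_clause = ", ".join(f"{k} = {v}" for k, v in opts.items())
    return f"CREATE INDEX ON {table} USING {method} (v {op_class}) WITH ({with_clause});"

def session_guc_sql(guc_prefix, query_args):
    # query_conns controls client-side parallelism; the rest become session GUCs.
    return [
        f"SET {guc_prefix}.{k} = {v};"
        for k, v in query_args.items() if k != "query_conns"
    ]
```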

And while we're at it, let's take a closer look at what the python wrapper should look like when adding support for a Postgres-based index in the future. From the wrapper added for pgvector's hnsw:

from neurips23.streaming.base_postgres import BaseStreamingANNPostgres

class PostgresPgvectorHnsw(BaseStreamingANNPostgres):
    def __init__(self, metric, index_params):
        self.name = "PostgresPgvectorHnsw"
        self.pg_index_method = "hnsw"  # access method used in CREATE INDEX ... USING
        self.guc_prefix = "hnsw"       # prefix for query-time GUCs, e.g. hnsw.ef_search

        super().__init__(metric, index_params)

    # Can add support for other metrics here.
    def determine_index_op_class(self, metric):
        if metric == 'euclidean':
            return "vector_l2_ops"
        else:
            raise Exception('Invalid metric')

    # Can add support for other metrics here.
    def determine_query_op(self, metric):
        if metric == 'euclidean':
            return "<->"
        else:
            raise Exception('Invalid metric')