stac-utils / stac-fastapi-elasticsearch-opensearch

Elasticsearch backend for stac-fastapi with OpenSearch support.
https://stac-utils.github.io/stac-fastapi-elasticsearch-opensearch
MIT License

Paginated search queries now don't return a token on the last page #243

Closed · pedro-cf closed this 4 months ago

pedro-cf commented 5 months ago

Related Issue(s):

Merge dependency(ies):

Description:

PR Checklist:

StijnCaerts commented 4 months ago

es_response["hits"]["total"]["value"] might not be accurate if there are more than 10,000 hits. Then a lower bound will be indicated with the "gte" relation in es_response["hits"]["total"]["relation"].

https://www.elastic.co/guide/en/elasticsearch/reference/current/search-your-data.html#track-total-hits

So I'd suggest using the count from the search only if es_response["hits"]["total"]["relation"] == "eq". The async count task is still useful when the count exceeds the default threshold.
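
A minimal sketch of that suggestion (assuming es_response and count_task as in the backend's search code; not the PR's final implementation):

    # Trust the search total only when Elasticsearch reports it as exact.
    total = es_response["hits"]["total"]
    if total["relation"] == "eq":
        matched = total["value"]  # exact count, no extra request needed
    else:
        # "gte" means the value is only a lower bound; use the count request
        matched = (await count_task).get("count")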

jonhealy1 commented 4 months ago

Maybe we can add to this test or do something similar to make sure that the last page isn't returning a token? #244

pedro-cf commented 4 months ago

Maybe we can add to this test or do something similar to make sure that the last page isn't returning a token? #244

Greetings, a test for this already exists: test_pagination_item_collection.

It used to expect 7 requests for 6 items; I've now fixed it to expect 6 requests for 6 items.

https://github.com/stac-utils/stac-fastapi-elasticsearch-opensearch/pull/243/files#diff-a38bef06b3f69317891c409bc81044f53cca52841ea63ec9a6821ea08dea98f6L593-L605

test_item_search_temporal_window_timezone_get and test_pagination_post also test this.

These are all tests I've fixed up.

pedro-cf commented 4 months ago

es_response["hits"]["total"]["value"] might not be accurate if there are more than 10,000 hits. Then a lower bound will be indicated with the "gte" relation in es_response["hits"]["total"]["relation"].

https://www.elastic.co/guide/en/elasticsearch/reference/current/search-your-data.html#track-total-hits

So I'd suggest using the count from the search only if es_response["hits"]["total"]["relation"] == "eq". The async count task is still useful when the count exceeds the default threshold.

Greetings @StijnCaerts, thank you so much for the feedback.

Do you think this approach is correct?

        search_task = asyncio.create_task(
            self.client.search(
                index=index_param,
                ignore_unavailable=ignore_unavailable,
                body=search_body,
                size=limit,
            )
        )

        count_task = asyncio.create_task(
            self.client.count(
                index=index_param,
                ignore_unavailable=ignore_unavailable,
                body=search.to_dict(count=True),
            )
        )

        try:
            es_response = await search_task
        except exceptions.NotFoundError:
            raise NotFoundError(f"Collections '{collection_ids}' do not exist")

        hits = es_response["hits"]["hits"]
        items = (hit["_source"] for hit in hits)

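        # Use the search total by default; Elasticsearch may report only a
        # lower bound when more than 10,000 documents match ("gte" relation).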
        matched = es_response["hits"]["total"]["value"]
        if es_response["hits"]["total"]["relation"] != "eq":
            if count_task.done():
                try:
                    matched = count_task.result().get("count")
                except Exception as e:
                    logger.error(f"Count task failed: {e}")
        else:
            count_task.cancel()

1. Assume matched = es_response["hits"]["total"]["value"] as the default.
2. If es_response["hits"]["total"]["relation"] != "eq", use the count_task; if the count fails, log the error and keep the default as a backup.
3. Otherwise use the default and cancel the count_task if it's still running (I assume count_task.cancel() is safe to call).

StijnCaerts commented 4 months ago

I think either getting the correct count or no count at all would be the preferred behaviour. The context extension is deprecated at this point, and I don't know if there is an alternative available. Maybe we should handle this in a separate PR.

https://github.com/stac-api-extensions/context https://github.com/radiantearth/stac-api-spec/issues/396

pedro-cf commented 4 months ago

I think either getting the correct count or no count at all would be the preferred behaviour.

Are there situations where the count can fail? Is this why count_task.result().get("count") is wrapped in a try/except?

With the addition of the page in the token, it's critical to get a matched value every time.

StijnCaerts commented 4 months ago

Are there situations where the count can fail? Is this why count_task.result().get("count") is wrapped in a try/except?

Probably for the same reasons a search request could fail, e.g. an invalid collection, a bad query, ...

Without an accurate count, it is impossible to tell if we're on the last page. The only case where you are sure you're on the last page is when the current page size is less than the limit.
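
As a sketch of that last-page rule (hits and limit as in the snippets above; not code from the PR):

    # Without a reliable total, an underfull page is the only certain signal:
    # fewer hits than requested means nothing follows this page.
    if len(hits) < limit:
        next_token = None  # definitely the last page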

jonhealy1 commented 4 months ago

I don't know why it's in a try/except myself. A lot of the DB code was done by a previous contributor.

jonhealy1 commented 4 months ago

es_response["hits"]["total"]["value"] is accurate up to 10,000 results. If the count_task itself fails, which is probably unlikely, we could fall back to this value, since most people are not going to paginate through more than 10,000 results.

StijnCaerts commented 4 months ago

Indeed, if you are paging through more than 10,000 hits, I think there is no harm in one extra request with an empty response 😉

StijnCaerts commented 4 months ago

Another option would be to implement this workaround (request one result more than the limit, and only return a next token when that extra hit comes back): https://stackoverflow.com/a/67200853/9339603

Pros:

Cons:

pedro-cf commented 4 months ago

Another option would be to implement this workaround (request one result more than the limit, and only return a next token when that extra hit comes back): https://stackoverflow.com/a/67200853/9339603

@StijnCaerts I've tried implementing this approach:

        search_after = None

        if token:
            search_after = urlsafe_b64decode(token.encode()).decode().split(",")

        query = search.query.to_dict() if search.query else None

        index_param = indices(collection_ids)

        search_task = asyncio.create_task(
            self.client.search(
                index=index_param,
                ignore_unavailable=ignore_unavailable,
                query=query,
                sort=sort or DEFAULT_SORT,
                search_after=search_after,
                size=limit + 1,  # Fetch one more result than the limit
            )
        )

        count_task = asyncio.create_task(
            self.client.count(
                index=index_param,
                ignore_unavailable=ignore_unavailable,
                body=search.to_dict(count=True),
            )
        )

        try:
            es_response = await search_task
        except exceptions.NotFoundError:
            raise NotFoundError(f"Collections '{collection_ids}' do not exist")

        hits = es_response["hits"]["hits"]
        items = (hit["_source"] for hit in hits[:limit])

        next_token = None
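        # We fetched limit + 1 hits: an extra hit signals another page, so
        # build the token from the sort values of the last returned item.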
        if len(hits) > limit:
            if hits and (sort_array := hits[limit - 1].get("sort")):
                next_token = urlsafe_b64encode(
                    ",".join([str(x) for x in sort_array]).encode()
                ).decode()

        matched = None
        if count_task.done():
            try:
                matched = count_task.result().get("count")
            except Exception as e:
                logger.error(f"Count task failed: {e}")

        return items, matched, next_token

but I'm getting these errors on these tests:

FAILED stac_fastapi/tests/api/test_api.py::test_app_query_extension_limit_gt10000 - elasticsearch.BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'Result window is too large, from + size must be less than or e...
FAILED stac_fastapi/tests/api/test_api.py::test_app_query_extension_limit_10000 - elasticsearch.BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'Result window is too large, from + size must be less than or e...
pedro-cf commented 4 months ago

As I understand it, Elasticsearch itself doesn't allow result windows above 10,000 (by default), so when you pass 10,000 + 1 it responds with a bad request.

I was a bit confused about why these tests weren't failing on the main branch, but it's because of this entry in the stac-fastapi changelog:

  • Limit values above 10,000 are now replaced with 10,000 instead of returning a 400 error (#526)
pedro-cf commented 4 months ago

I ended up applying this https://github.com/stac-utils/stac-fastapi-elasticsearch-opensearch/pull/243#issuecomment-2096698581 with edge-case handling, in this case for the 10,000 limit.
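
For reference, here is a sketch of how the cap can compose with the limit + 1 trick (hypothetical, not the merged diff; 10,000 stands in for Elasticsearch's default index.max_result_window, and the other names follow the earlier snippet):

    max_result_window = 10_000  # Elasticsearch's default index.max_result_window

    # Never ask for more than the result window allows, even when
    # over-fetching by one to detect the next page.
    size_limit = min(limit + 1, max_result_window)

    es_response = await self.client.search(
        index=index_param,
        ignore_unavailable=ignore_unavailable,
        query=query,
        sort=sort or DEFAULT_SORT,
        search_after=search_after,
        size=size_limit,
    )

    hits = es_response["hits"]["hits"]
    items = (hit["_source"] for hit in hits[:limit])

    # At the window boundary the extra hit can't be requested, so the
    # over-fetch check only applies while limit < max_result_window.
    next_token = None
    if len(hits) > limit and limit < max_result_window:
        if sort_array := hits[limit - 1].get("sort"):
            next_token = urlsafe_b64encode(
                ",".join(str(x) for x in sort_array).encode()
            ).decode()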

pedro-cf commented 4 months ago

What do you think, @jonhealy1?

I don't really like this hardcoded 10,000, but it's basically set by:

jonhealy1 commented 4 months ago

Can we import the max limit from stac-fastapi?

pedro-cf commented 4 months ago

Can we import the max limit from stac-fastapi?

Yes, we can use this:

import stac_fastapi.types.search
max_result_window = stac_fastapi.types.search.Limit.le

or would you prefer this:

from stac_fastapi.types.search import Limit
max_result_window = Limit.le

I prefer option 1 since it makes it clear where the limit comes from.
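
For illustration, capping the over-fetch with this value could look like (reusing the limit and max_result_window names from the snippets above):

    import stac_fastapi.types.search

    # Mirror stac-fastapi's upper bound instead of hardcoding 10,000.
    max_result_window = stac_fastapi.types.search.Limit.le
    size_limit = min(limit + 1, max_result_window)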

jonhealy1 commented 4 months ago

I prefer option one too

pedro-cf commented 4 months ago

I prefer option one too

added

StijnCaerts commented 4 months ago

I just left one small remark. Otherwise it looks good to me, thanks for the contribution!