Closed: pedro-cf closed this pull request 4 months ago.
es_response["hits"]["total"]["value"]
might not be accurate if there are more than 10,000 hits. Then a lower bound will be indicated with the "gte"
relation in es_response["hits"]["total"]["relation"]
.
So I'd suggest to use the count from the search only if es_response["hits"]["total"]["relation"] == "eq"
. The async count tasks are still useful when the count exceeds the default threshold.
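For illustration, the relevant part of an Elasticsearch search response then looks roughly like this (values made up for the example):

```python
# Shape of the relevant part of a search response past the 10,000 cap
# (values here are made up for illustration):
es_response = {
    "hits": {
        "total": {
            "value": 10000,     # only a lower bound here
            "relation": "gte",  # "eq" would mean the value is exact
        },
        "hits": [],
    }
}

# Only trust the reported total when the relation is exact.
if es_response["hits"]["total"]["relation"] == "eq":
    matched = es_response["hits"]["total"]["value"]
```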
Maybe we can add to this test or do something similar to make sure that the last page isn't returning a token? #244
> Maybe we can add to this test or do something similar to make sure that the last page isn't returning a token? #244
Greetings, a test for this already exists: `test_pagination_item_collection`. It used to test for 7 requests for 6 items; I've now fixed it to 6 requests for 6 items. `test_item_search_temporal_window_timezone_get` and `test_pagination_post` also test this. These are all tests I've fixed up.
es_response["hits"]["total"]["value"]
might not be accurate if there are more than 10,000 hits. Then a lower bound will be indicated with the"gte"
relation ines_response["hits"]["total"]["relation"]
.So I'd suggest to use the count from the search only if
es_response["hits"]["total"]["relation"] == "eq"
. The async count tasks are still useful when the count exceeds the default threshold.
Greetings @StijnCaerts, thank you so much for the feedback. Do you think this approach is correct?
```python
search_task = asyncio.create_task(
    self.client.search(
        index=index_param,
        ignore_unavailable=ignore_unavailable,
        body=search_body,
        size=limit,
    )
)

count_task = asyncio.create_task(
    self.client.count(
        index=index_param,
        ignore_unavailable=ignore_unavailable,
        body=search.to_dict(count=True),
    )
)

try:
    es_response = await search_task
except exceptions.NotFoundError:
    raise NotFoundError(f"Collections '{collection_ids}' do not exist")

hits = es_response["hits"]["hits"]
items = (hit["_source"] for hit in hits)

# Default to the total reported by the search itself.
matched = es_response["hits"]["total"]["value"]
if es_response["hits"]["total"]["relation"] != "eq":
    # The reported total is only a lower bound; prefer the count task.
    if count_task.done():
        try:
            matched = count_task.result().get("count")
        except Exception as e:
            logger.error(f"Count task failed: {e}")
    else:
        count_task.cancel()
        matched = es_response["hits"]["total"]["value"]
```
`matched = es_response["hits"]["total"]["value"]` is used as the default; when `es_response["hits"]["total"]["relation"] != "eq"` it uses the `count_task`, and if the count fails it logs the error but still uses the default as a backup.

I think either getting the correct count or no count at all would be the preferred behaviour. The context extension is deprecated at this point; I don't know if there is an alternative available. Maybe we should handle this in a separate PR.
https://github.com/stac-api-extensions/context
https://github.com/radiantearth/stac-api-spec/issues/396
> I think either getting the correct count or no count at all would be the preferred behaviour.
Are there situations where the count can fail? Is this why `count_task.result().get("count")` is surrounded in a try/except? With the addition of the page in the token, it's critical to get a matched value every time.
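(To illustrate the point: if the pagination token carries a page index, deciding whether another page exists needs a trustworthy total. A hypothetical sketch, not code from this PR:)

```python
# Hypothetical: with a page index in the token, the only way to know
# whether to emit a next token is to compare against the matched total.
def has_next_page(page: int, limit: int, matched: int) -> bool:
    return page * limit < matched
```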
> Are there situations where the count can fail? Is this why `count_task.result().get("count")` is surrounded in a try/except?
Probably for the same reason a search request could fail, e.g. an invalid collection, a bad query, ...
Without an accurate count, it is impossible to tell if we're on the last page. The only case where you are sure you're on the last page is when the current page size is less than the limit.
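A minimal sketch of that observation, assuming `hits` and `limit` as in the snippets above:

```python
page_size = len(hits)

if page_size < limit:
    # Fewer results than requested: this is certainly the last page.
    next_token = None
else:
    # A full page: without an accurate total we can't tell whether
    # another page follows, so a possibly-empty extra page may be served.
    ...
```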
I don't know why it's in a try/except myself; a lot of the db stuff was done by an earlier contributor.
es_response["hits"]["total"]["value"] is accurate up to 10,000 results. If the actual count_task fails - which is probably unlikely - we can use this maybe because most people are not going to paginate through more than 10,000 results.
Indeed, if you are paging through more than 10,000 hits, I think there is no harm in 1 extra request with an empty response 😉
Another option would be to implement this workaround: https://stackoverflow.com/a/67200853/9339603

Pro's:
- exact count in the search response (no need to check `$.hits.total.relation`: `eq` / `gte`)

Con's:
- `limit=0`
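For reference, the linked workaround boils down to passing `track_total_hits` so Elasticsearch computes the exact total instead of capping it at 10,000. A sketch using the names from the snippets above (in older client versions the flag goes in the request body instead):

```python
# One search request, exact total, no separate count task needed.
es_response = await self.client.search(
    index=index_param,
    ignore_unavailable=ignore_unavailable,
    body=search_body,
    size=limit,
    track_total_hits=True,  # total.relation will then always be "eq"
)
matched = es_response["hits"]["total"]["value"]
```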
> Another option would be to implement this workaround: https://stackoverflow.com/a/67200853/9339603
@StijnCaerts I've tried implementing this approach:
```python
search_after = None
if token:
    search_after = urlsafe_b64decode(token.encode()).decode().split(",")

query = search.query.to_dict() if search.query else None
index_param = indices(collection_ids)

search_task = asyncio.create_task(
    self.client.search(
        index=index_param,
        ignore_unavailable=ignore_unavailable,
        query=query,
        sort=sort or DEFAULT_SORT,
        search_after=search_after,
        size=limit + 1,  # Fetch one more result than the limit
    )
)

count_task = asyncio.create_task(
    self.client.count(
        index=index_param,
        ignore_unavailable=ignore_unavailable,
        body=search.to_dict(count=True),
    )
)

try:
    es_response = await search_task
except exceptions.NotFoundError:
    raise NotFoundError(f"Collections '{collection_ids}' do not exist")

hits = es_response["hits"]["hits"]
items = (hit["_source"] for hit in hits[:limit])

# If the extra document came back there is a next page; build its token
# from the sort values of the last item actually returned.
next_token = None
if len(hits) > limit:
    if hits and (sort_array := hits[limit - 1].get("sort")):
        next_token = urlsafe_b64encode(
            ",".join([str(x) for x in sort_array]).encode()
        ).decode()

matched = None
if count_task.done():
    try:
        matched = count_task.result().get("count")
    except Exception as e:
        logger.error(f"Count task failed: {e}")

return items, matched, next_token
```
but I'm getting these errors on these tests:
```
FAILED stac_fastapi/tests/api/test_api.py::test_app_query_extension_limit_gt10000 - elasticsearch.BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'Result window is too large, from + size must be less than or e...
FAILED stac_fastapi/tests/api/test_api.py::test_app_query_extension_limit_10000 - elasticsearch.BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'Result window is too large, from + size must be less than or e...
```
As I understand it, Elasticsearch itself doesn't allow result windows above 10,000 (by default), so when you pass 10,000 + 1 it responds with a bad request.
I was a bit confused about why these tests weren't failing on the main branch, but it's because of this entry in the stac-fastapi changelog:

> Limit values above 10,000 are now replaced with 10,000 instead of returning a 400 error (#526)
I ended up applying this https://github.com/stac-utils/stac-fastapi-elasticsearch-opensearch/pull/243#issuecomment-2096698581 with edge-case handling, in this case for the 10,000 result window.
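A sketch of what that edge-case handling could look like (hardcoding the window here; the review below swaps it for an import):

```python
# Never request more than the result window allows; the +1 trick for
# detecting a next page only fits when there is room for the extra doc.
max_result_window = 10_000  # Elasticsearch's default result window
size_limit = min(limit + 1, max_result_window)

search_task = asyncio.create_task(
    self.client.search(
        index=index_param,
        ignore_unavailable=ignore_unavailable,
        query=query,
        sort=sort or DEFAULT_SORT,
        search_after=search_after,
        size=size_limit,
    )
)
```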
What do you think, @jonhealy1?
I don't really like this hardcoded 10,000, but it's basically set by Elasticsearch's default `max_result_window`.
Can we import the max limit from stac-fastapi?
> Can we import the max limit from stac-fastapi?
Yes, we can use this:

```python
import stac_fastapi.types.search

max_result_window = stac_fastapi.types.search.Limit.le
```

or would you prefer this:

```python
from stac_fastapi.types.search import Limit

max_result_window = Limit.le
```
I prefer option 1 since it makes it clear where the limit comes from.
I prefer option one too
> I prefer option one too
Added.
I just left one small remark. Otherwise it looks good to me, thanks for the contribution!
Related Issue(s):

Merge dependencies:

Description:
- `test_pagination_token_idempotent` had an indentation issue
- updated `execute_search` to make use of `es_response["hits"]["total"]["value"]`

PR Checklist:
- Code is formatted and linted (run `pre-commit run --all-files`)
- Tests pass (run `make test`)