We are encountering numerous deadlock errors when using stac-fastapi-pgstac for POST /search requests, such as the following:

```
DETAIL: Process 20810 waits for AccessShareLock on relation 18359 of database 16399; blocked by process 600.
Process 600 waits for AccessExclusiveLock on relation 152918537 of database 16399; blocked by process 20810.
HINT: See server log for query details.
```

In this instance, 18359 refers to the `partition_steps` materialized view, and 152918537 refers to `_items_491914_202410` (an item partition). It seems likely that this issue is caused by an ongoing ingestion process (using pypgstac) running concurrently with the search, which takes a lock on the mentioned partition. I'm curious how the `partition_steps` view is impacted by ingestion and what strategies can mitigate this issue. Could the `update_partition_stats_q` function be contributing to the problem? For context: the usage queue is enabled, but perhaps the frequency at which it is triggered needs adjustment.