Closed · wesbragagt closed this 2 months ago
@wesbragagt Here is what I'd recommend:
If `temp_file_limit` is set to `-1`, Postgres assumes it has unlimited disk space for intermediate calculations. If you are running expensive operations, you can run into issues like the ones you are seeing. I'd recommend setting it to a static number (10-25GB) and ensuring you have enough disk space allocated for (a) that temporary storage, (b) your WAL logs, and (c) the main database.
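For example (a minimal sketch, assuming superuser access via `psql`; in an operator-managed cluster the same setting may need to go through your Postgres configuration layer instead):

```sql
-- Cap per-process temporary file usage at 20GB (illustrative value;
-- size it against your allocated disk). Without a unit suffix, the
-- value is interpreted as kilobytes.
ALTER SYSTEM SET temp_file_limit = '20GB';

-- temp_file_limit is reloadable; no restart required.
SELECT pg_reload_conf();
```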
Usually, lots of disk space is consumed by expensive intermediate calculations. To optimize this, use `EXPLAIN (VERBOSE, BUFFERS, ANALYZE)` to diagnose bottlenecks and space usage for the query. In particular, ensure you have indices on each of the fields you are sorting, joining, partitioning, and grouping by.
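For example (a rough sketch; `orders`, `customer_id`, `created_at`, and the index names are hypothetical stand-ins for your own schema):

```sql
-- Profile the query: ANALYZE executes it, BUFFERS reports shared and
-- temp block usage, VERBOSE adds output details. Look for "Sort Method:
-- external merge  Disk: ..." and large "temp read/written" counts.
EXPLAIN (VERBOSE, BUFFERS, ANALYZE)
SELECT customer_id, sum(amount)
FROM orders
WHERE created_at >= now() - interval '30 days'
GROUP BY customer_id
ORDER BY sum(amount) DESC;

-- If the plan shows on-disk sorts or hash spills, index the columns
-- used for filtering, joining, and grouping. CONCURRENTLY avoids
-- blocking writes while the index builds.
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at);
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);
```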
Is it possible that the result set from this query is massive? If so, you might try setting a `LIMIT`.
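For example (again with hypothetical names):

```sql
-- If the client only needs the top rows, cap the result set.
-- With ORDER BY, Postgres can often use a top-N heapsort here
-- instead of a full on-disk sort.
SELECT customer_id, sum(amount) AS total
FROM orders
GROUP BY customer_id
ORDER BY total DESC
LIMIT 100;
```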
Noted the suggestions; I'm going to take action on tuning it. Thank you for the clear recommendations @fullykubed.
What is your question?
We are facing an issue with a query that keeps logging errors in the primary pod. I've attempted to restart the pod to clear some memory usage and verified that our `temp_file_limit` is set to `-1`. We have 20GB of initial storage set for our database. I'm looking for guidance on tuning the system to avoid these errors as much as possible. I know some query operations can always be made more efficient. Perhaps setting a static `temp_file_limit` would be a good idea?
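For reference, here is how the current value and overall temp-file usage can be checked (a minimal sketch, assuming direct `psql` access to the primary):

```sql
-- Show the active per-process temp-file cap; -1 means unlimited.
SHOW temp_file_limit;

-- Cumulative temp-file usage per database since the last stats reset.
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_space_used
FROM pg_stat_database
WHERE datname IS NOT NULL;
```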
Query:
What primary components of the stack does this relate to?
terraform