Open erimatnor opened 4 years ago
@erimatnor I am facing the same issue: the background worker scheduler process's memory keeps increasing. Per your comments, one workaround is to set the timescaledb.ignore_invalidation_older_than parameter to limit the number of tuples scanned in each aggregation loop. Is that correct?
The Scanner API could benefit from a per-tuple memory context that is reset on each iteration of the scan loop. This would allow scanning an unbounded number of tuples without consuming a lot of memory.
This can, of course, already be managed manually today, but it might be nice to expose it as a convenience option on the Scanner.
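For reference, a minimal sketch of the manual pattern using PostgreSQL's MemoryContext API. The memory-context calls (AllocSetContextCreate, MemoryContextSwitchTo, MemoryContextReset, MemoryContextDelete) are the real PostgreSQL APIs; scan_next_tuple() and process_tuple() are hypothetical stand-ins for the Scanner's iteration and the caller's per-tuple work, since the exact Scanner loop shape depends on the TimescaleDB version.

```c
#include "postgres.h"
#include "utils/memutils.h"

/* Sketch: per-tuple context reset each loop iteration so memory use
 * stays bounded regardless of how many tuples the scan visits. */
static void
scan_with_per_tuple_context(void)
{
	MemoryContext per_tuple_ctx =
		AllocSetContextCreate(CurrentMemoryContext,
							  "per-tuple context",
							  ALLOCSET_DEFAULT_SIZES);
	TupleInfo  *ti;

	/* scan_next_tuple() is a hypothetical iterator over the scan */
	while ((ti = scan_next_tuple()) != NULL)
	{
		MemoryContext oldctx = MemoryContextSwitchTo(per_tuple_ctx);

		/* hypothetical per-tuple work; any palloc'd memory here
		 * lands in per_tuple_ctx */
		process_tuple(ti);

		MemoryContextSwitchTo(oldctx);
		/* free everything allocated while processing this tuple */
		MemoryContextReset(per_tuple_ctx);
	}

	MemoryContextDelete(per_tuple_ctx);
}
```

A built-in Scanner option could fold the switch/reset pair into the scan loop itself, so callbacks run in a short-lived context by default.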