Open lospejos opened 11 months ago
The need for cleaning up the files as we progress through logical decoding has not been a priority yet, due to the general availability of blob storage in the different cloud providers (and the Unix mount point facilities associated with them) and the "infinite capacity" idea. That said, it would be good to implement some kind of cleanup, yes.

The tradeoff that is complex to orchestrate correctly is the need to reclaim disk space versus the need to be able to debug something that went wrong after the fact.
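In the meantime, something along these lines can reclaim the space manually (a sketch, not a built-in pgcopydb command; the `-mtime +2` cutoff is an assumption you should adjust against your own replication progress before deleting anything):

```shell
# Hypothetical manual cleanup sketch (not a pgcopydb feature): assumes
# change files that logical replication has already applied downstream
# are safe to remove -- verify replication progress first.
WORKDIR="${HOME}/.local/share/pgcopydb"

if [ -d "$WORKDIR" ]; then
    # Dry run: list *.json / *.sql change files older than two days.
    find "$WORKDIR" -type f \( -name '*.json' -o -name '*.sql' \) -mtime +2 -print
    # Once verified, uncomment to actually reclaim the space:
    # find "$WORKDIR" -type f \( -name '*.json' -o -name '*.sql' \) -mtime +2 -delete
fi
```

Keeping the delete commented out by default preserves the debugging value of the files until you have confirmed the changes were applied.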
We are trying to migrate a production database (~3 TB in size, with plenty of changes happening constantly).

pgcopydb version `0.14.1.14.gbb2e3e0`, built from sources.

After completing the initial load and starting the logical replication phase, replication stops after some time (a couple of days) with an error complaining about free storage space (`no space left on device`). We see that all the free space is taken up by files in `$HOME/.local/share/pgcopydb` (mainly `*.json` and `*.sql` files).

Why are these files still present in the directory even after successful data replication? Is this a file-cleanup bug, or is this behavior by design? Or could it indicate some other error (some part of pgcopydb is stuck/hung and not cleaning up processed files)?
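For reference, this is roughly how we inspected the usage (a sketch using standard `du`/`find` from coreutils and findutils; the directory path is the one from the report above):

```shell
# Inspect how much space the pgcopydb work directory uses and how many
# of the *.json / *.sql change files have accumulated.
WORKDIR="${HOME}/.local/share/pgcopydb"

if [ -d "$WORKDIR" ]; then
    du -sh "$WORKDIR"                                  # total size
    find "$WORKDIR" -type f -name '*.json' | wc -l     # JSON file count
    find "$WORKDIR" -type f -name '*.sql'  | wc -l     # SQL file count
fi
```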