Closed: kotik-adjust closed this issue 1 year ago
Hello, possible reasons are: 1) there is a long-running transaction on the server (or on a replica with hot_standby_feedback = on), or a non-default vacuum_defer_cleanup_age setting, which prevents vacuum from removing dead rows; 2) vacuum's truncate phase couldn't get an exclusive lock on the table (if I remember correctly, it retries every 50 ms for up to five seconds and then gives up) because other transactions were using the table and holding locks.
More information would be in the VACUUM VERBOSE tablename output. A "stopping truncate due to conflicting lock request" line would mean reason 2).
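A minimal sketch of that check, assuming a hypothetical table named my_table:

```sql
-- Run vacuum with verbose output on the suspect table
VACUUM (VERBOSE) my_table;

-- If the truncate phase lost the lock race, the output contains a line like:
--   "my_table": stopping truncate due to conflicting lock request
-- which points to reason 2): concurrent transactions holding locks on the table.
```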
Hey @alexius2, thank you very much for your answer!
1) hot_standby_feedback is indeed on. Is that a reason?
2) One more nuance: the table in question is part of a partitioned table. Does that make any difference?
postgres@esh-acq-db-2 ~ $ psql -c "show hot_standby_feedback;"
hot_standby_feedback
----------------------
on
(1 row)
postgres@esh-acq-db-2 ~ $ psql -c "show vacuum_defer_cleanup_age;"
vacuum_defer_cleanup_age
--------------------------
0
(1 row)
1) Could be; there isn't enough information to know for sure. If there was a long-running transaction or significant replication lag during the pgcompacttable run, then yes. 2) No, I don't think so; table partitions behave here just like regular tables.
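Both conditions can be checked from SQL. A sketch using the standard pg_stat_activity and pg_stat_replication views (the replay_lag column requires PostgreSQL 10 or later):

```sql
-- Oldest open transactions on the primary
SELECT pid, state, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;

-- Per-standby replication lag (relevant with hot_standby_feedback = on)
SELECT application_name, replay_lag
FROM pg_stat_replication;
```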
old_snapshot_threshold = 3h, maybe?
“When this feature is enabled, freed space at the end of a relation cannot be released to the operating system, since that could remove information needed to detect the snapshot too old condition.”
Most definitely yes... The compaction algorithm as I implemented it in pgcompacttable can't work with a non-zero old_snapshot_threshold. pgcompacttable didn't check for it because old_snapshot_threshold didn't exist yet when I wrote the code.
Some people say old_snapshot_threshold should be removed from Postgres because it's evil :)
alter system set old_snapshot_threshold to DEFAULT;
+ restart Postgres helped
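For completeness, the fix sequence sketched as SQL (a restart is needed because old_snapshot_threshold can only be set at server start; the default of -1 disables the feature):

```sql
-- Reset the setting to its default (-1, i.e. disabled)
ALTER SYSTEM SET old_snapshot_threshold TO DEFAULT;

-- ...restart PostgreSQL, then confirm:
SHOW old_snapshot_threshold;  -- should report -1
```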
Very good tool. 5/5 ⭐
Why is my table not getting compacted? :)
Time and again I get the following results:

Processing results: 30430 pages (66700 pages including toasts and indexes), size has been reduced by 0.000B (0.000B including toasts and indexes) in total. This attempt has been initially expected to compact ~24% more space (7444 pages, 58.161MB)