metasnw opened this issue 6 years ago (status: Open)
Suggestion: also take a look at the vacuum settings of metasfresh-db.
General settings in `postgresql.conf`:

```
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.05
```
Further reading: https://medium.com/contactually-engineering/postgres-at-scale-query-performance-and-autovacuuming-for-large-tables-d7e8ad40b16b
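For context: autovacuum vacuums a table once the number of dead tuples exceeds `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples`. With the defaults (threshold 50, scale factor 0.2), a 10M-row table is only vacuumed after ~2M rows are dead; a 0.05 scale factor lowers that to ~500K. As a sketch, the general settings could also be applied without editing `postgresql.conf` by using `ALTER SYSTEM` (available since PostgreSQL 9.4, which writes to `postgresql.auto.conf`):

```sql
-- Sketch: apply the proposed general settings cluster-wide
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.05;
ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.05;

-- Reload the configuration so the new values take effect without a restart
SELECT pg_reload_conf();
```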
In particular for this table, we might want to be stricter:
```sql
ALTER TABLE public.t_query_selection SET (autovacuum_vacuum_scale_factor = 0);
ALTER TABLE public.t_query_selection SET (autovacuum_vacuum_threshold = 10000);
```
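With a scale factor of 0, the per-table threshold becomes a fixed number: autovacuum would kick in at 10,000 dead tuples regardless of how large `t_query_selection` grows. A quick way to verify the per-table storage options were actually applied:

```sql
-- Sketch: per-table autovacuum overrides are stored as reloptions in pg_class
SELECT relname, reloptions
FROM pg_class
WHERE relname = 't_query_selection';
```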
Further info: https://blog.2ndquadrant.com/autovacuum-tuning-basics/
WRT changing the settings, this might be a good approach: https://stackoverflow.com/a/42508925/1012103
AFAIU we could put those SQL statements into our create_db.sh in metasfresh-docker.
Is this a bug or feature request?
bug
What is the current behavior?
When importing 30K products or 30K partners, the table grows to 60-120 GB.
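To confirm while reproducing that the growth is dead-tuple bloat rather than live data, something like the following could be run during the import (table name taken from this issue; the stats are as tracked by PostgreSQL's statistics collector and may lag slightly):

```sql
-- Sketch: total on-disk size of the table, including indexes and TOAST
SELECT pg_size_pretty(pg_total_relation_size('public.t_query_selection'));

-- Live vs. dead tuple counts, and when autovacuum last ran on the table
SELECT n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 't_query_selection';
```

If the table is already bloated, a plain `VACUUM` only marks the space reusable; `VACUUM FULL public.t_query_selection;` would rewrite the table and return the space to the OS, at the cost of an exclusive lock for the duration.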
Which are the steps to reproduce?
problems:
What is the expected or desired behavior?
It should stay lean as before; I don't recall this behaviour on imports a while ago.