When requesting, parsing and inserting every available document into the database (24 languages * 5,555 docs), Python's memory allocation steadily climbs to about 3.5 GB shortly before the run finishes.
A likely source is large intermediate variables that are never overwritten or released by the garbage collector.
This should be fixed so the code can run on lower-performance hardware, especially since CPU usage is already quite low.
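To narrow down where the allocations accumulate, a minimal sketch using the standard-library `tracemalloc` module could be wrapped around the ingestion loop. The loop body below is a hypothetical stand-in, since the actual request/parse/insert code isn't shown here:

```python
import tracemalloc

tracemalloc.start()

# Hypothetical stand-in for the real ingestion loop
# (request -> parse -> insert for each language/document pair).
parsed_docs = []
for i in range(1000):
    parsed_docs.append("document %d " % i * 100)

# Snapshot after the workload and show the top allocation sites.
snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics("lineno")
for stat in top_stats[:5]:
    print(stat)
```

Comparing snapshots taken at intervals during the run (via `snapshot.compare_to`) would show whether memory really grows monotonically or whether the garbage collector reclaims it between batches.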
Cannot be reproduced as of b79d7d233a117cedd5a6f35c5250b4e47b5ddf5e. Perhaps the garbage collector only acted up during the database initialisation on which this first occurred.