I managed to reproduce the `/` leak locally under `./run.sh serve-dev`; it's interesting that memory usage doesn't increase with every request, but maybe 1 in 4 (more often, initially).
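A minimal sketch of how per-request RSS growth could be watched from outside the server (this assumes `psutil` and `requests` are installed, and that the PID and URL below are adjusted; it is not how the numbers here were collected):

```python
# Hypothetical helper: watch the server's RSS while hammering one endpoint.
# SERVER_PID is the PID of the dev server process; adjust URL as needed.
import psutil
import requests

SERVER_PID = 12345
URL = "http://127.0.0.1:8000/?limit=64"

proc = psutil.Process(SERVER_PID)
previous = proc.memory_info().rss

for i in range(20):
    requests.get(URL)
    rss = proc.memory_info().rss
    delta = rss - previous
    print(f"request {i:2d}: rss={rss / 2**20:6.1f} MiB  delta={delta / 2**10:+8.1f} KiB")
    previous = rss
```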
OK, 16486d1 decreases maxrss for `/` from 115 MiB to 75 MiB, but doesn't do anything for `/?limit=64`, and still doesn't solve the leak (refreshing the page still increases maxrss).
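For context, maxrss here presumably refers to the peak RSS of the serving process; a hook like the following could log it after every request (an illustrative sketch, not the actual measurement setup; note that `ru_maxrss` is in KiB on Linux and bytes on macOS):

```python
# Illustrative only: log the process' peak RSS after every request.
import resource
from flask import Flask

app = Flask(__name__)

@app.after_request
def log_maxrss(response):
    maxrss_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    app.logger.info("maxrss: %.1f MiB", maxrss_kib / 1024)
    return response
```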
I tried to use filprofiler like this:
```sh
FLASK_DEBUG=1 \
FLASK_APP=src/reader/_app/wsgi.py \
READER_DB=db.sqlite \
fil-profile run -m \
    flask run -p 8000 --no-reload --without-threads --no-debugger

# in another terminal
for i in {1..10}; do curl -o /dev/null 'http://127.0.0.1:8000/?limit=64'; done
```
... but the results aren't really conclusive:
| | 100 requests | 50 requests | 10 requests |
|---|---|---|---|
| `Reader.get_entries()` | 2.2 MB | 2.2 MB | 1.2 MB |
| `_app.read_time()` | 2.5 MB | 2.5 MB | not in report |
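Another way to narrow it down would be to compare tracemalloc snapshots taken inside the server process a few requests apart, which attributes growth to allocation sites rather than whole functions (a sketch only, not something tried above):

```python
# Sketch: diff tracemalloc snapshots to see which lines keep allocating.
# This would have to run inside the web app process (e.g. behind a debug-only view).
import tracemalloc

tracemalloc.start(25)  # keep up to 25 frames per allocation

# ... serve a few requests ...
before = tracemalloc.take_snapshot()

# ... serve a few more requests ...
after = tracemalloc.take_snapshot()

for stat in after.compare_to(before, "lineno")[:10]:
    print(stat)
```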
My nano instance ends up hanging every week due to running out of memory.
The main users are:

* the `reader update --workers 4` process running on the hour (RSS measurements here)

Mitigations (that don't have to do with reader):

* `earlyoom --prefer uwsgi` or `earlyoom --prefer 'reader update --workers'`
Some observations:

* `/` and `/entry` (and not cached).
* `/` and `/?limit=64` are leaking memory (keep refreshing the page, see worker RSS increase).
* `/` increases worker RSS to over 100 MB.
* `/entry` is leaking memory as well.
* `/enclosure-tags` is not leaking memory.

This issue is to look into whether anything can be fixed on the reader side.
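If the growth turns out to be Python objects (rather than, say, SQLite page cache or other C-level allocations), a crude cross-check is to diff live object counts by type between requests; a sketch, assuming it can be called from somewhere inside the worker (e.g. a debug-only view):

```python
# Sketch: count live objects by type; growth across many requests
# points at the leaking type.
import gc
from collections import Counter

def object_counts():
    gc.collect()
    return Counter(type(o).__name__ for o in gc.get_objects())

# usage:
# before = object_counts()
# ... handle N requests ...
# after = object_counts()
# for name, delta in (after - before).most_common(10):
#     print(name, delta)
```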