zigster64 closed this 1 year ago
after a week++ running with the "anti-memory-leak" fixes
Trying a new approach now: using the updated http.zig, which has the pool fixes plus some more fine-tuning options, and doing a ReleaseFast build
Let's run that up for a week on AWS and see what happens
In the meantime, base memory use is now around 3-4 MB (a big improvement on the 24 MB of the previous version)
Looks to be fixed now
Logs show better detail: it's remained at 10260 1k blocks for 3 days straight. The AWS graphs look a bit inflated, I think; they don't exactly match what the application is reporting as max_rss, so I'm putting that extra bit of memory creep down to something else growing in the Docker container?
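For anyone wanting to cross-check the app-reported max_rss against the container graphs, here is a minimal sketch (Python just for illustration, since the measurement is language-agnostic; Linux-specific paths and units):

```python
import resource

def max_rss_kb():
    """Peak resident set size of this process.

    On Linux, ru_maxrss is reported in kilobytes (bytes on macOS).
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def vm_hwm_kb():
    """The kernel's view of the same high-water mark, via /proc (Linux only)."""
    try:
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmHWM:"):
                    return int(line.split()[1])  # value is in kB
    except OSError:
        pass
    return None
```

Container-level graphs (docker stats, CloudWatch) usually count the whole cgroup, including page cache and any other processes in the container, so they normally sit above the process's own max_rss.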
Main differences: cut back the embedded files to be < 32k each
Upgraded to the latest http.zig, which has options for further fine-tuning
There remains one teeny memory leak, which I think is related to the resReqPool growing when the pool is full.
It does correctly free the resources at the end, but max_rss remains bumped. Unless there is something else being allocated inside a response or request that is not being freed? Hard to tell
Running an experiment now with the pool-overflow branch of http.zig: set the pool size to 32, and shrink the audio files down so that the response buffers can be shrunk as well
Experimenting locally, this gives me a build with zero allocations, even during a DOS attack, so let's run it up on AWS for a week and see how it copes
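For reproducing that zero-allocations-under-load check locally, any HTTP hammer will do; here is a minimal sketch (Python; the URL and request counts are made up, point it at wherever the server actually listens):

```python
import concurrent.futures
import urllib.request

def hammer(url, total=200, concurrency=20):
    """Fire `total` GET requests with `concurrency` workers; return the status codes."""
    def one(_):
        with urllib.request.urlopen(url) as resp:
            return resp.status
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as ex:
        return list(ex.map(one, range(total)))
```

e.g. `hammer("http://127.0.0.1:8080/", total=1000, concurrency=50)` against a local run (hypothetical port), then watch the allocator's block count stay flat.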