Open. MMquant opened this issue 6 months ago.
Thanks for reporting this issue @MMquant. This will be worked on after the final implementation of https://github.com/projectdiscovery/utils/pull/362. Handling this OOM case in interactsh requires some more attention, since one of the security requirements is that data must be encrypted and kept volatile in RAM, which makes the process more prone to OOM-killing. As a temporary mitigation while the internal design is being reworked, I would recommend trying the disk-storage option (which should reduce pressure on the heap) and, if possible, configuring some swap, so that the OS first moves idle memory pages out of the interactsh process before resorting to killing it to free memory.
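A rough sketch of the suggested mitigation follows. The `-disk`/`-disk-path` flag names are taken from my reading of the interactsh-server CLI help and should be verified against your version (`interactsh-server -h`); the swap size and paths are examples only.

```shell
# 1. Enable disk-based storage to reduce heap pressure
#    (verify flag names with `interactsh-server -h` for your version)
interactsh-server -disk -disk-path /var/lib/interactsh

# 2. Configure swap on the host VM so the kernel can page out idle
#    interactsh memory instead of OOM-killing the process
fallocate -l 2G /swapfile   # example size
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```

Note that inside a container, swap usage is also governed by the container runtime's memory/swap limits, so the swap must be usable by the container for this to help.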
Interactsh version:

Docker image: projectdiscovery/interactsh-server:v1.1.9
Current Behavior:

interactsh-server memory usage rises during runtime until the process is killed by the OOM killer or the VM crashes.

Expected Behavior:

I would expect some memory management that regularly cleans up interactsh-server memory, or a fix for the memory leak.

Steps To Reproduce:
I'm using this Dockerfile and interactsh-server configuration. I have to set memory limits in docker-compose.yml, otherwise the interactsh-server process freezes the host VM. With the limits in place, the process is instead killed by the OOM reaper and restarted.

Below are Grafana charts from cAdvisor + Prometheus, which monitor the containers on my VM.
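A memory limit of the kind described could look like the fragment below. This is a hypothetical sketch, not the reporter's actual file: the service name and the 2g value are illustrative, and `mem_limit` is the Compose v2 file-format key (Compose v3 uses `deploy.resources.limits.memory` instead).

```yaml
# Hypothetical docker-compose.yml fragment; values are examples only.
services:
  interactsh:
    image: projectdiscovery/interactsh-server:v1.1.9
    restart: unless-stopped   # bring the container back after an OOM kill
    mem_limit: 2g             # hard cap; the kernel OOM-kills the container at this limit
```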
Last 24 hours: you can see that the interactsh-server container reached the limit and got killed.

Last 7 days: periodic container restarts due to OOM are evident. Notice the slope of the rising memory load. Previously I ran interactsh-server with SMB enabled and assumed the SMB service was causing the rising memory load, but after disabling it the memory load still rises, though not as rapidly as with SMB enabled.