projectdiscovery / interactsh

An OOB interaction gathering server and client library
https://app.interactsh.com
MIT License

Rising memory load leads to the `interactsh-server` being killed by OOM reaper or VM crash #824

Open MMquant opened 6 months ago

MMquant commented 6 months ago

Interactsh version:

docker image: projectdiscovery/interactsh-server:v1.1.9

Current Behavior:

interactsh-server memory usage keeps rising during runtime until the process is killed by the OOM reaper or the VM crashes.

Expected Behavior:

I would expect the server to regularly reclaim memory during runtime, or the memory leak to be fixed.

Steps To Reproduce:

I'm using this Dockerfile:

FROM projectdiscovery/interactsh-server:v1.1.9

WORKDIR /app

RUN apk add --update --no-cache \
      iproute2=6.3.0-r0 \
      py3-pip \
      samba-client=4.18.9-r0

RUN pip3 install impacket==0.11.00

RUN mkdir -p /app/www

ADD config.yaml /app
ADD dns.yaml /app
ADD run.sh /app
ADD index.htm /app
ADD healthcheck.sh /app

ADD www /app/www/

ENTRYPOINT ["/app/run.sh"]

interactsh-server configuration:

# config.yaml
domain: [INTERACTSH_DOMAIN]
ip: PUBLIC_STATIC_IP
listen-ip: PUBLIC_STATIC_IP
eviction: 30
token: INTERACTSH_TOKEN
scan-everywhere: true
cert: /etc/letsencrypt/live/INTERACTSH_DOMAIN/cert.pem
privkey: /etc/letsencrypt/live/INTERACTSH_DOMAIN/privkey.pem
origin-ip-header: X-Real-Ip
dynamic-resp: true
custom-records: /app/dns.yaml
http-index: /app/index.htm
http-directory: /app/www/
disable-version: true
dns-port: 53
http-port: 1080
https-port: 10443
smtp-port: 25
smtps-port: 587
smtp-autotls-port: 465
ldap-port: 389
ldap: true
wildcard: true
ftp: true
ftp-port: 21
debug: false
metrics: false

I have to set memory limits in docker-compose.yml, otherwise the interactsh-server process freezes the host VM. With the limits in place, the container is killed by the OOM reaper and restarted instead of taking down the VM.

# docker-compose.yml
version: '3.3'
services:
  interactsh:
    restart: always
...
    mem_limit: 200m
    memswap_limit: 200m
...
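A side note on the limits above: in Docker, memswap_limit is the total of memory plus swap, so setting it equal to mem_limit (as in the snippet) disables swap for the container entirely. A minimal sketch of a swap-permitting variant (service name and sizes are illustrative, not taken from the actual deployment):

# docker-compose.yml (sketch; values illustrative)
services:
  interactsh:
    restart: always
    mem_limit: 200m        # hard RAM cap for the container
    memswap_limit: 400m    # RAM + swap total; must exceed mem_limit for swap to be usable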

Below are Grafana charts from cAdvisor + Prometheus, which monitor the containers on my VM.

Last 24 hours. You can see that the interactsh-server container reached the limit and got killed. (screenshot, 2024-03-11 11:37)

Last 7 days. The periodic container restarts caused by the OOM kills are evident; notice the slope of the rising memory load. I previously ran interactsh-server with SMB enabled and assumed the SMB service was causing the growing memory usage, but after disabling it the memory still keeps rising, though not as rapidly as with SMB enabled. (screenshot, 2024-03-11 11:37)
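For context, this is the usual cAdvisor + Prometheus pairing; a minimal sketch of the scrape configuration (job name and target are illustrative) and the metric the Grafana panels are built on:

# prometheus.yml (sketch)
scrape_configs:
  - job_name: cadvisor              # cAdvisor exports per-container metrics
    static_configs:
      - targets: ['cadvisor:8080']
# The panels above graph container_memory_working_set_bytes,
# filtered on the interactsh container name.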

Mzack9999 commented 6 months ago

Thanks for reporting this issue @MMquant - This will be worked on after the final implementation of https://github.com/projectdiscovery/utils/pull/362. Handling this OOM case in interactsh requires some more attention as one of the security requirement is that data should be encrypted and volatile in RAM, so it's indeed more prone to OOM-killing. As a temporary mitigation, while the internal design is being reworked, I would recommend to try with using disk-storage option (that should reduce the pressure on heap) and (if possible) have some swap configured, so that the OS would try to move first some idle memory pages from interactsh process before proceeding to kill it to free memory.