Suwayomi / Suwayomi-Server

A rewrite of Tachiyomi for the Desktop
Mozilla Public License 2.0

[Feature Request] Add a memory limit suggestion in README or a RAM limiter feature #481

Open JBKing514 opened 1 year ago

JBKing514 commented 1 year ago

What feature should be added to Tachidesk?

Add a memory limit suggestion in the README, or a RAM limiter feature.

Why/Project's Benefit/Existing Problem

I have been running Tachidesk in my home server's Docker environment for a while. Recently I noticed that Tachidesk consumes more and more memory the longer it runs (e.g. after leaving it overnight it can take up over 1 GB of extra memory). I don't know much about the project internals, so could you add a suggestion about memory limits to the README, or add a feature to limit memory usage? Thanks.

btw: To solve this temporarily I set a 512MB memory limit for my Tachidesk container and it works fine.

Trung0246 commented 1 year ago

Doesn't Java already have memory-related flags for this, namely -Xms and -Xmx?
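
For reference, -Xms sets the JVM's initial heap size and -Xmx caps its maximum heap. A rough sketch of passing them when launching the jar directly (the jar path is taken from the ps output later in this thread; the 512 MB value is just an example):

# Start the server with a 128 MB initial heap and a 512 MB heap ceiling
java -Xms128m -Xmx512m -Duser.home=/home/suwayomi -jar /home/suwayomi/startup/tachidesk_latest.jar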

SilverBull commented 1 year ago

> btw: To solve this temporarily I set a 512MB memory limit for my Tachidesk container and it works fine.

Can you post the instructions for setting the memory limit on the container? I have the same problem; I was actually restarting the container every few days because it kept eating RAM and I couldn't figure out why.
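
For a container started with plain docker run rather than Compose, one way to do this is Docker's own memory flags (the image name is the one used later in this thread; the container name is just an example):

# Cap a new container at 512 MB of RAM
docker run -d --name suwayomi --memory 512m ghcr.io/suwayomi/tachidesk

# Or apply a cap to an already-running container without recreating it
docker update --memory 512m --memory-swap 512m suwayomi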

darktorana commented 4 months ago

I'd absolutely like to back this one. My Docker container gets up to about 30 GB before it becomes so ridiculously slow that I have to restart it. It usually takes about 24-48 hours to reach 30 GB, depending on how much I use it.

Here's the container after less than 24 hours:

[d4rk ~]$ ps fauxww | grep suwayomi | grep -v grep
docker   1096402  238  8.5 54383404 16857612 ?   Ssl  Apr08 1971:28  \_ java -Duser.home=/home/suwayomi -jar /home/suwayomi/startup/tachidesk_latest.jar

It's using over 16 GB of RAM, which seems quite excessive! How would we go about setting a limit on it? I believe adding the -Xms & -Xmx flags would force garbage collection more often and vastly improve the performance of the application (at least for me).

Let me know if you need any more information about the issue.
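
If you'd rather cap the Java heap itself instead of (or in addition to) the container, one generic option is the standard JAVA_TOOL_OPTIONS environment variable, which any HotSpot JVM picks up automatically. A minimal Compose sketch, assuming the image's start command doesn't already pass its own -Xmx (values are illustrative):

services:
  suwayomi:
    image: ghcr.io/suwayomi/tachidesk
    environment:
      # The JVM logs "Picked up JAVA_TOOL_OPTIONS: ..." at start-up and applies these flags;
      # an explicit -Xmx on the java command line would take precedence over this value.
      JAVA_TOOL_OPTIONS: "-Xmx512m"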

Robonau commented 4 months ago

> It's using over 16 GB of RAM, which seems quite excessive! How would we go about setting a limit on it?

You limit it like you would any other container. Exactly how depends on how you are running it, but assuming Docker Compose:

services:
  suwayomi:
    image: ghcr.io/suwayomi/tachidesk
    ...
    deploy:
      resources:
        limits:
          memory: 512M
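
Once the limit is in place you can confirm it from the host; the MEM USAGE / LIMIT column should show the 512 MiB cap (the container name below is illustrative, Compose usually prefixes it with the project name):

# One-shot snapshot of memory usage and the enforced limit
docker stats --no-stream suwayomi
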
darktorana commented 4 months ago

Hi Robonau, excuse my stupidity. For reasons I can't possibly explain, I was thinking a container limit would only cap the space Java could grow into, so it would just run out of memory rather than trigger proper garbage collection the way -Xms/-Xmx would. Then I had my morning coffee and realised it's because I'm an idiot: obviously a container limit would force Java to do its proper memory collection, just like it would on any system without enough RAM.

tl;dr I'm an idiot.

pjft commented 2 months ago

I just installed this and, sure enough, after a few hours I saw memory being eaten up by the container. I came here to ask about this and you all had already brought it up. Thanks for the pointers - adding the limits to the docker-compose file seems to help for now!