timvisee / send

:mailbox_with_mail: Simple, private file sharing. Mirror of https://gitlab.com/timvisee/send
https://send.vis.ee
Mozilla Public License 2.0

Memory leak using Docker container #12

Open nodesocket opened 3 years ago

nodesocket commented 3 years ago

Can't 💯 confirm, but it looks like there may be a memory leak. I am hosting my own instance of send on AWS Lightsail using their container service (essentially ECS), and memory usage is increasing steadily and linearly. I'm running the latest version of send, v3.4.5, via the Docker container.

[Screenshots: Lightsail memory usage graphs, 2021-03-23]

timvisee commented 3 years ago

Thanks for the report! Yeah, that's definitely increasing.

Do your images show the total memory usage of the full host, of all running containers (I'm assuming you're running more, like this), or just the send container specifically?

I can confirm that I see some growth in memory usage on my public instance, though it seems to stop and even out after a while.

[Graph: memory usage of the full system running these containers over a 2-week period, from 32.2% to 33.4%, with a restart for an update in the middle, then 28.8% to 30.6%]
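
To separate send's own memory from everything else in graphs like these, one option is to log the Node process's figures from inside the container. A minimal sketch (not part of send; it would need to be preloaded into the server process, e.g. with `node -r`):

```js
// memlog.js -- illustrative sketch, not part of send. If preloaded into the
// server process (e.g. `node -r ./memlog.js server.js`), it logs the Node
// process's own memory every minute, so growth in the app can be told apart
// from growth in Redis or elsewhere on the host.
const toMB = (n) => (n / 1024 / 1024).toFixed(1) + ' MB';

setInterval(() => {
  const { rss, heapUsed, heapTotal, external } = process.memoryUsage();
  console.log(
    `[mem] ${new Date().toISOString()} rss=${toMB(rss)} ` +
    `heapUsed=${toMB(heapUsed)}/${toMB(heapTotal)} external=${toMB(external)}`
  );
}, 60 * 1000);
```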

nodesocket commented 3 years ago

The send Lightsail container service is only running send and redis:6.2.1-buster. I suppose it could be Redis that is increasing in memory usage, but honestly I'm not using send much (maybe a total of 10 uploads), so I'm not sure why Redis memory usage would be steadily increasing.

I could try switching to redis:6.2.1-alpine instead of using Debian Buster.
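
A quick way to check whether it is actually Redis that is growing is to poll INFO memory directly. A rough sketch, assuming the ioredis package and a reachable REDIS_URL (neither is part of send's own setup):

```js
// check-redis-memory.js -- illustrative sketch, not part of send.
// Prints Redis's own view of its memory use so it can be compared with the
// container metrics. Assumes `npm install ioredis` and a REDIS_URL env var.
const Redis = require('ioredis');

async function main() {
  const redis = new Redis(process.env.REDIS_URL || 'redis://localhost:6379');
  // INFO memory includes used_memory, used_memory_rss, used_memory_peak, ...
  const info = await redis.info('memory');
  console.log(info);
  redis.disconnect();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```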

Curious, where are you hosting your public instance?

nodesocket commented 3 years ago

@timvisee Perhaps related: if I don't set the env var FILE_DIR, the default is to write to the filesystem, right? What directory does it use by default? It's not possibly writing files into Redis, is it?

nodesocket commented 2 years ago

Indeed, this is still happening. It looks like the containers crash, which unfortunately also takes Redis down and expires all outstanding links 😢 😠. I'm running both send and redis in the same AWS Lightsail container task; I can send over that configuration if it helps.

The only metrics I really have are:

[Screenshots: Lightsail memory and CPU graphs, 2022-03-09]
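
On the expiring-links point: if the Redis container is not persisting to storage that survives a restart, everything it held is simply gone when it comes back up. A small sketch for checking what persistence is configured (again assuming ioredis and a REDIS_URL; not part of send):

```js
// check-redis-persistence.js -- illustrative sketch, not part of send.
// Shows whether RDB snapshots and/or AOF are enabled, which determines
// whether link metadata can survive a Redis restart at all (and even then
// only if the data dir sits on a volume that outlives the container).
const Redis = require('ioredis');

async function main() {
  const redis = new Redis(process.env.REDIS_URL || 'redis://localhost:6379');
  console.log('save:', await redis.config('GET', 'save'));
  console.log('appendonly:', await redis.config('GET', 'appendonly'));
  console.log('dir:', await redis.config('GET', 'dir'));
  redis.disconnect();
}

main().catch(console.error);
```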

nodesocket commented 2 years ago

@timvisee here is the AWS Lightsail container configuration, in case it helps. Would pulling Redis out of the same container service as send and running it in a dedicated container service help at all? I would be absolutely shocked if the memory leak were in redis:6.2.5-alpine3.14.

[Screenshots: Lightsail container configuration for send and redis, 2022-03-09]

timvisee commented 2 years ago

Would pulling Redis out of the same container service as send and running it in a dedicated container service help at all?

I don't think so. Either way, they're separate containers. A service is a virtual context to help 'link' things together.

I did monitor the send.vis.ee instance for a while again. I don't see any of this weirdness.

I wonder, are you running the container in production mode? If not, it won't be using Redis and will store entries internally instead. That might cause an issue like this.
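
Roughly, that selection has this shape (a simplified sketch of the behaviour described above, not the actual send source):

```js
// Simplified sketch of a production/development storage switch, to illustrate
// the behaviour described above; this is NOT the actual send source.
const Redis = require('ioredis');

function createMemoryStore() {
  // Development: metadata sits in an in-memory Map and vanishes on restart.
  const map = new Map();
  return {
    async set(id, meta) { map.set(id, meta); },
    async get(id) { return map.get(id); },
  };
}

function createRedisStore(url) {
  // Production: the same interface backed by Redis.
  const redis = new Redis(url);
  return {
    async set(id, meta) { await redis.set(id, JSON.stringify(meta)); },
    async get(id) { const v = await redis.get(id); return v && JSON.parse(v); },
  };
}

const store =
  process.env.NODE_ENV === 'production'
    ? createRedisStore(process.env.REDIS_URL || 'redis://localhost:6379')
    : createMemoryStore();
```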

Perhaps related: if I don't set the env var FILE_DIR, the default is to write to the filesystem, right? What directory does it use by default? It's not possibly writing files into Redis, is it?

Files are stored in a random temporary directory by default. See:

https://github.com/timvisee/send/blob/742b5de7e1c9322711b66acf82d0358104e6ece4/server/config.js#L173-L177
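
The pattern is roughly the following (an illustrative sketch; the linked config.js is the authoritative version). The file bytes go into that directory, not into Redis; Redis only holds the upload metadata.

```js
// Sketch of the default FILE_DIR behaviour described above; illustrative only,
// the linked server/config.js is the real source.
const os = require('os');
const path = require('path');
const crypto = require('crypto');

// If FILE_DIR is unset, fall back to a randomly named directory under the OS
// temp dir, e.g. /tmp/send-3fa94c1b, created fresh on each process start.
const fileDir =
  process.env.FILE_DIR ||
  path.join(os.tmpdir(), `send-${crypto.randomBytes(4).toString('hex')}`);

console.log('uploads will be stored in', fileDir);
```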

nodesocket commented 2 years ago

I wonder, are you running the container in production mode? If not, it won't be using Redis and will store entries internally instead. That might cause an issue like this.

I am setting the env var NODE_ENV to production, that should do it, right? I do wonder if something specific to Amazon Lightsail containers is at fault. I am not seeing it, though.

If you think it makes sense, I can try running send locally, leave it up for a few days, and see if I can replicate the memory leak.

timvisee commented 2 years ago

I am setting the env var NODE_ENV to production, that should do it, right? I do wonder if something specific to Amazon Lightsail containers is at fault. I am not seeing it, though.

Yes, that's right. I wonder whether that would affect it; I mean, I assume it's just a plain Docker container, right?

If you think it makes sense, I can try running send locally, leave it up for a few days, and see if I can replicate the memory leak.

That would be awesome. You might need to send some traffic to it though, in a similar pattern to your hosted instance.
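
For that, even a dumb loop that hits the instance on an interval produces steady light traffic (a sketch; the localhost:1443 address is an assumption about how the local container is mapped):

```js
// generate-traffic.js -- illustrative sketch for soaking a local instance.
// Hits the web root every few seconds so memory behaviour under light,
// continuous traffic can be watched for a few days. The localhost:1443
// address is an assumption about how the local container is mapped.
const http = require('http');

const TARGET = process.env.TARGET || 'http://localhost:1443/';

setInterval(() => {
  http
    .get(TARGET, (res) => {
      res.resume(); // drain the response; only the request itself matters here
      console.log(`${new Date().toISOString()} GET ${TARGET} -> ${res.statusCode}`);
    })
    .on('error', (err) => console.error('request failed:', err.message));
}, 5000);
```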

nodesocket commented 2 years ago

@timvisee just for fun, I tried switching Redis to the image tag redis:6.2.6-bullseye instead of using Alpine. Unfortunately, same behavior. This is a graph of memory usage over the last day in AWS Lightsail. Testing send locally is going to be a bit of work for me, but I will get around to it.

[Screenshot: Lightsail memory usage over the last day, 2022-03-15]

nodesocket commented 2 years ago

This has to be either something in the code or a problem with hosting the container on AWS Lightsail. It looks like memory usage grows for 7 days, then the process restarts. Upon restart, though, all the outstanding links expire, which is also a flag to me.

Memory

[Screenshot: Lightsail memory usage, 2022-03-28]

CPU

[Screenshot: Lightsail CPU usage, 2022-03-28]