Open nodesocket opened 3 years ago
Thanks for the report! Yeah, that's definitely increasing.
Do your images show the total memory usage of the full host, of all running containers (I'm assuming you're running more, like this), or just the `send` container specifically?
I can confirm that I see some growth in memory usage on my public instance, though it seems to stop and even out after a while.
(Over a 2-week period: 32.2% to 33.4%, with a restart for an update in the middle, then 28.8% to 30.6%. This is the memory usage of the full system running these containers.)
The send Lightsail container service is only running `send` and `redis:6.2.1-buster`. I suppose it could be Redis that is increasing memory usage, but honestly I'm not using send much (maybe a total of 10 uploads), so I'm not sure why Redis memory usage would be steadily increasing.
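If Redis is the suspect, one cheap way to test that hypothesis is to give it an explicit memory cap and watch whether it ever hits the ceiling. A sketch of the relevant `redis.conf` directives (these are standard Redis options, not anything Send-specific; the 100 MB cap is an arbitrary example, not a recommended value):

```conf
# redis.conf sketch: bound Redis memory, and once the bound is reached,
# evict the keys whose TTL expires soonest.
maxmemory 100mb
maxmemory-policy volatile-ttl
```

With a cap in place, `redis-cli INFO memory` (the `used_memory_human` field) tells you whether Redis itself is the process that keeps growing, or whether the growth is in the Node process.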
I could try switching to `redis:6.2.1-alpine` instead of using Debian Buster.
Curious, where are you hosting your public instance?
@timvisee perhaps related: if I don't set the env var `FILE_DIR`, the default is to write to the filesystem, right? What directory does it use by default? It's not possibly writing files into Redis, is it?
Indeed, this is still happening, and it looks like the containers crash, which unfortunately causes Redis to also crash, expiring all outstanding links 😢 😠. I'm running both send and Redis in the same AWS Lightsail container task. I can send over that configuration if it helps.
The only metrics I really have are the Lightsail memory-usage graphs:
@timvisee here is the AWS Lightsail container configuration, if it helps. Would pulling Redis out of the same container service as send and running it in a dedicated container service help at all? I would be absolutely shocked if the memory leak were in `redis:6.2.5-alpine3.14`.
> Would pulling Redis out of the same container service as send and running it in a dedicated container service help at all?
I don't think so. Either way, they're separate containers. A service is a virtual context to help 'link' things together.
I did monitor the send.vis.ee instance for a while again. I don't see any of this weirdness.
I wonder, are you running the container in production mode? If not, it won't be using Redis and will store entries internally instead. That might cause such an issue.
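A minimal sketch of what such an environment-based switch typically looks like in a Node app (illustrative only: `pickStore`, `memStore`, and the returned shapes are hypothetical, not Send's actual code):

```javascript
// Hypothetical sketch: choose a backing store based on NODE_ENV.
// Outside production mode, entries live in an in-process structure,
// so the Node process heap grows instead of Redis.
const memStore = new Map(); // grows with the process heap

function pickStore(env) {
  if (env === 'production') {
    return { kind: 'redis' }; // placeholder for a real Redis client
  }
  return { kind: 'memory', data: memStore };
}

const store = pickStore(process.env.NODE_ENV);
console.log(store.kind);
```

If the container weren't actually in production mode, this kind of fallback would make the `send` container itself (not Redis) the one leaking memory.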
> perhaps related, if I don't set the envar `FILE_DIR` the default is to write to the filesystem right? What directory does it use by default? It's not possibly writing files into redis is it?
Files are stored in a random temporary directory by default. See:
> I wonder, are you running the container in production mode? If not, it won't be using Redis and stores entries internally. It might cause such issue.
I am setting the envar `NODE_ENV` to `production`; that should do it, right? I do wonder if something specific to Amazon Lightsail containers is at fault, but I am not seeing it.
If you think it makes sense, I can try running `send` locally and leave it up for a few days and see if I can replicate the memory leak.
> I am setting the envar `NODE_ENV` to `production`, that should do it right? I do wonder if something specific to Amazon Lightsail containers is at fault. I am not seeing it though.
Yes, that's right. I wonder if it would affect it; I mean, I assume it's just a Docker container, right?
> If you think it makes sense, I can try running `send` locally and leave it up for a few days and see if I can replicate the memory leak.
That would be awesome. You might need to send some traffic to it though, in a similar pattern to your hosted instance.
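A throwaway loop for generating that traffic could look like this (a sketch: it assumes `ffsend`, timvisee's CLI client for Send, is installed, and that the local instance listens on port 1443; adjust the port, blob size, and pacing to mirror your hosted traffic pattern):

```shell
# Generate random 1 MB blobs and upload each one to a local Send instance.
# Falls back to a dry run if ffsend is not installed.
for i in 1 2 3 4 5; do
  head -c 1M /dev/urandom > "/tmp/send-test-$i"
  if command -v ffsend >/dev/null; then
    ffsend upload --host http://localhost:1443/ "/tmp/send-test-$i"
  else
    echo "dry run: would upload /tmp/send-test-$i"
  fi
  sleep 1
done
```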
@timvisee just for fun, I tried switching Redis to the image tag `redis:6.2.6-bullseye` instead of Alpine. Unfortunately, same behavior. This is a graph of memory usage for the last day in AWS Lightsail. Testing send locally is gonna be a bit of work for me, but I will get around to it.
This has to be either something in the code or a problem with hosting containers on AWS Lightsail. It looks like memory usage grows for 7 days, then the process restarts. Upon restart, though, all the outstanding links expire, which is also a red flag to me.
Can't 💯 confirm, but it looks like there may be a memory leak. I am hosting my own version of send on AWS Lightsail using their container service (essentially ECS). Memory usage is continuously increasing linearly. Running the latest version of send, `v3.4.5`, via Docker container.