Open · Lucuz91 opened this issue 3 years ago
Hey @Lucuz91 thanks for raising this issue. Given that you wrote "task" I will assume you're using ECS on Fargate (as you can also use EKS on Fargate, with Fargate being the compute engine). Have you seen https://aws.amazon.com/blogs/containers/how-amazon-ecs-manages-cpu-and-memory-resources/ in this context?
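One knob that applies regardless of the runtime: on Fargate the task-level `memory` is the overall ceiling, but each container in the task definition can also carry its own hard (`memory`) and soft (`memoryReservation`) limit, so the proxy can be capped below the task total. A rough, illustrative fragment (names, image placeholders, and sizes are made up, not taken from your setup):

```json
{
  "family": "nginx-with-sigv4-proxy",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "2048",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "<nginx-image>",
      "memoryReservation": 1024
    },
    {
      "name": "sigv4-proxy",
      "image": "<sigv4-proxy-image>",
      "memory": 512,
      "memoryReservation": 256
    }
  ]
}
```

With a container-level `memory` hard limit, ECS kills the container when it exceeds that value rather than letting it consume the whole task allocation. Separately, if the proxy image happens to be built with Go 1.19 or later, the Go runtime also honors a `GOMEMLIMIT` environment variable (e.g. `400MiB`) as a soft memory limit, which could be worth trying.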
Hey @mhausenblas,
Yes, I've seen that blog post. My concern is actually that the aws-sigv4-proxy container sees the total memory set on the task, so it never starts garbage collection because it believes it has more RAM than it actually has.
On our other containers (Java, in that case) we set the heap size based on the task's memory value, to be sure the garbage collector starts working before the container runs out of memory (as described in this post: https://alvinalexander.com/blog/post/java/java-xmx-xms-memory-heap-size-control/). Is there a way to do the same here?
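For reference, on the Java services that is just an environment variable on the container definition, something along these lines (values are illustrative, not our actual settings):

```json
{
  "name": "java-service",
  "environment": [
    { "name": "JAVA_TOOL_OPTIONS", "value": "-Xms256m -Xmx1024m" }
  ]
}
```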
Hi @Lucuz91 ,
Did you figure out how to manage the memory on sigv4?
Hi @calsaviour,
No, in the end I built my own container in Python that signs the URLs to S3.
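Roughly, the signer is just a small service that returns a presigned URL via boto3, something like this (a simplified sketch, not the production code; the bucket name, port, and redirect behaviour are just for illustration):

```python
# Minimal S3 URL-signing service: redirects the caller to a presigned GET URL.
# Assumes AWS credentials come from the task role / environment.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse
import os

import boto3

BUCKET = os.environ.get("S3_BUCKET", "my-bucket")  # illustrative default
s3 = boto3.client("s3")


class SignHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat the request path (minus the leading slash) as the object key.
        key = urlparse(self.path).path.lstrip("/")
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=300,  # signed URL valid for 5 minutes
        )
        # Redirect the caller to the signed URL.
        self.send_response(302)
        self.send_header("Location", url)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SignHandler).serve_forever()
```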
I see. @Lucuz91, did you set a max memory, or did you let the garbage collector handle the memory management?
Hi,
I am using this container together with NGINX to sign requests to S3. It runs in a Fargate task with a total of 2 GB of RAM (split between NGINX and this container), but the memory behavior is strange: it looks like the garbage collector never runs, and after a while the container dies from out of memory.
What is the recommended memory value? Is there a way to set the maximum amount of memory it can use?
Thanks