viniarck opened 1 year ago
I decided to watch the memory usage while running kytos with the e2e tests, as I thought it would be a good stress test of kytos. For me, memory usage hovered around 200 to 300 MiB.
```
CONTAINER ID   NAME                             CPU %   MEM USAGE / LIMIT     MEM %    NET I/O         BLOCK I/O        PIDS
dbfff37e7190   kytos-end-to-end-tests-kytos-1   2.86%   323.9MiB / 3.016GiB   10.49%   124MB / 168MB   134MB / 86.8MB   271
```
However, it should be noted this was from an image built for e2e tests relevant to my PRs, and not the official AmLight image. I'll try running the e2e tests again with the AmLight docker image and see if I get any different behaviour there.
Running e2e with the AmLight image in particular gets me around 200 to 250 MiB of memory usage.
```
CONTAINER ID   NAME                             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O   PIDS
af892da51559   kytos-end-to-end-tests-kytos-1   8.36%   243.6MiB / 3.016GiB   7.89%   86.4MB / 62.1MB  0B / 66MB   221
```
If anyone could provide a procedure for running in an AmLight-like production scenario, I could do a deeper dive (assuming it reproduces the high memory usage).
I'm trying to run `kytosd` with scalene, but the interactive terminal keeps closing, so I can't keep the program running in the foreground. I think I remember in the past suggesting we move away from the requirement of an interactive terminal when running as a foreground process, specifically for running in docker containers, so that the docker container's lifetime could be directly tied to kytos.
@Ktmi, it's a good thing that on e2e at least you're seeing around 200-250 MB of RAM used. Maybe a future step is also to see whether it stays that way in a long-running execution on a server (over a few days while still running tests).
Regarding the memory profiler, it might be worth trying out other ones that might still be compatible with `kytosd` in the foreground as it is. Other than that, you might also try running it in background mode. Yes, we do have an issue in our backlog to optionally run `kytosd` in the foreground without the embedded ipython shell (https://github.com/kytos-ng/kytos/issues/104), but we'll likely not prioritize it in the short term.
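One stdlib option that sidesteps the interactive-terminal problem entirely is `tracemalloc`, which records allocations in-process. A minimal sketch, assuming you'd wire these calls into kytosd startup yourself (the function names, snapshot granularity, and top-N count here are my own, not part of kytos):

```python
import tracemalloc


def start_tracing(nframes: int = 5) -> None:
    """Begin recording allocations, keeping up to `nframes` stack frames per site."""
    tracemalloc.start(nframes)


def top_allocations(limit: int = 10) -> list[str]:
    """Return the `limit` largest allocation sites seen since start_tracing()."""
    snapshot = tracemalloc.take_snapshot()
    stats = snapshot.statistics("lineno")  # group allocations by file and line
    return [str(stat) for stat in stats[:limit]]


if __name__ == "__main__":
    start_tracing()
    data = [bytes(1024) for _ in range(1000)]  # deliberate allocation to inspect
    for line in top_allocations(3):
        print(line)
```

Calling `top_allocations()` periodically (e.g. from a timer thread or a debug API endpoint) would let you watch whether allocation sites keep growing without needing any foreground terminal.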
@italovalcy, would you also happen to have a chart of RAM usage of this container in prod? It'd be great to know whether that `docker stats` sample of memory usage was just showing a spike (with garbage collection eventually doing its job) or whether it might indeed be leaking. That way @Ktmi can also cross-check with his findings from the local e2e test runs and from the memory profiler he's trying to use. Thanks.
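To build such a chart from periodic `docker stats` samples, the `MEM USAGE / LIMIT` column needs to be converted into numbers. A hypothetical helper for that (the function names and unit table are my own, not part of any kytos tooling; docker prints binary units like `MiB` for memory and decimal ones like `MB` for I/O):

```python
# Suffixes docker uses, longest first so "MiB" is matched before "B".
_UNITS = {
    "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3,
    "kB": 1000, "MB": 1000**2, "GB": 1000**3,
    "B": 1,
}


def parse_size(text: str) -> float:
    """Convert a docker stats size such as '243.6MiB' into bytes."""
    for unit, factor in sorted(_UNITS.items(), key=lambda kv: -len(kv[0])):
        if text.endswith(unit):
            return float(text[: -len(unit)]) * factor
    raise ValueError(f"unrecognized size: {text!r}")


def parse_mem_usage(column: str) -> tuple[float, float]:
    """Split a '243.6MiB / 3.016GiB' column into (usage_bytes, limit_bytes)."""
    usage, limit = (part.strip() for part in column.split("/"))
    return parse_size(usage), parse_size(limit)
```

Sampling `docker stats --no-stream` every minute, running each row's memory column through `parse_mem_usage`, and plotting the result over a few days would show whether usage plateaus or keeps climbing.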
Typically in development with a `ring` or `linear,3` scenario with just a few EVCs, `kytosd` consumes 150-200 MB. However, at AmLight, `docker ps` showed the container using `806.7MiB / 9.727GiB`. That docker image is based on Debian and is probably running a few more processes, but that shouldn't account for such a large difference, so this needs a bit of research. Either way, it'd be great to also have a development baseline with the `linear,3` and `amlight` e2e topologies, running `kytosd` under a memory profiler like https://github.com/bloomberg/memray for some time, and then observe high allocations and map any potential issues that might surface.

I'll set this as `priority_medium` without a target since it's not immediately impacting; eventually it'll probably get picked up in the `2023.2` scope too, we'll see.