jmcphers opened 1 month ago
From an Instruments analysis, we've got:
About 300mb taken up by the virtual document generations for namespaces that don't have srcrefs: https://github.com/posit-dev/ark/pull/251.
We should investigate whether we need to keep this much memory; we might be leaking unneeded bits. I remember having to deal with leaks through globals managed by the compiler, and since we are recompiling functions, we should keep that in mind. https://github.com/r-lib/memtools might come in handy.
2mb allocs for each thread. We should try to reduce them, for instance by using a single tokio executor for the whole process.
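As a minimal sketch of that idea (not ark's actual code; `shared_runtime` and all parameter values here are assumptions for illustration), a single runtime can be lazily initialized once and handed out everywhere. Tokio's worker threads default to 2 MiB stacks, which lines up with the per-thread allocations above, so shrinking the stack size is another lever:

```rust
// Hypothetical sketch, not ark's code: one process-wide tokio runtime
// instead of one runtime (and its worker threads) per component.
use std::sync::OnceLock;
use tokio::runtime::{Builder, Runtime};

fn shared_runtime() -> &'static Runtime {
    static RT: OnceLock<Runtime> = OnceLock::new();
    RT.get_or_init(|| {
        Builder::new_multi_thread()
            // Cap the worker pool; every extra thread costs its stack.
            .worker_threads(2)
            // Tokio workers default to 2 MiB stacks; this value is an
            // assumption to show the knob, not a tested recommendation.
            .thread_stack_size(512 * 1024)
            .enable_all()
            .build()
            .expect("failed to build shared tokio runtime")
    })
}

fn main() {
    // Every caller blocks on (or spawns onto) the same runtime.
    let n = shared_runtime().block_on(async { 6 * 7 });
    assert_eq!(n, 42);
}
```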
A curious single 64mb alloc in `zmq_bind()` via `getaddrinfo()`. It seems a matching `freeaddrinfo()` is missing (either due to the way we're calling/managing zmq or because of a bug in zmq).
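For illustration, the pairing that appears to be skipped looks like this (a sketch via the libc crate; the address, port, and surrounding code are invented, and this is not zmq's actual call site):

```rust
// Sketch only: every successful getaddrinfo() must be matched by a
// freeaddrinfo(), or the returned addrinfo list leaks.
use std::ffi::CString;
use std::ptr;

fn main() {
    let node = CString::new("127.0.0.1").unwrap();
    let service = CString::new("5555").unwrap();
    let mut res: *mut libc::addrinfo = ptr::null_mut();

    let rc = unsafe {
        libc::getaddrinfo(node.as_ptr(), service.as_ptr(), ptr::null(), &mut res)
    };
    if rc == 0 {
        // ... resolve addresses and bind the socket here ...

        // The step implicated above: release the addrinfo list.
        unsafe { libc::freeaddrinfo(res) };
    }
}
```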
A big chunk of the overall footprint is just from forcing lazy bindings of R packages.
Using:

```r
lapply(rlang::ns_registry_env(), \(x) eapply(x, force))
```

we see about 100mb just for base packages and rlang.
System details:
Positron and OS details:
Interpreter details:
Any R version.
Describe the issue:
The `ark` process uses a lot of memory.

Steps to reproduce the issue:
Just start Positron with R enabled, then use the operating system's Activity Monitor to check memory usage for the `ark` process running under Positron. On a cold boot, with no R objects in memory, ark currently uses almost 400mb.

Ark's equivalent in RStudio is `rsession`. In a comparable environment, it's only 50mb.

Because we start one `ark` process per R session and it's easy to start several sessions, this can cause Positron to use a lot of memory quickly. We should investigate whether this usage is unavoidable.

Expected or desired behavior:
`ark` should use less memory, or we should understand why it has to use so much.