Open AvishekMondalQC opened 2 weeks ago
The issue is related to our solvers. When packages are declared as PyPI dependencies, they only have to be solved against a known Python version, which comes from the conda solve. Adding those packages to the conda solve instead means testing many more options within that single solve, resulting in one big solve rather than two small serial solves.
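For illustration, the two shapes look roughly like this (hypothetical package names, not the reporter's actual pixi.toml):

```toml
# One big solve: everything goes through the conda solver.
[dependencies]
python = "3.11.*"
numpy = "*"
scipy = "*"
```

```toml
# Two smaller serial solves: conda resolves only Python, then the
# PyPI packages are resolved against that already-known interpreter.
[dependencies]
python = "3.11.*"

[pypi-dependencies]
numpy = "*"
scipy = "*"
```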
One big solve will never get down to the amount of memory used by two smaller solves, but the memory concern is valid.
I'll ask @baszalmstra if he has any ideas on how to implement a memory limit, as I'm unsure how that would work.
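For what it's worth, on POSIX systems a hard cap can be applied from outside the solver itself. This is only a sketch of the OS mechanism, shown via Python's stdlib resource module, not a proposed pixi API:

```python
import resource

def cap_memory(max_bytes: int) -> int:
    """Cap this process's address space (RLIMIT_AS). Allocations past
    the cap fail with MemoryError instead of exhausting the machine.
    Returns the soft limit actually installed."""
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    # The soft limit may never exceed the hard limit.
    new_soft = max_bytes if hard == resource.RLIM_INFINITY else min(max_bytes, hard)
    resource.setrlimit(resource.RLIMIT_AS, (new_soft, hard))
    return new_soft

if __name__ == "__main__":
    cap_memory(512 * 1024 * 1024)  # 512 MiB, an arbitrary example cap
    try:
        buf = bytearray(1024 * 1024 * 1024)  # try to allocate 1 GiB
        print("allocation succeeded (cap not enforced on this OS)")
    except MemoryError:
        print("allocation rejected by the cap")
```

A child process inherits the limit, so a wrapper could set it before exec'ing the solver; whether the solver can fail gracefully on an out-of-memory condition is a separate question.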
Checks
[x] I have checked that this issue has not already been reported.
[x] I have confirmed this bug exists on the latest version of pixi, using pixi --version.

Reproducible example
With the above pixi.toml file, when I run pixi install and monitor memory usage using htop, I see that RAM goes beyond 5 GB. But when I run pixi install using the following pixi.toml file and monitor RAM using htop, it does not exceed 2 GB.

Issue description
I do not understand why there is such a discrepancy in memory usage when downloading the same packages from conda versus when getting them from PyPI. How would I go about tracking down the source of the discrepancy?
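One way to quantify the discrepancy, rather than eyeballing htop, is to record the peak resident set size of each pixi install run. A minimal stdlib sketch, assuming a Unix-like system:

```python
import resource
import subprocess
import sys

def peak_child_rss(cmd: list[str]) -> int:
    """Run cmd to completion and return the peak resident set size
    among child processes this process has waited on. Note that
    ru_maxrss units are platform-dependent: KiB on Linux, bytes on
    macOS."""
    subprocess.run(cmd, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

if __name__ == "__main__":
    # e.g. peak_child_rss(["pixi", "install"]) with each pixi.toml in
    # turn; here a small Python child stands in for pixi.
    rss = peak_child_rss([sys.executable, "-c", "buf = bytearray(10**7)"])
    print(f"peak child RSS: {rss}")
```

On Linux, GNU time (`/usr/bin/time -v pixi install`) reports the same figure as "Maximum resident set size".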
Additionally, is it possible to specify the maximum amount of memory or the number of workers that the pixi install process uses?

Expected behavior
I would expect the memory usage of pixi install to be roughly the same for both pixi.toml files.