whlavina opened this issue 7 months ago
Hey @whlavina,

Thank you for this informative write-up; we've noticed growing interest from the HPC community, which we love.

`.pixi` folders in the projects. `dependabot` or `renovate` on the projects could help in the future to update the projects, which in turn automatically updates the environments. And in combination with the previously mentioned "distributions", you could force users onto specific versions of accepted dependencies. To force the distribution onto all environments, the upcoming mirror functionality might help: https://github.com/prefix-dev/pixi/pull/931.

These are some really organization-specific requests; as mentioned, we are looking for partners to design with. If you would be open to a call, we would happily discuss a partnership where we could help each other. Email us at hi@prefix.dev, or join our Discord and send a personal message to me.
Thank you for the prompt and optimistic reply! I hope you don't mind that I edited my comment to add an item (7) for logging. :-)
Let me discuss within our organization a little more before the next step of reaching out to your team: I opened this issue given clear blockers that I saw for our adoption of Pixi, and your reply will help me promote the idea and gain support.
Great, I reacted to (7) in my initial reply for completeness. Looking forward to your next steps ;). If you need any more material or answers to questions that come up, let us know!
Hi, another related use case. We generally provide centrally managed environments which 99% of users use by default (stored on a shared filesystem). `pixi` seems to make strong assumptions about the conda environment living inside of a project, which is not how we prefer to install our environments. Is there any way that we can use pixi, or are we stuck on mamba? Is there a way to "activate" one of the environments, so that stuff just works without having to do `pixi run`?
I'm not sure about the use case exactly, but given a `pixi.toml`, you can activate a shell using `pixi shell` or `pixi shell -e <env>`. When navigating to other directories, even ones containing another `pixi.toml`, the original should be respected. This even works for tasks, although if you use an activated shell in a directory with a different pixi manifest you will get a warning.

It's simple to add a command that uses this feature with the `--manifest-path` flag to get the kind of activation that you are used to. However, we do want to support a different workflow with pixi, so if you want the more vanilla conda experience, conda or mamba are your best bet!
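A minimal sketch of such a command, assuming the shared environments live under a hypothetical `/shared/pixi-envs` directory (the path and function name are illustrative, not part of pixi itself):

```shell
# Hypothetical "activation" wrapper for centrally managed pixi projects
# stored on a shared filesystem. Users run e.g.:  activate_env bio2024
activate_env() {
    # pixi shell spawns a subshell with the environment active;
    # --manifest-path points it at a manifest outside the current directory,
    # and -e selects a named environment (falling back to "default").
    pixi shell --manifest-path "/shared/pixi-envs/$1/pixi.toml" -e "${2:-default}"
}
```

Dropped into a site-wide shell profile, this gives users something close to `conda activate` without them ever needing to `cd` into the shared project or know where it lives.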
Problem description
I work in Bioinformatics at the federal government, where we need to support a large staff of scientific researchers who use High-Performance Computing (HPC) in a regulatory setting with constraints regarding application security. I think Pixi is very close to providing a solution that could replace our existing tools and practices for making scientific software available to the userbase.
I would love to see support for a common environment with curated, centrally managed dependencies shared on a multi-user system. The requirements as I see them in our organization:
Giving some ideas for implementation:
I encourage looking at the solutions offered by these two tools, which are common in HPC environments:
For another take on the use case, see: