mamba-org / mamba

The Fast Cross-Platform Package Manager
https://mamba.readthedocs.io
BSD 3-Clause "New" or "Revised" License

Implement pixi-style shell activation #2892

jonashaag opened this issue 1 year ago (status: open)

jonashaag commented 1 year ago

See https://github.com/prefix-dev/pixi/pull/316

Would fix:

Would require fixing:

iamthebot commented 1 year ago

Hmm, would this work in Docker though (where multiple shells may be invoked)? The current approach effectively adds an init block in .bashrc/.zshrc that sources mamba.sh (and conda.sh), which calls a hook command and sets env vars. These env vars will be effectively the same across shell invocations if no activate command takes place (e.g., defaulting to the base env).

It would be neat to have mamba still activate a default (e.g., base) env when started. The problem is that this would be blocking if the shell is run in a pty, right? E.g., if mamba activate is called in a .zshrc or .zprofile.

Also worth considering any possible performance implications of running in a PTY (especially for I/O buffering).

I'm all for cleaning up the shell madness that's been the status quo, but curious if these questions can be addressed.

Another option is to take this approach only for interactive use of mamba, and to make it easier to run things in mamba envs non-interactively (e.g., return the run command so it can be evaluated in the current shell; this would be very useful when running production workloads in a container).

jonashaag commented 1 year ago

cc @wolfv @baszalmstra @ruben-arts any input on these questions?

(Sorry for pinging all of you, I wasn't sure who's most knowledgeable about this feature)

wolfv commented 1 year ago

Some notes:

iamthebot commented 1 year ago

> pixi always uses cmd.exe or bash for activation in a subprocess, then prints the environment variables after activation. We collect the variables and create a new script for the "target interpreter" (e.g. xonsh / fish / ...) and then source that (this script only contains the environment variables).

@wolfv does that mean that, e.g., zsh/bash functions defined in other sourced scripts wouldn't work? As you pointed out later, things being sourced in the shell (even conditionally) may not work with this approach either; worth thinking about. This is not a drawback of using a pty, though, only a drawback of compiling env variables into a shell script that is then sourced. I think zsh completions would not work with this approach either, since they rely on a different mechanism. It's possible to handle these edge cases individually, though.
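For reference, the env-capture approach described above (run activation in a subprocess, collect the resulting variables, re-emit them for the target interpreter) can be sketched in a few lines of Python. This is a rough sketch, not pixi's or mamba's actual code; the function names are mine, and quoting edge cases (values containing quotes, etc.) are ignored:

```python
import subprocess

def capture_activation_env(activation_script):
    """Source an activation script in a bash subprocess and collect
    the resulting environment variables."""
    # 'env -0' NUL-separates entries so values containing newlines survive.
    out = subprocess.run(
        ["bash", "-c", f'source "{activation_script}" && env -0'],
        capture_output=True, check=True,
    ).stdout
    return dict(
        entry.split("=", 1)
        for entry in out.decode().split("\0")
        if "=" in entry
    )

def to_fish_script(env):
    """Re-emit the captured variables for a different target interpreter
    (fish here, as an example) -- the script only sets env vars."""
    return "\n".join(f'set -gx {k} "{v}"' for k, v in env.items())
```

This also makes the limitation discussed above visible: only environment variables survive the round trip, so shell functions, aliases, and completions defined by the sourced scripts are lost.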

> For the PTY stuff, it took a while to "get it right": we had versions with relatively high CPU usage. We used htop for the "stress test", as it does random drawing / screen refreshes every so often. Poetry uses the same approach. We tried a few alternatives, but in the end resorted to "the Unix way" of doing it, with select calls that wait on the two pipes (either stdin from the console or stdout from the pseudo-terminal). If either of them is "ready", we read the buffer and forward it. It was not easy to get to, but it turned out simpler than I thought, because I wasn't very familiar with Unix pipes, tbh.

This seems reasonable. We're doing the same thing to run mamba interactively as a subprocess (in a pty) too. Unfortunately we had to do this in Python, which involved forking the pty code from the stdlib and improving on it, because it has had open issues for 10+ years at this point (e.g., it doesn't handle window resizing).
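A minimal sketch of that select-based loop, using Python's stdlib pty module (simplified to only forward the child's output back to the caller; a real interactive version would also add the console's stdin fd to the select set and forward keystrokes to the master):

```python
import os
import pty
import select

def run_in_pty(argv):
    """Spawn argv attached to a pseudo-terminal and collect its output,
    using select() to block until the pty master fd is readable --
    no busy-polling, hence no idle CPU usage."""
    pid, master = pty.fork()
    if pid == 0:  # child: exec the command inside the pty
        os.execvp(argv[0], argv)
    chunks = []
    while True:
        # Wait until the pty master has data (or EOF) for us.
        ready, _, _ = select.select([master], [], [])
        if master in ready:
            try:
                data = os.read(master, 4096)
            except OSError:  # Linux raises EIO once the child side closes
                break
            if not data:  # macOS signals EOF with an empty read instead
                break
            chunks.append(data)
    os.close(master)
    os.waitpid(pid, 0)
    return b"".join(chunks)
```

Note that, as mentioned above, the stdlib helpers have long-standing gaps (e.g. no SIGWINCH/window-resize propagation), which is why a production version ends up carrying its own fork of this logic.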

> For the container use case - haven't really tried it yet. One could also imagine a bunch of other ways, e.g. writing out the activation script at docker build time and sourcing that without micromamba being involved.

Maybe I'm a bit more opinionated on this one as an end user that uses conda+mamba heavily in containers. The existing shell-init-based approach is really painful in containers because you either:

1) Assume a different user when running the workload in the container, so the .bashrc/.zshrc won't even be loaded,
2) Don't run the command in a shell at all, or
3) Want to invoke the command in a given environment using some other kind of runner (e.g., watchdog).

So what people end up doing is figuring out the path to the Python interpreter and invoking it directly. This works for most cases but won't work if you also need to, e.g., update your LD_LIBRARY_PATH. A practical case where this matters is when your environment has CUDA in it. The same issue applies to R environments, where activate sets a bunch of other env vars.

The "dream" would be, given a command and an environment, for mamba to directly output the env vars + run command.

Then I could do something like: $(mamba get-run-command -n <env> <command>). No pty, etc., to add even minuscule overhead once the command is running (this could matter for some use cases, e.g., ML model serving).
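To make the idea concrete, the eval-able output such a subcommand might produce could be assembled along these lines. This is purely hypothetical: the function name and output format are my assumptions, not an existing mamba feature:

```python
import shlex

def emit_run_command(env, argv):
    """Sketch of the 'dream' output: one eval-able shell string that
    exports the environment's variables and then exec's the command,
    leaving no pty or wrapper process behind.
    Hypothetical -- not a real mamba subcommand."""
    exports = "; ".join(
        f"export {k}={shlex.quote(v)}" for k, v in env.items()
    )
    # exec replaces the calling shell, so the workload runs with zero
    # extra processes between it and the container runtime.
    return f"{exports}; exec {shlex.join(argv)}"
```

Because the string sets all activation variables (LD_LIBRARY_PATH, R-related vars, etc.), it would cover the cases above where invoking the interpreter by path is not enough.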

jonashaag commented 1 year ago

For scripted (non-interactive) use cases, why would you want to use activation in the first place? Why not use run?