JohannesWiesner opened 1 year ago
Note: It is currently possible to post-hoc activate the custom environment by executing `source activate csp` once the container is running. It is not possible to activate the environment in the running container using `conda activate csp`. This will produce:

```
csp@5b5b1344459e:/$ conda activate csp

CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run

    $ conda init <SHELL_NAME>

Currently supported shells are:
  - bash
  - fish
  - tcsh
  - xonsh
  - zsh
  - powershell

See 'conda init --help' for more information and options.

IMPORTANT: You may need to close and restart your shell after running 'conda init'.
```
2nd note: Even after adding

```shell
--env LD_LIBRARY_PATH="/opt/miniconda-latest/envs/csp:$LD_LIBRARY_PATH" \
--run-bash "source activate csp"
```

to the latest script, it still does not work. This makes me think that it must have something to do with the different neurodocker versions (`kaczmarj/neurodocker:0.7.0` vs. `repronim/neurodocker:0.9.4`), or with the fact that the older version still had the `activate=True` option, which somehow did the job. So the following lines will also not activate any env:
```shell
generate_docker() {
  docker run -i --rm repronim/neurodocker:0.9.4 generate docker \
    --base-image neurodebian:stretch-non-free \
    --arg DEBIAN_FRONTEND='noninteractive' \
    --pkg-manager apt \
    --install opts="--quiet" \
      gcc \
      g++ \
      octave \
    --spm12 \
      version=r7771 \
    --freesurfer \
      version=7.1.1 \
    --copy $conda_yml_file /tmp/ \
    --miniconda \
      version=latest \
      yaml_file=/tmp/$conda_yml_file \
      env_name="csp" \
    --env LD_LIBRARY_PATH="/opt/miniconda-latest/envs/csp:$LD_LIBRARY_PATH" \
    --run-bash "source activate csp" \
    --user csp \
    --run 'mkdir /home/csp/data && chmod 777 /home/csp/data && chmod a+s /home/csp/data' \
    --run 'mkdir /home/csp/output && chmod 777 /home/csp/output && chmod a+s /home/csp/output' \
    --run 'mkdir /home/csp/code && chmod 777 /home/csp/code && chmod a+s /home/csp/code' \
    --run 'mkdir /home/csp/.jupyter && echo c.NotebookApp.ip = \"0.0.0.0\" > /home/csp/.jupyter/jupyter_notebook_config.py'
}
```
3rd note: After comparing the Dockerfiles generated by the old script with `activate=True` and without it, I found out that `activate=True` is responsible for adding these two lines to the Dockerfile, which apparently do the magic:

```diff
- && sync \
- && sed -i '$isource activate csp' $ND_ENTRYPOINT
```
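To make the mechanism concrete: GNU sed's `$i` address inserts a line just before the last line of a file, so the old behavior amounted to injecting `source activate csp` right before the entrypoint's final command. A minimal sketch, using a mock file in place of `$ND_ENTRYPOINT` (whose real content may differ):

```shell
# Mock entrypoint standing in for $ND_ENTRYPOINT (real content may differ)
printf '%s\n' '#!/usr/bin/env bash' 'set -e' 'exec "$@"' > /tmp/nd_entrypoint.sh

# '$i' = insert before the last line (GNU sed syntax, as in the Dockerfile)
sed -i '$isource activate csp' /tmp/nd_entrypoint.sh

cat /tmp/nd_entrypoint.sh
```

After this, `source activate csp` sits immediately before the `exec "$@"` line, so every container start activates the env before handing control to the requested command.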
Why is `activate=True` not available anymore in newer neurodocker versions? Could we re-introduce it?
@kaczmarj: I found a workaround for now by using `--run 'echo source activate name_of_the_env >> /home/user_name/.bashrc'`.

Example (make sure that `conda_yml_file` is a path to a `.yml` file of your choice):
```shell
generate_docker() {
  docker run -i --rm repronim/neurodocker:0.9.4 generate docker \
    --base-image neurodebian:stretch-non-free \
    --yes \
    --pkg-manager apt \
    --install opts="--quiet" \
      gcc \
      g++ \
      octave \
    --spm12 \
      version=r7771 \
    --freesurfer \
      version=7.1.1 \
    --copy $conda_yml_file /tmp/ \
    --miniconda \
      version=latest \
      yaml_file=/tmp/$conda_yml_file \
      env_name=csp \
    --user csp \
    --run 'mkdir /home/csp/data && chmod 777 /home/csp/data && chmod a+s /home/csp/data' \
    --run 'mkdir /home/csp/output && chmod 777 /home/csp/output && chmod a+s /home/csp/output' \
    --run 'mkdir /home/csp/code && chmod 777 /home/csp/code && chmod a+s /home/csp/code' \
    --run 'mkdir /home/csp/.jupyter && echo c.NotebookApp.ip = \"0.0.0.0\" > /home/csp/.jupyter/jupyter_notebook_config.py' \
    --workdir /home/csp/code \
    --run 'echo source activate csp >> /home/csp/.bashrc'
}
```
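The reason this workaround works: an interactive non-login bash sources `~/.bashrc` (or the file given via `--rcfile`) at startup, so whatever is appended there runs automatically when you get a shell in the container. A docker-free sketch of just that mechanism, using a marker `echo` as a stand-in for `source activate csp`:

```shell
# Stand-in rc file; in the image this line would be "source activate csp"
rcfile=$(mktemp)
echo 'echo ENV_ACTIVATED' >> "$rcfile"

# -i forces an interactive shell, which sources the rc file before
# running the command given with -c
bash --rcfile "$rcfile" -i -c 'true' 2>/dev/null
```

This is also why the workaround only takes effect for interactive shells (e.g. `docker run -it IMAGE`), not for one-off `docker run IMAGE some_command` invocations.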
I will close #16 in our repository for now (though you might find useful links there). I currently just follow this Stack Overflow post by user merv, who in general seems to know a lot about conda/Docker (he has already helped me a couple of times on SO).
General takeaways:

- `conda activate` (although generally recommended over `source activate`) doesn't really do what we want.
- Activation can instead be set up via `.bashrc` or `.profile` files.

A possible solution for this issue: reintroduce `--activate=true` and add a way for users to specify which environment they would like to activate for which users (could be achieved by providing a tuple? Something like `--activate "csp" "john.doe jane.doe"`?).
@kaczmarj @satra - looks like `activate=True` was removed in #378. Do you see any reason right now not to bring it back?
we could bring it back but there are some sharp edges. we would have to test that we are activating the environment correctly. the `conda` command will not be available in a `/bin/sh` shell in the docker image during build, so `conda activate ENV` would not work on its own. also the shell is started fresh in every `RUN` instruction, so activating the environment would not persist to the next `RUN` layer.
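The non-persistence across `RUN` layers can be seen without docker at all: each `RUN` is a fresh shell process, so environment changes made by one invocation are gone in the next. A minimal illustration, with plain subshells standing in for two `RUN` instructions (`env -u` just makes sure the variable isn't inherited from the host):

```shell
# First "RUN": activation changes the environment of this shell only
env -u CONDA_DEFAULT_ENV sh -c \
  'export CONDA_DEFAULT_ENV=csp; echo "first: ${CONDA_DEFAULT_ENV:-unset}"'

# Second "RUN": a fresh shell, the variable is gone again
env -u CONDA_DEFAULT_ENV sh -c \
  'echo "second: ${CONDA_DEFAULT_ENV:-unset}"'
```

This is why any activation has to be baked into a startup file or an entrypoint rather than done in a `RUN` instruction.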
I guess the only way to achieve this is to either hardcode it into some configuration file like `.bashrc` or `/etc/profile.d/activate_conda.sh` (which would be more elegant, because this ensures that the environment is activated for all users), or to use an ENTRYPOINT so that the first thing that happens when the container is executed is that the conda environment is activated. This is tricky, though, because one has to ensure that the container keeps running, see: https://stackoverflow.com/questions/41741895/docker-unable-to-start-an-interactive-shell-if-the-image-has-an-entry-script
I already tried the first approach, which I find more elegant, but somehow this doesn't work:

```shell
--run 'touch /etc/profile.d/conda.sh' \
--run "echo '#!/bin/bash' >> /etc/profile.d/conda.sh" \
--run 'echo source activate csp >> /etc/profile.d/conda.sh' \
```
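One likely reason the `/etc/profile.d` attempt fails: those scripts are only sourced by login shells (via `/etc/profile`), and a plain `docker run -it IMAGE bash` starts a non-login shell that never reads them; `bash -l` would. The sourcing loop can be simulated without root by pointing it at a temp dir instead of `/etc/profile.d`:

```shell
# Temp dir standing in for /etc/profile.d (no root required)
profiled=$(mktemp -d)
cat > "$profiled/conda.sh" <<'EOF'
# in the real image this line would be: source activate csp
echo PROFILE_D_SOURCED
EOF

# This mirrors the loop a login shell typically runs from /etc/profile;
# a non-login shell (docker's default) never executes it.
for f in "$profiled"/*.sh; do . "$f"; done
```

So the file contents may be fine and the approach still appears "broken" simply because the shell you get in the container skips `/etc/profile` entirely.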
Using an ENTRYPOINT would also mean that you cannot use it for anything else. As far as I know, there can only be one ENTRYPOINT per Dockerfile.
Note: One advantage of using an entrypoint would be that the conda environment could then also be used when you run `docker run -u $(id -u):$(id -g)`. Passing the user flag is necessary when you don't have root privileges on your host system: in this case you want to use the `-u` flag to make sure that the host user can manipulate/delete files that were created by a container process. When you use this flag, you don't have a home directory inside the container, so there's also no `.bashrc` file (or any of the other options) that you could write `conda activate env_name` into. However, so far I couldn't create a bash script that can activate a conda environment. See https://github.com/conda/conda/issues/7980#issuecomment-1523630369
> This is tricky though because one has to ensure that the container keeps running, see: https://stackoverflow.com/questions/41741895/docker-unable-to-start-an-interactive-shell-if-the-image-has-an-entry-script
This can be solved by adding `/usr/bin/env bash` at the end of the bash script (I searched through an old version of neurodocker and found this solution). So the structure of the script should be:

```shell
#!/bin/bash

# . . . Do whatever you want here . . .

# we want the container to keep running after the code above, so we run this
/usr/bin/env bash
```
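Putting the pieces together, here is a runnable sketch of such an entrypoint (the conda path is a hypothetical example and is guarded, so the sketch also runs on machines without conda):

```shell
# Write the sketch entrypoint to /tmp; in an image this file would be
# copied in and registered with ENTRYPOINT ["/entrypoint.sh"].
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/bash
# hypothetical env activation, guarded so the sketch works without conda
if [ -f /opt/miniconda-latest/bin/activate ]; then
    source /opt/miniconda-latest/bin/activate csp
fi
# hand over to a shell so the container keeps running
/usr/bin/env bash "$@"
EOF
chmod +x /tmp/entrypoint.sh
```

Piping commands in shows that the shell at the end stays usable, e.g. `echo 'echo ok' | /tmp/entrypoint.sh` runs the command through the wrapped shell.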
> Note: It is currently possible to post-hoc activate the custom environment by executing `source activate csp` once the container is running. It is not possible to activate the environment in the running container using `conda activate csp`.
This is happening because `conda init` was run as root, not as csp, during docker build. Running the `--miniconda` recipe as a non-root user used to be possible, but not after the refactor.
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
We would like to create a new conda environment using a `.yml` file. With our current script we end up with a container where neither the base nor our custom "yml" environment seems to be activated. E.g.:

In old neurodocker versions, there was the `activate=True` option. I always thought this option did exactly that (activating a custom environment). On top of that, I could come up with a script (very close to an old version of generate_docker.sh from the nipype_tutorial) that uses 1.) `activate=True`, 2.) `--env LD_LIBRARY_PATH="/opt/miniconda-latest/envs/csp:$LD_LIBRARY_PATH"` and 3.) `--run-bash "source activate csp"`. Here the activation seems to have worked (I am just not sure which of the three options, or their combination, is responsible for that). Pasting both the old/new bash scripts/Dockerfiles here:
Latest script:

Note that a `.yml` file has to be present in the directory that the script is in. You can execute this script by running `bash generate_dockerfile.sh test_env.yml`. For example, use `test_env.yml` as provided in our repo.

Latest Dockerfile:

Old script (uses `kaczmarj/neurodocker:0.7.0`):

Old Dockerfile: