Closed: JohannesWiesner closed this issue 1 year ago.
Hello @JohannesWiesner

I tried your example on Ubuntu 20.04 and couldn't reproduce the error. I did come across it in the past, though, which is why the SPM Dockerfile disables certificate verification:
https://github.com/spm/spm-docker/blob/master/matlab/Dockerfile#L34

You could do the same here by adding an `--insecure` flag, but it would be better to understand what is causing the issue in these cases. Could it be that the certificate verification is performed on the host if it fails from the container? Does it make a difference for you if you add `ca-certificates` to the list of packages installed in the Dockerfile?
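As a sketch of that suggestion: for a Debian-based image, `ca-certificates` would be installed before any `curl` download, so TLS verification has a trust store to check against (the base image and package list here are only an example, not the exact neurodocker output):

```dockerfile
# Example only: install the CA trust store before curl is used.
FROM debian:stretch
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*
```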
@gllmflndn Using `--insecure` and installing `ca-certificates` seems to have worked...

> Could it be that the certificate verification is performed on the host if it fails from the container?

I really couldn't tell, lacking the knowledge here... Could it have something to do with the security settings of my PC and/or pre-installed antivirus software?
Hmm, `ca-certificates` is installed in the container... I also cannot reproduce the curl error.

To add `--insecure` to the SPM download, one can use the following:

```shell
docker run --rm kaczmarj/neurodocker:0.7.0 generate docker \
    --base=debian:stretch \
    --pkg-manager=apt \
    --spm12 version=r7771 method=binaries curl_opts="--insecure"
```
I tried that as well, instead of manually editing the resulting Dockerfile, and it works equally well! Still a bit mysterious, but `curl_opts="--insecure"` is a convenient workaround without having to modify the Dockerfile.
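If the failure is caused by a corporate middlebox rather than by the server itself, a less drastic variant of the same workaround might be to pass curl's `--cacert` flag through `curl_opts` instead of `--insecure`, so verification stays enabled but trusts your own CA bundle. The bundle path inside the image is an assumption; the file would have to exist there at build time:

```shell
docker run --rm kaczmarj/neurodocker:0.7.0 generate docker \
    --base=debian:stretch \
    --pkg-manager=apt \
    --spm12 version=r7771 method=binaries curl_opts="--cacert /etc/ssl/certs/certificates.pem"
```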
Are you behind a proxy? What is returned if you use `curl_opts="-v"`?
Strange, I executed the EXACT same script from my first post today:

```shell
#!/bin/bash

set -e

# Generate Dockerfile
generate_docker() {
  docker run --rm kaczmarj/neurodocker:0.7.0 generate docker \
    --base=debian:stretch --pkg-manager=apt \
    --spm12 version=r7771 method=binaries
}

generate_docker > Dockerfile
```

and could successfully build an image from the resulting Dockerfile. Yes, it might have something to do with a proxy... I might have been at my institute the last time I tried it (I cannot remember, unfortunately). Feel free to close this issue; I'm not sure whether it should stay open.
@kaczmarj @gllmflndn:

Coming back to this problem after more than a year, I now know it definitely has something to do with my corporate firewall. When you are not logged in to the corporate network (e.g. working remotely), everything works fine, but as soon as you are at work and try to run `docker build` with the Dockerfile generated by neurodocker, you run into all sorts of SSL errors. We had the same problem with `conda` and `pip` in a bare-metal installation of Miniconda, which we could solve by providing our corporate CA certificate:

```shell
conda config --set ssl_verify /home/johannes.wiesner/work/certs/certificates.pem
pip config set global.cert /home/johannes.wiesner/work/certs/certificates.pem
```
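For command-line tools the analogous knob is usually an environment variable: curl reads `CURL_CA_BUNDLE`, and many OpenSSL-based tools read `SSL_CERT_FILE`. A sketch, reusing the bundle path from the conda/pip example above:

```shell
# Point curl and OpenSSL-based tools at the corporate CA bundle.
# CURL_CA_BUNDLE is read by curl; SSL_CERT_FILE by many OpenSSL consumers.
export CURL_CA_BUNDLE=/home/johannes.wiesner/work/certs/certificates.pem
export SSL_CERT_FILE=/home/johannes.wiesner/work/certs/certificates.pem
```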
The question is whether something like this is also possible with Docker. Setting `curl_opts="--insecure"` is one option, but the SSL errors are not restricted to SPM12; we also get them with super simple Dockerfiles like:

```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get install -y git
```
which produces:

```
> [2/2] RUN apt-get update && apt-get install -y git:
#5 0.541 Err:1 http://archive.ubuntu.com/ubuntu jammy InRelease
#5 0.541   403  Forbidden [IP: 185.125.190.39 80]
#5 0.566 Err:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
#5 0.566   403  Forbidden [IP: 185.125.190.39 80]
#5 0.600 Err:3 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
#5 0.600   403  Forbidden [IP: 185.125.190.39 80]
#5 0.605 Err:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
#5 0.605   403  Forbidden [IP: 91.189.91.39 80]
```
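A side note on this output: these are plain-HTTP 403s on port 80, which looks like the proxy blocking the requests rather than a TLS failure. Docker treats `HTTP_PROXY`/`HTTPS_PROXY` as predefined build arguments, so if a proxy must be used, it can be forwarded into the build without touching the Dockerfile (the proxy URL below is a placeholder):

```shell
docker build \
    --build-arg HTTP_PROXY=http://proxy.example.org:3128 \
    --build-arg HTTPS_PROXY=http://proxy.example.org:3128 \
    .
```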
So the question is whether it would be preferable to have a more general solution instead of providing "insecure" options for each piece of software. The question is also whether we are the only ones affected. If not, does someone have a solution, and would it make sense to document it in the neurodocker docs? For example, solutions for this problem might be found here:

https://docs.docker.com/engine/security/protect-access/
https://github.com/docker/machine/issues/1880

There could be three options for this, I guess:

1. Before using neurodocker: can we somehow pass the certificate to our local Docker installation?
2. While using neurodocker: could we solve the problem by passing the certificate to neurodocker (e.g. `docker run -i --rm -v /home/johannes.wiesner/work/it/certificates.pem:/etc/ssl/certs/certificates.pem repronim/neurodocker:0.9.4 generate docker`)?
3. After using neurodocker: could we solve the problem by passing the certificate to `docker build` after neurodocker has run?
@JohannesWiesner - have you tried the three options? Option 2 seems like a good way to go if it works.
We already asked IT about options 1 and 3; let's see if they can come up with a solution (unfortunately, this is way too IT-ish for me, so I don't understand anything in the Docker docs). I've tried the 2nd option, but it did not help. And now that I think about it, the idea doesn't make sense, because the generation of the Dockerfile stays the same with or without mounting the path to the `certificates.pem` file. The second option should be rewritten as:

2. Include a neurodocker command that lets users incorporate a `certificates.pem` file into the Dockerfile generated by neurodocker.
You're right, it wouldn't work because the problem happens during the build, not at runtime.

Could you try adding a `--copy` instruction to copy your certificate into the Docker image, toward the beginning of the Dockerfile, before any software is installed? First, copy your certificate into your current working directory, because that will make it easier to copy into the Docker image. If you want to edit a Dockerfile directly, add this:

```dockerfile
COPY certificates.pem /etc/ssl/certs/
```

Just be aware that the resulting Docker image will contain your certificate.
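For Debian/Ubuntu images, a slightly fuller sketch would also register the certificate with the system trust store; note that `update-ca-certificates` only picks up files with a `.crt` extension under `/usr/local/share/ca-certificates/` (the file names here are assumptions):

```dockerfile
FROM ubuntu:latest
# Copy the corporate CA into the image and register it with the system
# trust store before any step that needs the network over TLS.
COPY certificates.pem /usr/local/share/ca-certificates/corporate-ca.crt
RUN apt-get update \
    && apt-get install -y ca-certificates \
    && update-ca-certificates \
    && apt-get install -y git
```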
@kaczmarj: You can close this issue, as I don't think there's anything you can do on your side. We could solve it at my institution by disabling SSL deep inspection for my machine. Specifically, this SPM line caused problems:

```shell
curl -fL -o /tmp/spm12.zip https://www.fil.ion.ucl.ac.uk/spm/download/restricted/utopia/previous/spm12_r7771_R2010a.zip
```

```
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```
However, other `curl` commands generated by neurodocker worked completely fine:

```shell
curl -fL -o ./MCRInstaller.bin https://dl.dropbox.com/s/zz6me0c3v4yq5fd/MCR_R2010a_glnxa64_installer.bin
curl -fL https://surfer.nmr.mgh.harvard.edu/pub/dist/freesurfer/7.1.1/freesurfer-linux-centos6_x86_64-7.1.1.tar.gz
curl -fsSL -o "$conda_installer" https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
```
One other solution would be to get in contact with the SPM people, because apparently their server settings seem to contribute to the issue?
I created a bash script to run this SPM12 example from Neurodocker. When trying to build the resulting Dockerfile using `docker build .`, I apparently run into an SSL certificate problem. Is there any way to solve this? Probably related to this Stack Overflow thread? Here's the bash script to create the Dockerfile:

And here's the console output from `docker build .`, with the error message at the end: