Closed: grisaitis closed this issue 8 years ago
I'm able to fix the issue by adding this before I run apt-get update:
RUN apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64/7fa2af80.pub
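In Dockerfile terms, the fix amounts to something like this (a minimal sketch, assuming the nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04 base image from the original report):

```Dockerfile
FROM nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04

# Fetch the NVIDIA machine-learning repository key before updating, so that
# apt-get update can verify the repo that provides the cuDNN packages.
RUN apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64/7fa2af80.pub
RUN apt-get update
```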
I'll submit a pull request shortly with this change implemented.
Fixed the issue!
The error was caused by stale base images on my local docker host.
Re-pulling the images solved the problem, either with docker build --pull ... or with:
for i in $(docker images --format "{{.Repository}}:{{.Tag}}" | grep nvidia/cuda); do docker pull $i; done
To clarify: this was user error. No pull request / code changes warranted.
No problem, and sorry for breaking the public CUDA repo; the transition has been rough for everyone!
I got the same error with Ubuntu 16.04. How do I get the public key for Ubuntu 16.04? @flx42
You already asked on GitLab, and this issue was fixed a long time ago, so you're facing a different issue.
I get the following error if I run apt-get update in one of the Ubuntu 14.04-based images that include cuDNN:

My Dockerfile looks like this:
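In essence it is just the cuDNN base image followed by the update step (a minimal sketch, not the exact file):

```Dockerfile
# Minimal sketch of the failing setup.
FROM nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04
RUN apt-get update
```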
and I can get the same result if I replace nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04 with the cudnn4 variant. This error does not occur when I build from one of the Ubuntu 16.04 Dockerfiles, e.g., nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04. I believe the reason it works there is that cuDNN isn't installed from APT, but rather from a tarball downloaded from NVIDIA.
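One way to see that difference is to compare what the two images register with APT/dpkg; a rough check, assuming both images are available locally:

```sh
# Compare how the two images obtain cuDNN: the 14.04 image installs it from
# the NVIDIA machine-learning APT repo, the 16.04 image unpacks a tarball.
for img in nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04 \
           nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04; do
  echo "== $img =="
  docker run --rm "$img" sh -c \
    'ls /etc/apt/sources.list.d/; dpkg -l | grep -i cudnn || echo "no cuDNN package registered with dpkg"'
done
```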