floydhub / dockerfiles

Deep Learning Dockerfiles
https://docs.floydhub.com/guides/environments/
Apache License 2.0

TF GPU compute #28

Open vlad17 opened 7 years ago

vlad17 commented 7 years ago

The Python 3 GPU Dockerfile specifies ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7, which is the compute capability of the K80. AWS's g3 instances also offer Tesla M60 cards, which have compute capability 5.2. Could that line be changed to ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,5.2 so that the TF that's built is optimized for all AWS GPU offerings?

See NVIDIA's CUDA GPUs page for the full compute capability listing.

houqp commented 7 years ago

Sure, I will update this on the next release :)

vlad17 commented 6 years ago

Thanks! I've been seeing a related problem now:

2018-02-04 22:39:16.960722: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1093] Ignoring visible gpu device (device: 0, name: Tesla M60, pci bus id: 0000:00:1b.0, compute capability: 5.2) with Cuda compute capability 5.2. The minimum required Cuda capability is 7.0.

This stems from the same issue (on the dl/tensorflow/1.4.0/Dockerfile-py3.gpu.cuda9cudnn7_aws dockerfile)

ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,7.0 should perhaps be ENV TF_CUDA_COMPUTE_CAPABILITIES=3.7,5.2,7.0?
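The pattern in both reports is the same: a capability was dropped when the list was updated. A minimal sketch of how the comma-separated value could be merged instead of replaced (merge_compute_capabilities is a hypothetical helper, not part of these Dockerfiles):

```python
def merge_compute_capabilities(current: str, *extra: str) -> str:
    """Merge comma-separated TF_CUDA_COMPUTE_CAPABILITIES values,
    de-duplicating and sorting numerically so no entry is lost."""
    caps = set(current.split(",")) | set(extra)
    return ",".join(sorted(caps, key=float))

# K80 (3.7) + M60 (5.2) + V100 (7.0): covers AWS p2, g3, and p3 instances
print(merge_compute_capabilities("3.7,7.0", "5.2"))  # 3.7,5.2,7.0
```

TensorFlow only generates kernels for the capabilities listed at build time, so a 5.2 device is ignored unless 5.2 (or a compatible lower PTX target) is in the list.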

ReDeiPirati commented 4 years ago

Hi @vlad17, sorry for the late reply,

I've just labeled this issue as a feature request; we will add it in the next release.