YKZhangSEU closed this issue 3 years ago
Astra is a bit picky about device ordering. Could you try changing CUDA_VISIBLE_DEVICES before starting anything?
Thanks for your suggestion. Yesterday I only set CUDA_VISIBLE_DEVICES in train.py, which did not work. Today I added an additional CUDA_VISIBLE_DEVICES setting before constructing the odl layer in model.py, and it WORKS.
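For anyone else hitting this, a minimal sketch of the workaround described above. The GPU index '0' and the odl calls are illustrative, not the poster's actual model.py; the odl lines are commented out so the sketch runs without odl/astra installed. The key point is that CUDA_VISIBLE_DEVICES must be set before the first CUDA context is created:

```python
import os

# Must happen before torch/odl/astra initialise CUDA; setting it later
# in train.py has no effect, which is why only this placement worked.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # hypothetical GPU index

# Only now import the CUDA-backed libraries (commented out so the
# sketch is self-contained):
# import odl
# from odl.contrib import torch as odl_torch
# space = odl.uniform_discr([-1, -1], [1, 1], [512, 512])
# geometry = odl.tomo.parallel_beam_geometry(space)
# ray_trafo = odl.tomo.RayTransform(space, geometry, impl='astra_cuda')
# fbp_layer = odl_torch.OperatorModule(odl.tomo.fbp_op(ray_trafo))
```

The same ordering applies to any library that creates a CUDA context at import or construction time.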
Good that there's a workaround. I'll close the issue.
I have the same issue and I could not solve my problem with:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "0"

at the very beginning of my code! astra.astra.use_cuda() still returns False, even though I have all the CUDA drivers (10.0) and toolkits installed. TensorFlow (1.15) has no problem with the GPU, and I can see the CUDA driver info via nvcc -V or nvidia-smi on Ubuntu 18.04. Do you have any suggestions? I appreciate your help!
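A small diagnostic may help narrow this down, assuming astra.astra.use_cuda() behaves as in the message above (True only if astra was built with CUDA support and can see a GPU). A False here with a working nvidia-smi usually points at an astra build without CUDA, or a driver/toolkit mismatch, rather than at the environment variable:

```python
import os

# Set before any CUDA-using import, or it is silently ignored.
os.environ.setdefault('CUDA_VISIBLE_DEVICES', '0')

def check_astra_cuda():
    """Return astra's CUDA status, or None if astra is not installed."""
    try:
        import astra
    except ImportError:
        return None
    return astra.astra.use_cuda()

print('astra CUDA available:', check_astra_cuda())
```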
pytorch 1.7.0 cuda 10.1 python 3.6.5 odl 1.0.0.dev0 astra 1.8
run the fbp as a pytorch layer
With the same environment configuration, I can't enable CUDA for the Titan RTX. No matter which GPU index I specify, I can only use the 1080 Ti. But when I run code without odl, CUDA on the Titan RTX works fine. I then pulled out the 1080 Ti and kept only the Titan RTX, and the error said no CUDA was available.
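To confirm which devices are actually visible after CUDA_VISIBLE_DEVICES is applied, a quick check with torch's standard device-query API can be run (the function returns an empty list when torch is missing or CUDA is unavailable, so it is safe on any machine):

```python
import os

# Must be set before torch initialises CUDA to have any effect.
os.environ.setdefault('CUDA_VISIBLE_DEVICES', '0')

def visible_gpus():
    """List the names of the GPUs torch can see, or [] if none."""
    try:
        import torch
    except ImportError:
        return []
    if not torch.cuda.is_available():
        return []
    return [torch.cuda.get_device_name(i)
            for i in range(torch.cuda.device_count())]

print(visible_gpus())
```

If the Titan RTX never appears in this list while the 1080 Ti does, the device filtering is happening before your code runs, which matches the ordering problem discussed in this thread.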