TUMFTM / CameraRadarFusionNet

Apache License 2.0

multi_gpu_model error #14

Open guiyuliu opened 4 years ago

guiyuliu commented 4 years ago

I have 8 GPUs on my machine and they all show up correctly when I run nvidia-smi, but I get the error below.

The related Keras issue is here: https://github.com/keras-team/keras/issues/11644

It seems this cannot be fixed unless I downgrade TensorFlow to 1.12, but that version is built against CUDA 9.0.

I am confused as to why this project uses such a recent CUDA version but a relatively old TensorFlow version (1.13). Do you have any other suggestions?

```
    training_model = multi_gpu_model(model, gpus=multi_gpu)
  File "/root/anaconda3/envs/crfnet2/lib/python3.5/site-packages/keras/utils/multi_gpu_utils.py", line 181, in multi_gpu_model
    available_devices))
ValueError: To call multi_gpu_model with gpus=8, we expect the following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1', '/gpu:2', '/gpu:3', '/gpu:4', '/gpu:5', '/gpu:6', '/gpu:7']. However this machine only has: ['/cpu:0', '/xla_gpu:0', '/xla_cpu:0', '/gpu:0']. Try reducing gpus.
```
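
For reference, a quick way to check which devices this TensorFlow build actually registers before calling multi_gpu_model is to list the local devices. This is only a diagnostic sketch, assuming TensorFlow 1.x with standalone Keras; the CUDA_VISIBLE_DEVICES setting below is an assumption about a possible cause (GPUs hidden from the process), not a confirmed fix. If only /gpu:0 shows up (plus XLA devices), multi_gpu_model will fail exactly as above and the problem is more likely the TF build / driver combination discussed in the linked Keras issue.

```python
# Diagnostic sketch: list the devices TensorFlow has registered,
# so you can see whether all 8 GPUs are actually visible to the process.
import os

# Assumption: expose all eight GPUs to this process (adjust IDs as needed).
# Must be set before TensorFlow is imported.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0,1,2,3,4,5,6,7")

from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
for d in devices:
    print(d.device_type, d.name)

# Count only real GPU devices, ignoring the XLA_GPU/XLA_CPU entries
# that show up in the error message above.
gpus = [d.name for d in devices if d.device_type == "GPU"]
print("GPUs visible to TensorFlow:", len(gpus))
```

If this prints fewer GPU entries than nvidia-smi shows, the multi_gpu_model call will keep failing regardless of the gpus argument, since Keras derives the available device list from the same enumeration.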