Closed brentylw closed 4 years ago
By the way, gpu_0 runs at about 90% utilization while the remaining GPUs each stay below 30%. How can we exploit the full capacity of all GPUs? Thanks a million!
Sorry for the delayed response. Multi-GPU training is handled by Keras. Does it reach full capacity with other Keras models?
When I set OPTIMIZER = 'SGD', it works well on CPU. But on my 4-GPU server it reports "failed to create kernel". After changing to OPTIMIZER = 'Adam', the 4-GPU server works fine. Is the problem on my end, in the program itself, or in the TensorFlow version (mine is 1.8.0; I also tested 1.10 and 1.12)? Could you please check it on a multi-GPU machine?
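One way to narrow this down is to reproduce the SGD + multi-GPU combination outside this project with a trivial Keras model. The sketch below is an assumption about how the project parallelizes training (via `keras.utils.multi_gpu_model`, the standard mechanism in TF 1.x-era Keras); the model, sizes, and hyperparameters are placeholders, and it needs a multi-GPU machine with TensorFlow 1.x installed to actually exercise the failing path.

```python
# Minimal repro sketch (assumes TF 1.x / standalone Keras 2.x on a 4-GPU box).
# If this tiny model also triggers "failed to create kernel" with SGD but not
# with Adam, the issue is in Keras/TensorFlow rather than this project.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras.utils import multi_gpu_model

model = Sequential([
    Dense(64, activation='relu', input_shape=(32,)),
    Dense(1),
])

try:
    # Replicate the model across 4 GPUs, as in the report above.
    parallel = multi_gpu_model(model, gpus=4)
except ValueError:
    # Fewer than 2 visible GPUs: fall back to the plain model.
    parallel = model

# SGD is the optimizer that reportedly fails; swap in Adam to compare.
parallel.compile(optimizer=SGD(lr=0.01, momentum=0.9), loss='mse')

x = np.random.rand(256, 32).astype('float32')
y = np.random.rand(256, 1).astype('float32')
parallel.fit(x, y, batch_size=64, epochs=1, verbose=0)
```

If the snippet runs cleanly on all three TensorFlow versions (1.8, 1.10, 1.12), the kernel-creation failure is more likely specific to this repository's training setup than to the framework.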