Also, try running `nvidia-smi -lms` to get a better picture of the GPU usage. There are certain stages that use less GPU (during data I/O, etc.), and if you sampled the GPU at those times you might see a low percentage (but 1% sounds too low?). Anyway, please take a screenshot and show me.
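For example (the 500 ms interval below is just an illustrative choice; `-lms` takes the sampling period in milliseconds):

```bash
# Sample GPU utilization and memory every 500 ms until Ctrl+C
nvidia-smi -lms 500
```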
Thanks!
Thanks for your reply. You are right: GPU utilization varies between 8% and 30% on my GTX 1660, and GPU memory usage is about 1-2 GB. I don't know whether that utilization is normal; in my TensorFlow experience the GPU usually reaches almost 100%. Besides, the CPU utilization is high (around 60% on average on my i7-8700), so it seems a lot of the work is done by the CPU. As for the 1% figure, that was a silly mistake on my part: I read it from the Windows 10 Task Manager, which obviously does not show the real GPU utilization.
Great, thanks for letting me know. I am marking this issue as closed.
Hi @sutongkui,
Thanks for letting me know.
Yes, CUDA is supported and should be used by default. Can you please provide details on how to reproduce your issue (which scripts you ran, what machine you are using, etc.)?
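As a quick sanity check (assuming the project runs on PyTorch; these are standard PyTorch calls, not anything specific to this repo), you can confirm the GPU is actually visible to the framework:

```bash
# Should print "True" followed by the number of visible GPUs
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```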
When running train.py, the `--gpu_ids` option defaults to 0, which uses the first GPU (this is defined in base_options.py). Setting it to -1 uses the CPU instead. Although in theory this code should support multi-GPU training, I have not tried it.
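For reference, a few example invocations under that convention (only the GPU flag is shown; any other options your run needs still have to be added, and the comma-separated form for multiple GPUs is an assumption based on the usual `--gpu_ids` convention):

```bash
python train.py --gpu_ids 0     # default: first GPU
python train.py --gpu_ids -1    # CPU only
python train.py --gpu_ids 0,1   # two GPUs (untested here, per the note above)
```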