Closed: zhanghaozhou closed this issue 5 years ago
I don't know if our Caffe version supports multi-GPU training. We have never used it with Caffe. Just try it? :wink:
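For anyone who wants to just try it, here is a minimal invocation sketch, assuming this fork kept the stock BVLC Caffe command line (the solver path is a placeholder; stock Caffe accepts a comma-separated GPU list, but whether this fork's binary still does is exactly the open question):

```bash
# Placeholder paths; stock BVLC Caffe syntax. If the fork does not
# support multi-GPU, the binary should fail fast rather than silently
# train on one device, so this is a cheap experiment.
./build/tools/caffe train --solver=solver.prototxt --gpu 0,1
```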
I would say it does not matter. It is okay if your network turns out a little bit better or worse than our numbers (we only trained each network once); augmentation randomness and the order of the training data always lead to some training variance. It only matters if you want to compare fine differences in architecture, augmentation, or data -- in that case you should fix all random seeds in the network.
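For reference, a hedged sketch of where one such seed lives: BVLC Caffe's SolverParameter has a `random_seed` field (default -1, i.e. a fresh seed every run), so fixing it in the solver prototxt pins the solver-side RNG. Any custom augmentation layers in this fork may need their own seeds, and multi-threaded data prefetching can still introduce nondeterminism.

```protobuf
# solver.prototxt (placeholder values)
net: "train.prototxt"
random_seed: 1234   # any fixed value; the default -1 means non-deterministic
```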
OK, I will test it. Thank you for your response :)
(closed due to inactivity)
@nikolausmayer Hi Nikolaus Mayer,
I am trying to reproduce the results in the paper using your code. I have two questions about the training procedure.
Is it possible to use multiple GPUs during training by adding "--gpu 0,1"?
If I did not misunderstand, the data augmentation is different on every run. As a result, when my training stops and I restart it with "--snapshot ....caffemodel", the final result is a little bit worse than what you report in your paper. I wonder whether this really matters, and if so, whether I can avoid it.
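For context, a hedged sketch of stock Caffe's two restart flags (paths are placeholders); note that in stock BVLC Caffe, --snapshot expects a .solverstate file, while a .caffemodel is passed via --weights:

```bash
# Full resume: restores iteration count, learning rate, and momentum history.
./build/tools/caffe train --solver=solver.prototxt \
    --snapshot=snapshots/net_iter_100000.solverstate

# Weights only: initializes the net from a .caffemodel; the solver
# (and thus the learning-rate schedule) restarts from iteration 0.
./build/tools/caffe train --solver=solver.prototxt \
    --weights=snapshots/net_iter_100000.caffemodel
```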
Thank you for your response and I wish you a good week.
Best regards, Haozhou ZHANG