lmb-freiburg / flownet2

FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
https://lmb.informatik.uni-freiburg.de/Publications/2017/IMKDB17/

Multi gpu for training and snapshot #202

Closed zhanghaozhou closed 5 years ago

zhanghaozhou commented 5 years ago

@nikolausmayer Hi Nikolaus Mayer,

I am trying to reproduce the results in the paper using your code. I have two questions about the training period.

  1. Is it possible to use multiple GPUs during training by adding "--gpu 0,1"?

  2. If I understand correctly, the data augmentation is randomized each time. As a result, when my training run stops and I restart it with "--snapshot ....caffemodel", the final result is slightly worse than the numbers reported in your paper. I wonder whether this really matters, and if so, whether I can avoid the issue.
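
For reference, with the stock BVLC `caffe` binary the two operations would look roughly like this (a sketch only; the flownet2 fork ships its own Caffe build, so the flags may behave differently, and the snapshot path below is hypothetical). Note that in upstream Caffe, `--snapshot` expects a `.solverstate` file, which restores the full solver state (iteration count, learning rate, momentum), while `--weights` takes a `.caffemodel` and starts a fresh solver:

```
# multi-GPU training (stock Caffe syntax; support in this fork is unverified)
caffe train --solver=solver.prototxt --gpu 0,1

# resuming a run: pass the .solverstate, not the .caffemodel
# (hypothetical snapshot filename)
caffe train --solver=solver.prototxt --snapshot=snapshots/iter_100000.solverstate
```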

Thank you for your response and I wish you a good week.

Best regards, Haozhou ZHANG

nikolausmayer commented 5 years ago
  1. I don't know if our Caffe version supports multi-gpu training. We never used it with Caffe. Just try it? :wink:

  2. I would say it does not matter. It is ok that your network turns out a little bit better or worse than our numbers (we only trained each network once). Augmentation randomness and the order of training data always lead to training variance. It matters if you want to compare fine differences in architecture or augmentation or data -- then you should fix all random seeds in the network.
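
As a sketch of the seed fixing mentioned above: upstream Caffe exposes a `random_seed` field in the solver prototxt that seeds Caffe's RNG; whether this fork's custom data-augmentation layers honor it is an assumption that would need checking:

```
# solver.prototxt (fragment; net path is hypothetical)
net: "train.prototxt"
random_seed: 831486   # fixes Caffe's RNG for repeatable initialization/sampling
```

Even with a fixed seed, nondeterministic cuDNN backward kernels can still introduce small run-to-run variance.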

zhanghaozhou commented 5 years ago

OK, I will test. Thank you for your response :)

nikolausmayer commented 5 years ago

(closed due to inactivity)