[Closed] ptriantd closed this issue 7 years ago
I noticed that in donkeys.lua you are dividing the labels by 20. Could you explain why you did that?

This was done to comply with the original caffe code. It is effectively equivalent to using a learning rate 20 times as high, so I don't think it's very useful in itself. However, the pretrained models from caffe output flow/20, so the training code is designed to make the network learn to output exactly that amount, for compatibility with applications that were already using FlowNet from caffe.
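To make the convention concrete, here is a minimal illustrative sketch (in Python, not the repo's actual Lua code; the names `FLOW_SCALE`, `scale_target`, and `recover_flow` are hypothetical) of what dividing the labels by 20 implies for training and inference:

```python
# Assumed convention from the thread: ground-truth flow labels are divided
# by 20 before training, so the network learns to predict flow/20, matching
# the caffe-pretrained FlowNet models. Downstream code must multiply the raw
# prediction by 20 to recover flow in pixel units.

FLOW_SCALE = 20.0  # scale factor used by the original caffe code

def scale_target(flow_label):
    """What the training data loader does to each ground-truth flow value."""
    return [v / FLOW_SCALE for v in flow_label]

def recover_flow(network_output):
    """What an application must do with the network's raw prediction."""
    return [v * FLOW_SCALE for v in network_output]

# Round trip: scaling the target and un-scaling the (ideal) prediction
# recovers the original pixel-unit flow.
raw = [40.0, -10.0]
assert recover_flow(scale_target(raw)) == raw
```

The point of keeping this convention is compatibility: applications already consuming the caffe models' flow/20 output work unchanged with networks trained by this code.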