Closed: simon1727 closed this issue 5 years ago
Hi,
Yes, technically I have to admit that it is not exactly the same. Instead of porting the Caffe source code to PyTorch, we used our own version of photometric augmentation, which turned out to make no significant difference when comparing the EPE of our PyTorch baselines (FlowNetS, PWC-Net) with the original Caffe baselines.
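For illustration, a photometric augmentation of the kind described above (brightness, per-channel color, contrast, gamma) could be sketched as follows. This is a hypothetical sketch in NumPy — the parameter ranges and the exact set of transforms are placeholders, not the values actually used in this repository:

```python
import numpy as np

def photometric_augment(img, rng):
    """Random photometric jitter for an HWC float image in [0, 1].

    Hypothetical sketch of FlowNet-style photometric augmentation;
    the real ranges used for training may differ.
    """
    # Multiplicative brightness and per-channel color jitter.
    brightness = rng.uniform(0.8, 1.2)
    color = rng.uniform(0.9, 1.1, size=3)
    img = img * brightness * color

    # Contrast: scale the deviation from the mean intensity.
    contrast = rng.uniform(0.8, 1.2)
    mean = img.mean(axis=(0, 1), keepdims=True)
    img = (img - mean) * contrast + mean

    # Gamma, applied after clipping back into [0, 1].
    img = np.clip(img, 0.0, 1.0)
    gamma = rng.uniform(0.7, 1.5)
    return img ** gamma

rng = np.random.default_rng(0)
pair = [rng.uniform(size=(4, 4, 3)) for _ in range(2)]
# Note: for optical flow, both frames of a pair are usually augmented
# with the same sampled parameters so the photometric change is consistent.
augmented = [photometric_augment(f, np.random.default_rng(42)) for f in pair]
```

Seeding a fresh generator with the same value for both frames (as above) is one simple way to share the sampled parameters across the image pair.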
Yes, all parameters in the code are exactly the ones used during training and fine-tuning.
Got it, thanks!
Hi!
The paper says that the same geometric and photometric augmentations are implemented as in FlowNet2. However, I noticed that some parameters differ from those in https://github.com/lmb-freiburg/flownet2, that an additional hue transform is implemented, and that the 'eigen vector chromatic' transform is not. Are the augmentations and parameters in this code the ones that were actually applied to the data during training and fine-tuning?
Thanks!
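For context on the 'eigen vector chromatic' transform mentioned above: it is usually understood as AlexNet-style PCA color jitter, which shifts each pixel along the principal components of the RGB distribution. A minimal NumPy sketch, assuming an HWC float image in [0, 1] and an illustrative `sigma` (not a value from either repository):

```python
import numpy as np

def pca_color_jitter(img, rng, sigma=0.1):
    """AlexNet-style PCA ('eigenvector chromatic') color jitter, sketched.

    Shifts all pixels by a random combination of the RGB covariance
    eigenvectors, weighted by the eigenvalues and Gaussian samples.
    """
    pixels = img.reshape(-1, 3)
    cov = np.cov(pixels, rowvar=False)        # 3x3 RGB covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # principal color axes
    alphas = rng.normal(0.0, sigma, size=3)   # random per-axis weights
    shift = eigvecs @ (alphas * eigvals)      # single RGB offset
    return np.clip(img + shift, 0.0, 1.0)
```

Because the shift is a single RGB offset applied to every pixel, this perturbs the overall color cast of the image rather than per-pixel noise, which is what distinguishes it from an independent hue jitter.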