lliuz / ARFlow

The official PyTorch implementation of the paper "Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation".
MIT License

Training+finetuning configurations #14

Closed gallif closed 3 years ago

gallif commented 3 years ago

Hi, first of all, great work and fantastic code! I'm trying to recreate your reported results on the Sintel training set:

  1. Did you train and evaluate using the complete Sintel train dataset?
  2. I noticed that occlusion transform (run_ot) is set to false in the ar config file. Was it used during finetuning?
  3. Are there other parameters I should consider?

Thanks!

lliuz commented 3 years ago
  1. Yes, I did. The results in Table 1 were trained on the complete Sintel train dataset.
  2. Since OT significantly increases training time, I did not use it in most of the ablations, but it can still bring performance gains (see Table 4). The results in Table 1 used all kinds of transformations.
  3. You can reproduce the results with just the default parameters.
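To make the second point concrete, here is a minimal sketch of toggling the occlusion transform before finetuning. Only the `run_ot` key is taken from this thread; the surrounding key names (`data_aug`, `run_atst`, `run_st`) and the JSON layout are hypothetical and may differ from ARFlow's actual ar config file:

```python
import json

# Hypothetical minimal config; only "run_ot" is named in the thread,
# the other keys and the nesting are illustrative assumptions.
cfg = {
    "data_aug": {
        "run_atst": True,   # appearance transform (assumed key)
        "run_st": True,     # spatial transform (assumed key)
        "run_ot": False,    # occlusion transform, off by default per the thread
    }
}

# Enable the occlusion transform, since the Table 1 results
# used all kinds of transformations.
cfg["data_aug"]["run_ot"] = True

print(json.dumps(cfg["data_aug"], indent=2))
```

The trade-off noted above applies: with `run_ot` enabled, expect noticeably longer training for the extra performance gain.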
gallif commented 3 years ago

Thank you for the reply. I will do as you suggested.