lmb-freiburg / flownet2

FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
https://lmb.informatik.uni-freiburg.de/Publications/2017/IMKDB17/

Is the Caffe model FlowNet2-S the same as the model FlowNetS 1.0? #95

Closed chenchr closed 6 years ago

chenchr commented 6 years ago

Hello. I'm trying to train FlowNet 2.0 from scratch. I need FlowNetS 1.0 and FlowNetC 1.0, and I am not sure whether the Caffe model FlowNet2-S is the same as the model FlowNetS 1.0. Thanks! Besides, there are three solver prototxts, i.e. "fine", "long", and "short", with different max-iteration and learning-rate settings. Which one should I use during the different stages of training FlowNet 2.0?

nikolausmayer commented 6 years ago

Hi, FlowNet2-S is the same as FlowNetS 1.0, and FlowNet2-C is the same as FlowNetC 1.0.

The solvers are explained in the main paper. For our best networks, we pretrain using "long" and then finetune using "fine".
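
In PyTorch terms, both are just piecewise-constant learning-rate schedules that drop the rate at fixed iteration milestones. A minimal sketch of that kind of schedule, assuming Adam; the base LR and the milestones below are placeholders, not the actual values from the solver prototxts:

    import torch
    from torch.optim.lr_scheduler import MultiStepLR

    # stand-in for the real network (any nn.Module works here)
    model = torch.nn.Conv2d(6, 2, kernel_size=3, padding=1)

    # piecewise-constant schedule in the spirit of "long": start at a
    # base LR and halve it at fixed iteration milestones.
    # NOTE: base LR and milestones are placeholders, not the values
    # from the released solver prototxts.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = MultiStepLR(optimizer,
                            milestones=[400_000, 600_000, 800_000, 1_000_000],
                            gamma=0.5)

    for iteration in range(1_200_000):
        # ... forward pass, loss computation, loss.backward() ...
        optimizer.step()
        scheduler.step()  # stepped per iteration, not per epoch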

chenchr commented 6 years ago

Thanks for your reply. Besides, I want to ask a question about the training of FlowNet. As stated in the paper, data augmentation is important, and training without it leads to overfitting. So far I can't run the code due to a cuDNN error, so I implemented FlowNetS in PyTorch and trained it on FlyingChairs without data augmentation. I found that the training EPE and testing EPE after 600k iterations are about 1.43 and 2.16 respectively, which is lower than the 2.7 reported in the ICCV 2015 paper. Therefore, I am not sure whether this is reasonable or whether I have some implementation error. Did this situation occur for you when you trained without data augmentation? Thank you.

nikolausmayer commented 6 years ago

I don't have such an evaluation at hand, but I would guess that your network is overfitting to the color and shape distribution in FlyingChairs. What sort of EPE values do you get on e.g. Sintel?

chenchr commented 6 years ago

I use the official FlyingChairs split file for the train and validation sets. EPE is computed as the average L2 norm of the error between prediction and ground truth.
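
In PyTorch, what I compute is essentially:

    import torch

    def epe(pred_flow: torch.Tensor, gt_flow: torch.Tensor) -> torch.Tensor:
        # average end-point error: per-pixel L2 norm of the flow error,
        # averaged over all pixels; flows are (B, 2, H, W) tensors
        return torch.norm(pred_flow - gt_flow, p=2, dim=1).mean()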

nikolausmayer commented 6 years ago

Well, the train/val split is still from the same dataset, so without augmentation there will be overfitting; I just can't say how bad. Regarding EPE, I meant to ask whether you get scores on Sintel similar to those reported in the FlowNet paper :)

chenchr commented 6 years ago

I just downloaded FlyingChairs and have not downloaded Sintel yet. I will test on it and give you feedback. Thanks for your help.

chenchr commented 6 years ago

Hello. I tested on Sintel; it is indeed overfitting, as the EPE is larger than the result reported in the paper. Besides, I want to ask about the augmentation parameters. The spatial augmentation and chromatic augmentation are straightforward; I've read the code and know how they work. However, the chromatic eigen augmentation is a little confusing. In the train.prototxt I found that it includes "add salt", which may be salt-and-pepper noise, but for the other parameters such as lmult_pow, lmult_mult, lmult_add, etc., I still can't get the point, as I want to replicate them with PyTorch's built-in functions in an easier way. Could you please help me with some explanation? Thank you very much.

chenchr commented 6 years ago

For most of the parameters in train.prototxt, maybe I can treat a parameter with distribution settings like:

    ladd_add {
      rand_type: "gaussian_bernoulli"
      exp: false
      mean: 0
      spread: 0.04
      prob: 1.0
    }

as a per-pixel augmentation, but what is the meaning of `chromatic_eigvec`?

    chromatic_eigvec: 0.51
    chromatic_eigvec: 0.56
    chromatic_eigvec: 0.65
    chromatic_eigvec: 0.79
    chromatic_eigvec: 0.01
    chromatic_eigvec: -0.62
    chromatic_eigvec: 0.35
    chromatic_eigvec: -0.83
    chromatic_eigvec: 0.44
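
My current guess is that these nine values are three precomputed eigenvectors of the RGB pixel covariance, used for PCA-style color augmentation as in the AlexNet paper. If so, a rough PyTorch sketch would look like this (reading the nine values row-major as three eigenvectors, and the noise scale sigma, are my own assumptions):

    import torch

    # the nine prototxt values read as three eigenvectors of the RGB
    # pixel covariance (row-major layout is my assumption)
    eigvec = torch.tensor([[0.51,  0.56,  0.65],
                           [0.79,  0.01, -0.62],
                           [0.35, -0.83,  0.44]])

    def pca_color_jitter(img: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
        # AlexNet-style PCA color augmentation: shift every pixel along
        # the principal axes of RGB space; img is (3, H, W) in [0, 1],
        # and sigma is my own placeholder for the noise scale
        alpha = torch.randn(3) * sigma      # per-image random coefficients
        shift = eigvec.t() @ alpha          # resulting RGB offset
        return (img + shift.view(3, 1, 1)).clamp(0.0, 1.0)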

Another question is why different distributions are used for the first channel and the last two channels. In the code:

    // ladd_pow: sampled with a uniform_bernoulli distribution
    if (aug.has_ladd_pow())
        coeff.set_pow_nomean0(caffe_rng_generate<Dtype,float>(aug.ladd_pow(), discount_coeff));
    // col_pow: the two color channels use a gauss_bernoulli distribution
    if (aug.has_col_pow()) {
        coeff.set_pow_nomean1(caffe_rng_generate<Dtype,float>(aug.col_pow(), discount_coeff));
        coeff.set_pow_nomean2(caffe_rng_generate<Dtype,float>(aug.col_pow(), discount_coeff));
    }

one uses a uniform_bernoulli distribution while the other two use a gauss_bernoulli distribution. Is this some conventional operation from an image-processing method?
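
For what it's worth, my current reading of these rand_type fields is the following (purely my interpretation; the fallback to the mean when the Bernoulli gate is off is a guess on my part):

    import math
    import random

    def sample_coeff(rand_type: str, mean: float, spread: float,
                     prob: float, exp: bool) -> float:
        # my interpretation: a Bernoulli(prob) gate over a uniform or
        # Gaussian sample, optionally exponentiated afterwards
        if random.random() >= prob:
            value = mean  # assumed neutral fallback when not applied
        elif rand_type == "uniform_bernoulli":
            value = random.uniform(mean - spread, mean + spread)
        elif rand_type == "gaussian_bernoulli":
            value = random.gauss(mean, spread)
        else:
            raise ValueError(f"unknown rand_type: {rand_type}")
        return math.exp(value) if exp else value

Under that reading, the ladd_add block above would correspond to sample_coeff("gaussian_bernoulli", 0, 0.04, 1.0, False).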

nikolausmayer commented 6 years ago

The chromatic eigenvectors are precomputed for RGB-space, I think. They were used in the first FlowNet paper, along with those peculiar distributions. I recommend asking Alexey Dosovitskiy; he should be able to tell you more about how exactly they work :)

chenchr commented 6 years ago

Thanks for your reply. I implemented some RGB-value augmentations, except for the chromatic eigenvector augmentation, and the trained model's EPE on Sintel has dropped to about 4.6.

nikolausmayer commented 6 years ago

(issue seems to have been resolved; reply to reopen if the issue persists)

yyrkoon27 commented 6 years ago

Sir,

> For our best networks, we pretrain using "long" and then finetune using "fine".

Therefore, your pretrained model for FlowNet2-SD, which is trained on ChairsSDHom and doesn't need finetuning, was trained with the "long" solver. Is my understanding correct?

Thank you very much :-)

nikolausmayer commented 6 years ago

@yyrkoon27 Yes, FlowNet2-SD is trained only on ChairsSDHom and only using the long schedule.

xianshunw commented 5 years ago

> Thanks for your reply. I implemented some RGB-value augmentations, except for the chromatic eigenvector augmentation, and the trained model's EPE on Sintel has dropped to about 4.6.

@chenchr Which dataset did you train on to get this result?