lmb-freiburg / flownet2

FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
https://lmb.informatik.uni-freiburg.de/Publications/2017/IMKDB17/

Sintel pre trained model. #127

Closed TusharNimbhorkar closed 6 years ago

TusharNimbhorkar commented 6 years ago

Hey

I was wondering how you fine-tuned FlowNet2 on Sintel (how many image pairs?).

Also, from what I understood, you took FlowNet2-CSS and fine-tuned it on Sintel. (With which solver.prototxt?)

TusharNimbhorkar commented 6 years ago

Also, any idea why RAM usage explodes during training (30 GB+)? Maybe it's a problem with LMDB.

(See civondrick's comment: https://github.com/BVLC/caffe/issues/2121#issuecomment-82995998)

Also, I was wondering if LevelDB is supported for the training.

nikolausmayer commented 6 years ago

The Sintel net was fine-tuned using solver_S_fine.prototxt (in the models folder).

We don't use LMDB or LevelDB, so I cannot answer your second question. We use our own in-house formats.

TusharNimbhorkar commented 6 years ago

Thanks. I suppose that in-house format is not public, right?

Also, what parameters did you change in train.prototxt when fine-tuning on Sintel?

One more thing: is the loss given per batch? I.e., for an image pair, EPE = loss / batch_size?

nikolausmayer commented 6 years ago

No, it's not public.

The only parameter change is the crop size, which is adjusted to each dataset (Sintel has a different aspect ratio than FlyingThings3D). Rules of thumb: (1) largest multiple of 64 in each dimension that still fits into the image; (2) allow for some slack for augmentation.
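As a sketch of that rule of thumb (the helper name and the slack value are mine, not from the repo):

```python
def crop_size(dim, multiple=64, slack=0):
    """Largest multiple of `multiple` that still fits into `dim - slack`.

    `slack` reserves a margin so spatial augmentations (shifts, rotations)
    don't push the crop outside the image.
    """
    return ((dim - slack) // multiple) * multiple

# Sintel frames are 1024x436:
#   without slack -> 1024 x 384
#   with a 32 px margin -> 960 x 384
print(crop_size(1024), crop_size(436))
print(crop_size(1024, slack=32), crop_size(436, slack=32))
```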

I think the loss is averaged to "per sample".

TusharNimbhorkar commented 6 years ago

Thanks.

Are you sure about the loss? It doesn't reflect the results from the paper. When I test on Sintel final, the loss is around: FlowNet2-CSS: 20, FlowNet2-Sintel: 50, FlowNet2-S: 56 (batch size 8 for testing, 80 test iterations). Although flow_loss6 is in the region of <10 (after multiplying by 0.32).

nikolausmayer commented 6 years ago
  1. I missed that in your first post: our paper says "We fine-tuned FlowNet2 on a mixture of Sintel clean+final training data" (Sec 6.1), and "We denote the final network as FlowNet2" (Sec 5.2). So it's not FlowNet2-CSS, it's the whole "FlowNet2-CSS-ft-sd + FlowNet2-SD + fusion" stack.

  2. There are both EPE and L^p_q loss elements; I think I missed that (it differs between network configurations). There should be a "fuse_flow_loss0_epe" loss layer on the output of the fusion network.

  3. I'm not sure about the batch thing. But that should be easy to test, right? Just vary the batch size and look at the numbers?
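A minimal sketch of what that check would compare (the helper and the batch-last-channel layout are my assumptions, not the repo's code): compute the mean endpoint error yourself and see whether the logged loss matches it directly (averaged per sample) or only after dividing by the batch size (summed per batch).

```python
import numpy as np

def endpoint_error(pred, gt):
    """Mean endpoint error: per-pixel L2 distance between two flow
    fields of shape (batch, H, W, 2), averaged over all pixels --
    i.e. already normalized per sample."""
    return float(np.mean(np.sqrt(np.sum((pred - gt) ** 2, axis=-1))))

# If the logged loss were instead summed over the batch, the
# per-sample value would be recovered as: logged_loss / batch_size
```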

nikolausmayer commented 6 years ago

(closed due to inactivity)