Closed: lan1991 closed this issue 6 years ago
Hi, for "FlowNet2-ft-kitti" we fine-tuned the entire network. We did not use any other data, only our usual augmentation as described in the paper. You're right that there are very few samples; that's also why we cannot train on KITTI from scratch. Our loss and backpropagation simply ignore invalid GT pixels.
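For anyone landing here later, the "ignore invalid GT pixels" idea can be sketched as a masked endpoint-error loss. This is a minimal NumPy illustration, not the actual Caffe layer; the function name `masked_epe_loss` and the array shapes are my own assumptions:

```python
import numpy as np

def masked_epe_loss(pred_flow, gt_flow, valid_mask):
    """Average endpoint error over valid ground-truth pixels only.

    pred_flow, gt_flow: (H, W, 2) arrays of per-pixel flow vectors.
    valid_mask: (H, W) boolean array, True where the sparse GT is valid.
    Invalid pixels contribute neither to the loss nor (in a framework
    with autograd) to the gradients.
    """
    diff = pred_flow - gt_flow
    epe = np.sqrt((diff ** 2).sum(axis=-1))  # per-pixel endpoint error
    n_valid = valid_mask.sum()
    if n_valid == 0:
        return 0.0  # no valid GT in this sample; skip it
    return float(epe[valid_mask].sum() / n_valid)
```

The same effect is usually achieved in practice by multiplying the per-pixel loss with the validity mask before averaging, so invalid pixels produce zero gradient.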
Hi, thanks for your reply! Can you provide the training network prototxt? The downloadable training prototxt template for the FlowNet2 model has blobs like "img0_b" and "img1_b" whose exact meaning I don't know. Is the fine-tuning strategy for FlowNet2-ft-kitti (e.g. learning rate, iterations, ...) the same as in Solver_fine?
That prototxt file really is the one we were using to train the networks. There are multiple input blobs because we were mixing datasets at some point. Eddy Ilg wanted to upload a more usable version; I'd humbly suggest you contact him directly :)
Thanks! @nikolausmayer
Hi,
Does fine-tuning on KITTI mean fine-tuning only the parameters of the fusion network? The KITTI dataset provides only 394 flow ground-truth images in total; did you use additional training data when training FlowNet2-ft-kitti? Also, how did you use the sparse ground-truth data to train the network?
Thanks!