lmb-freiburg / flownet2

FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
https://lmb.informatik.uni-freiburg.de/Publications/2017/IMKDB17/
1k stars · 318 forks

Question about the paper #173

Closed AliKafaei closed 5 years ago

AliKafaei commented 6 years ago

Dear sir, I am trying to implement FlowNetSimple and I encountered an ambiguity in the paper (the paper is solid and well written, but I did not understand one section). My problem is with the refinement layers. In these layers, the upconvolved data, the data from the contracting part, and the prediction from the previous layer are concatenated. The paper says that all layers have ReLU activations, but with a ReLU activation function, can we get negative displacements? Another question: the arXiv version says "upsampling" of the prediction, while the other version says "upconvolution" — are these the same? Best, Ali, PhD student, Concordia University Perform Centre

nikolausmayer commented 6 years ago

The prediction layers do not have an activation. After the concatenation, there is another convolution which can make use of negative displacement values before the next activation layer.
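To make this concrete, here is a minimal sketch (with made-up numbers, not the actual FlowNet weights) of why an activation-free prediction layer can output negative displacements: it is just a convolution, so its output range is unrestricted.

```python
import numpy as np

# Hypothetical 1-D slice of an intermediate feature map and a learned
# prediction-layer kernel (values invented for illustration).
feature = np.array([0.3, -1.2, 0.8])
weights = np.array([-0.5, 1.0, -0.5])

# One output "pixel" of predicted flow: a plain dot product / convolution
# tap with NO activation applied afterwards, so negative values survive
# and can encode leftward/upward motion.
flow_x = float(np.dot(feature, weights))
print(flow_x)  # a negative displacement value
```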

The ReLUs in the network are "leaky ReLU" with negative slope 0.1 anyway, so negative values would not be entirely lost.

The paper may refer to "upconvolution" layers as upsampling. The FlowNetSimple's final output is half-resolution, and there we used actual bilinear upsampling to get to the full resolution. So it might refer to the same thing depending on what layer it is talking about :wink:

AliKafaei commented 6 years ago

Thanks for the support :). The point that is still vague: to concatenate the low-resolution prediction in the refinement section, is plain upsampling (with no learnable parameters) used, or transposed convolution (sometimes called deconvolution)? Upsampling has no learnable parameters (inserting zeros and then low-pass filtering), while transposed convolution / deconvolution has learnable weights.

nikolausmayer commented 6 years ago

Those are "Deconvolution" layers. Unlearned upsampling is only used at test time (i.e., for the final half-resolution-to-full-resolution step).
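A 1-D numpy sketch of the distinction being drawn here (hypothetical shapes, not the actual FlowNet layer configuration): both operations start by inserting zeros between samples, but only the transposed convolution has a learned kernel.

```python
import numpy as np

def zero_insert_upsample_1d(x, factor=2):
    """Unlearned upsampling step: insert zeros between input samples.
    A FIXED low-pass filter would typically follow -- no trainable weights."""
    up = np.zeros(len(x) * factor)
    up[::factor] = x
    return up

def transposed_conv_1d(x, kernel, stride=2):
    """Transposed ("de-")convolution: the same zero insertion, followed by
    a convolution whose kernel IS learned during training."""
    up = zero_insert_upsample_1d(x, stride)
    return np.convolve(up, kernel, mode="same")

# With the fixed triangular kernel [0.5, 1, 0.5], the transposed conv
# behaves like linear interpolation between the input samples:
print(transposed_conv_1d(np.array([1.0, 2.0]), np.array([0.5, 1.0, 0.5])))
```

This is also why a transposed convolution can be initialized with a bilinear kernel and then fine-tuned: upsampling is the special case where the kernel stays fixed.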

AliKafaei commented 6 years ago

As you said, the prediction layers do not have any activation functions. What is the kernel size of these prediction layers?

nikolausmayer commented 6 years ago

The *deploy.prototxt* files in the models folder contain that information.

nikolausmayer commented 5 years ago

(closed due to inactivity)