NVIDIA / flownet2-pytorch

PyTorch implementation of FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks

Resample2D and ChannelNorm packages #3

Open · ClementPinard opened this issue 6 years ago

ClementPinard commented 6 years ago

Hello,

When reading your code, it seemed to me that two of your packages are already implemented in the standard PyTorch library. Specifically, is channelnorm different from computing torch.norm(input, p=2, dim=1, keepdim=True)? (Maybe the custom kernel is faster?)
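For concreteness, here is a minimal sketch of the comparison I have in mind (this assumes channelnorm computes an L2 norm over the channel dimension, which may not match your kernel exactly):

```python
import torch

x = torch.randn(2, 3, 8, 8)  # (N, C, H, W)

# L2 norm over the channel dimension, keeping a singleton channel
out = torch.norm(x, p=2, dim=1, keepdim=True)  # -> (2, 1, 8, 8)

# equivalent explicit form
out_manual = torch.sqrt((x ** 2).sum(dim=1, keepdim=True))
assert torch.allclose(out, out_manual)
```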

And is resample2d different from grid_sample? I think grid_sample is a recent feature though (added in 0.2.0). It obviously doesn't take a flow field as input, but it could be used together with affine_grid (with an identity affine matrix to get the base sampling grid) and some easy normalization. It might be a little slower, though.
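A minimal sketch of the base grid I mean (the shapes and the identity theta are illustrative, not code from this repo):

```python
import torch
import torch.nn.functional as F

n, c, h, w = 1, 3, 4, 5

# identity affine transform -> base sampling grid in grid_sample's [-1, 1] range
theta = torch.tensor([[1., 0., 0.],
                      [0., 1., 0.]]).unsqueeze(0).expand(n, 2, 3)
base_grid = F.affine_grid(theta, (n, c, h, w))  # (N, H, W, 2)
```

Flow normalized to the same [-1, 1] coordinates could then be added to base_grid before calling grid_sample.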

Thanks!

fitsumreda commented 6 years ago

Hi there,

For some reason, torch.norm() leads to NaNs during the backward pass, so I had to implement a CUDA kernel for channelnorm.
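A likely cause, for anyone hitting the same issue: the L2 norm is not differentiable at zero, so an all-zero channel vector yields a NaN gradient. A minimal sketch of the failure mode and the usual epsilon workaround (not the kernel used here):

```python
import torch

# an all-zero channel vector makes the norm's backward undefined (0 / 0)
x = torch.zeros(1, 3, 2, 2, requires_grad=True)
torch.norm(x, p=2, dim=1, keepdim=True).sum().backward()
print(x.grad)  # NaNs in the PyTorch versions of that era

# an epsilon inside the square root keeps the gradient finite
y = torch.zeros(1, 3, 2, 2, requires_grad=True)
torch.sqrt((y ** 2).sum(dim=1, keepdim=True) + 1e-8).sum().backward()
print(y.grad)  # zeros
```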

For resample2d, it is possible to use grid_sample if you are learning absolute sampling indices. Since optical flow networks learn pixel displacements, you'll need to create a sampling grid with the same size as the flow map and add it to the flow map before calling grid_sample. Also, PyTorch currently doesn't have a direct way of creating such a grid: several functions have to be composed to build it, and the grid tensor needs to be kept around for every flow-map spatial size, which makes the whole operation suboptimal.
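To make that concrete, here is a rough sketch of what such a grid_sample-based warp could look like (this assumes pixel-unit flow with channel 0 horizontal and channel 1 vertical, and rebuilds the grid on every call; it is not the resample2d kernel itself):

```python
import torch
import torch.nn.functional as F

def warp(im, flow):
    # im: (N, C, H, W); flow: (N, 2, H, W) pixel displacements
    # (assumed convention: flow[:, 0] is horizontal, flow[:, 1] is vertical)
    n, c, h, w = im.shape

    # base grid of absolute pixel coordinates, rebuilt on every call
    ys, xs = torch.meshgrid(torch.arange(h, dtype=im.dtype, device=im.device),
                            torch.arange(w, dtype=im.dtype, device=im.device))
    base = torch.stack((xs, ys), dim=-1)                 # (H, W, 2)

    # displaced sampling positions
    grid = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)  # (N, H, W, 2)

    # normalize absolute coordinates into grid_sample's [-1, 1] range
    gx = 2.0 * grid[..., 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * grid[..., 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(im, torch.stack((gx, gy), dim=-1))
```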

So, I implemented a resample2d custom layer that takes care of all of this.

Thanks!

sniklaus commented 6 years ago

I have ported "Optical Flow Estimation using a Spatial Pyramid Network" without a custom resample2d, despite learning pixel displacements. If you are curious how I used grid_sample, feel free to have a look: https://github.com/sniklaus/pytorch-spynet

Specifically: https://github.com/sniklaus/pytorch-spynet/blob/master/run.py#L125

fitsumreda commented 6 years ago

@sniklaus Part of the reason we used resample2d is that it was implemented before PyTorch 0.2 was released. It also makes the code a little clearer, as it can be used just like any other PyTorch layer.

sniklaus commented 6 years ago

I initially wrote my own implementation as well. My reason for switching to grid_sample is that the official PyTorch implementation is tested more thoroughly. Anyway, huge thanks for putting this out there!