In the original FlowNet implementation there is batch normalization after every conv layer. In FloatNetC.py, I don't find any batch_norm layer. Any specific reason?
Okay, so I was looking at NVIDIA's PyTorch implementation of FlowNet 2.0. They used batch_norm there, so I assumed the same for FlowNet 1.0, but that is not the case.
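For anyone else comparing the two, the difference comes down to whether a batch-norm layer is inserted between the convolution and the activation. Below is a minimal sketch of that conditional pattern in PyTorch; the helper name `conv`, its defaults, and the LeakyReLU slope are illustrative assumptions in the style of flownet2-pytorch, not the exact source:

```python
import torch
import torch.nn as nn

def conv(batch_norm, in_planes, out_planes, kernel_size=3, stride=1):
    # Conditionally insert BatchNorm2d after the convolution.
    # (Helper name and defaults are illustrative, not the exact source.)
    if batch_norm:
        return nn.Sequential(
            nn.Conv2d(in_planes, out_planes, kernel_size, stride,
                      padding=(kernel_size - 1) // 2, bias=False),
            nn.BatchNorm2d(out_planes),
            nn.LeakyReLU(0.1, inplace=True),
        )
    return nn.Sequential(
        nn.Conv2d(in_planes, out_planes, kernel_size, stride,
                  padding=(kernel_size - 1) // 2, bias=True),
        nn.LeakyReLU(0.1, inplace=True),
    )

# With batch_norm=False (the FlowNet 1.0-style configuration),
# no BatchNorm2d layer appears in the block.
block = conv(batch_norm=False, in_planes=3, out_planes=64)
has_bn = any(isinstance(m, nn.BatchNorm2d) for m in block)
```

Note that when batch norm is enabled the conv bias is typically dropped (`bias=False`), since the subsequent BatchNorm2d re-centers the activations anyway.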