ivannz / cplxmodule

Complex-valued neural networks for pytorch and Variational Dropout for real and complex layers.
MIT License

Feature suggestion: naive convolution: Gauss trick #20

Open pfeatherstone opened 3 years ago

pfeatherstone commented 3 years ago

How about using this for the naive convolution to reduce 4 convs down to 3?
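For context, the suggestion is Gauss's three-multiplication formula for complex products, (a + ib)(c + id) = (t1 - t2) + i(t3 - t1 - t2) with t1 = ac, t2 = bd, t3 = (a + b)(c + d), which carries over to convolution because conv is bilinear. A minimal sketch of the idea (the name gauss_conv1d and the direct use of torch.nn.functional are illustrative, not cplxmodule's API):

```python
import torch
import torch.nn.functional as F

def gauss_conv1d(a, b, c, d):
    """Complex conv of (a + ib) with kernel (c + id) in 3 real convs.

    a, b: real/imag input, shape (N, C_in, L);
    c, d: real/imag kernel, shape (C_out, C_in, K).
    The naive version needs 4 convs: re = a*c - b*d, im = a*d + b*c.
    """
    t1 = F.conv1d(a, c)           # a * c
    t2 = F.conv1d(b, d)           # b * d
    t3 = F.conv1d(a + b, c + d)   # (a + b) * (c + d)
    return t1 - t2, t3 - t1 - t2  # re, im

# quick check against the naive 4-conv formula
a, b = torch.randn(2, 8, 3, 32)
c, d = torch.randn(2, 16, 3, 5)
re, im = gauss_conv1d(a, b, c, d)
assert torch.allclose(re, F.conv1d(a, c) - F.conv1d(b, d), atol=1e-4)
assert torch.allclose(im, F.conv1d(a, d) + F.conv1d(b, c), atol=1e-4)
```

The extra additions (a + b, c + d, and the double subtraction) are where the precision trade-off discussed below creeps in.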

ivannz commented 3 years ago

@pfeatherstone why not) Although, as the linked wiki states:

There is a trade-off in that there may be some loss of precision when using floating point. So faster convolutions come at a cost.

cplxmodule currently uses cplx.conv_quick for most convolutions (the non-grouped ones), which makes two calls to the underlying real-valued conv at the cost of extra concatenation and slicing steps and, hence, extra copying and memory.
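The two-call idea can be sketched roughly like this (hypothetical helper name; not necessarily how cplx.conv_quick is actually written, but it matches the description above: two conv calls plus cat/slice copies):

```python
import torch
import torch.nn.functional as F

def conv1d_two_calls(a, b, c, d):
    """Complex conv via 2 real convs on batch-concatenated inputs."""
    ab = torch.cat([a, b], dim=0)  # stack re/im along the batch dim: (2N, C_in, L)
    ab_c = F.conv1d(ab, c)         # [a*c ; b*c]
    ab_d = F.conv1d(ab, d)         # [a*d ; b*d]
    n = a.shape[0]
    re = ab_c[:n] - ab_d[n:]       # a*c - b*d
    im = ab_d[:n] + ab_c[n:]       # a*d + b*c
    return re, im
```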

On the other hand, cplxmodule currently uses the naïve four-op implementation for cplx.linear, although I've got both the Gauss-trick and the concatenation versions implemented and tested as well.
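For the linear case the three-multiplication algebra reads as follows (again only a sketch with an illustrative name, not the code shipped in cplxmodule):

```python
import torch.nn.functional as F

def gauss_linear(a, b, C, D):
    """Complex linear map (a + ib) @ (C + iD)^T in 3 matmuls instead of 4.

    a, b: (N, in_features); C, D: (out_features, in_features).
    """
    t1 = F.linear(a, C)           # a @ C^T
    t2 = F.linear(b, D)           # b @ D^T
    t3 = F.linear(a + b, C + D)   # (a + b) @ (C + D)^T
    return t1 - t2, t3 - t1 - t2  # re, im
```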

Unfortunately, I did not design a convenient mechanism in cplxmodule for swapping the underlying kernels used by the layers' operations, so for now the selection is hardcoded to specific implementations (linear, bilinear, and transposed conv).

ivannz commented 3 years ago

I have just pushed a commit to master implementing and testing this. However, please keep in mind the last paragraph of my previous response: currently you will have to manually change a couple of lines in cplxmodule/cplx.py.