v0lta / PyTorch-Wavelet-Toolbox

Differentiable fast wavelet transforms in PyTorch with GPU support.
https://pytorch-wavelet-toolbox.readthedocs.io
European Union Public License 1.2
279 stars · 36 forks

Incorporate learnable wavelet into `ptwt.nn` submodule #79

Closed cthoyt closed 8 months ago

cthoyt commented 8 months ago

This example is great: https://github.com/v0lta/PyTorch-Wavelet-Toolbox/blob/main/examples/network_compression/wavelet_linear.py. Since it corresponds to a published architecture, why not incorporate it directly into the toolbox?

v0lta commented 8 months ago

This is a nice idea!

v0lta commented 8 months ago

But for the moment, wavelet learning is implemented with soft constraints, which means the optimizer can deviate significantly from proper wavelets. If not monitored, this can lead to very strange results. We should add warnings to the documentation to make sure people understand that this is less established territory, where things might change in the future.
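For context, a minimal sketch of what "soft constraints" can mean here (a hypothetical formulation, not the toolbox's actual loss): penalize the squared deviation of a learnable scaling filter from the orthogonal-wavelet conditions, so the optimizer is nudged toward, but not forced onto, a proper wavelet.

```python
import numpy as np


def soft_wavelet_penalty(h: np.ndarray) -> float:
    """Penalty that is zero iff h satisfies the orthogonal
    scaling-filter conditions: sum(h) == sqrt(2), and the
    even-lag autocorrelation of h equals a unit impulse."""
    # Admissibility: the filter coefficients must sum to sqrt(2).
    sum_term = (h.sum() - np.sqrt(2.0)) ** 2
    # Orthonormality: <h, shift(h, 2m)> == delta(m) for every even lag.
    orth_term = 0.0
    for m in range(len(h) // 2):
        lag = 2 * m
        corr = np.dot(h[: len(h) - lag], h[lag:])
        target = 1.0 if m == 0 else 0.0
        orth_term += (corr - target) ** 2
    return float(sum_term + orth_term)


haar = np.array([1.0, 1.0]) / np.sqrt(2.0)  # a proper wavelet filter
drifted = haar + 0.1                        # what an optimizer might produce

print(soft_wavelet_penalty(haar))     # close to 0.0
print(soft_wavelet_penalty(drifted))  # strictly positive
```

Added to a task loss with some weight, such a penalty only discourages, rather than prevents, drifting away from a valid filter bank, which is exactly why unmonitored training can end up far from a proper wavelet.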

v0lta commented 8 months ago

I have given this more thought and think we should not do this with the current layer. The approach is inspired by https://openaccess.thecvf.com/content_iccv_2015/papers/Yang_Deep_Fried_Convnets_ICCV_2015_paper.pdf, and the layer is a compressed one: it works in the two-fully-connected-layer classifier setting that was popular until ResNets got rid of those two layers completely. I am not sure the layer formulation we have there is relevant for users today.

It does, however, make sense as an example, where people can learn about one way an adaptive wavelet layer could work. I don't think this specific layout applies to modern networks, so packaging it is not necessarily beneficial.

v0lta commented 8 months ago

We could, however, rename the `learnable_code` module to `nn`. I like the name because it follows the naming conventions in the community. Is there a backwards-compatible way to do this?
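One common pattern for such a rename is to keep the old module name alive as a thin deprecation shim that forwards to the new module. A self-contained sketch (the `ptwt.nn` / `WaveletLayer` names below are stand-ins for illustration, not the real package contents):

```python
import sys
import types
import warnings

# Hypothetical new home for the layers; stands in for a real `ptwt.nn`.
nn = types.ModuleType("ptwt.nn")
nn.WaveletLayer = type("WaveletLayer", (), {})  # placeholder class
sys.modules["ptwt.nn"] = nn


class _DeprecatedAlias(types.ModuleType):
    """Old module name that forwards attribute access to the new module."""

    def __getattr__(self, name):
        warnings.warn(
            "ptwt.learnable_code is deprecated; use ptwt.nn instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return getattr(nn, name)


# Register the alias so the old import path keeps resolving.
sys.modules["ptwt.learnable_code"] = _DeprecatedAlias("ptwt.learnable_code")

# Old code still works, but attribute access now emits a DeprecationWarning.
old = sys.modules["ptwt.learnable_code"]
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    layer_cls = old.WaveletLayer

assert layer_cls is nn.WaveletLayer
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

In a real package this would more likely be a one-file `learnable_code.py` that re-imports from the new `nn` module and warns at import time; the `sys.modules` registration above just keeps the sketch runnable without creating files.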

v0lta commented 8 months ago

Let's not do this for now. We can revisit the issue when more ML-related material becomes available.