cthoyt closed this issue 9 months ago.
This is a nice idea!
But for the moment, wavelet learning is implemented with soft constraints, which means the optimizer can deviate significantly from proper wavelets. If not monitored carefully, this can lead to very strange results. We should add warnings to the documentation to make sure people understand that this is less established territory, where things might change in the future.
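For illustration, here is a minimal sketch of what such a soft constraint can look like (a hypothetical stand-alone module, not the toolbox's actual implementation): the filter conditions an orthogonal wavelet must satisfy are turned into penalty terms instead of being enforced exactly, so nothing stops the optimizer from trading them off against the task loss.

```python
import math

import torch


class SoftWaveletLoss(torch.nn.Module):
    """Penalize deviation of a low-pass filter ``h`` from a proper
    orthogonal wavelet (hypothetical sketch, not the toolbox's code).

    Orthogonal wavelet filters satisfy
        sum_k h[k]             = sqrt(2)      (admissibility)
        sum_k h[k] * h[k + 2m] = delta(m)     (orthonormal even shifts)
    Soft constraints add the squared violations to the objective
    instead of enforcing the equations exactly.
    """

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # admissibility: the filter taps should sum to sqrt(2).
        loss = (h.sum() - math.sqrt(2.0)) ** 2
        # unit norm: <h, h> = 1.
        loss = loss + (h @ h - 1.0) ** 2
        # even shifts of h must be orthogonal to h itself.
        for m in range(1, h.shape[0] // 2):
            loss = loss + (h[: -2 * m] @ h[2 * m :]) ** 2
        return loss
```

The penalty would be added to the task loss with some weight, e.g. `loss = task_loss + 0.1 * SoftWaveletLoss()(dec_lo)` (names hypothetical); if the weight is too small, the learned filters can drift far from any proper wavelet, which is exactly the failure mode described above.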
I have given this more thought and think we should not do this with the current layer. The reason is that the approach is inspired by https://openaccess.thecvf.com/content_iccv_2015/papers/Yang_Deep_Fried_Convnets_ICCV_2015_paper.pdf, and the layer is compressed: it works in the two-fully-connected-layer classifier setting that was popular until ResNets got rid of those two layers entirely. I am not sure the layer formulation we have there is relevant for users today.
It does, however, make sense as an example where people can learn about one way an adaptive wavelet layer could work. I don't think this specific layout applies to modern networks, so I don't think packaging it is necessarily beneficial.
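To make the idea concrete, here is a condensed, hypothetical sketch of such a compressed linear layer (all names made up; a fixed single-level Haar matrix stands in for the learnable wavelet transform used in the actual example): the dense n-by-n weight matrix is replaced by elementwise scalings sandwiched between a forward and an inverse wavelet transform, dropping the parameter count from O(n^2) to O(n).

```python
import math

import torch


def haar_matrix(n: int) -> torch.Tensor:
    """Single-level Haar analysis matrix (n must be even); orthogonal."""
    w = torch.zeros(n, n)
    s = 1.0 / math.sqrt(2.0)
    for i in range(n // 2):
        w[i, 2 * i], w[i, 2 * i + 1] = s, s                     # low-pass
        w[n // 2 + i, 2 * i], w[n // 2 + i, 2 * i + 1] = s, -s  # high-pass
    return w


class CompressedWaveletLinear(torch.nn.Module):
    """Deep-Fried-style compressed 'linear' layer (hypothetical sketch).

    The dense n-by-n weight matrix is replaced by
    diag(s) @ W.T @ diag(g) @ W @ diag(b), i.e. elementwise scalings
    around a wavelet transform, so only 3n parameters are trained.
    """

    def __init__(self, n: int) -> None:
        super().__init__()
        self.register_buffer("w", haar_matrix(n))
        self.b = torch.nn.Parameter(torch.ones(n))
        self.g = torch.nn.Parameter(torch.ones(n))
        self.s = torch.nn.Parameter(torch.ones(n))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.b      # scale in the signal domain
        x = x @ self.w.T    # forward wavelet transform
        x = x * self.g      # scale in the wavelet domain
        x = x @ self.w      # inverse transform (W is orthogonal)
        return x * self.s
```

Because the Haar matrix is orthogonal, the inverse transform is just the transpose; in the real example the transform itself carries learnable filters, which is where the soft constraints above come in.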
We could, however, call the learnable code module `nn`. I like the name because it follows the naming conventions in the community. Is there a backwards-compatible way to do this?
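One common backwards-compatible pattern (sketched here with hypothetical module names, assuming the current module is called `learnable`) is to move the code to `nn` and keep the old import path alive as a deprecation shim:

```python
# hypothetical shim, assuming the code moves from ptwt/learnable.py
# to ptwt/nn.py: the old file re-exports the new module with a warning.
import warnings

from .nn import *  # noqa: F401,F403

warnings.warn(
    "ptwt.learnable has been renamed to ptwt.nn; "
    "the old import path will be removed in a future release.",
    DeprecationWarning,
    stacklevel=2,
)
```

Old `from ptwt.learnable import ...` statements keep working but emit a `DeprecationWarning`, and the shim can be deleted in a later major release.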
Let's not do this for now. We can revisit the issue when more ML-related material becomes available.
This example is great: https://github.com/v0lta/PyTorch-Wavelet-Toolbox/blob/main/examples/network_compression/wavelet_linear.py. Since it corresponds to a published architecture, why not incorporate it directly into the toolbox?