In the current initial implementation of the wavelet transform, we interpret the output simply as an array in an `Fn` space, either real or complex. While simple, this solution has some notable drawbacks:
- It does not properly represent a discretization of the continuous wavelet transform (CWT); at the very least, the space structure on the range side is missing.
- The hierarchical structure of the wavelet coefficients is not mirrored. This makes it hard to access the subset of coefficients corresponding to a given detail level without helper functions.
- A visualization (= plot in this case) is meaningless in more than 1 dimension due to the flattening. Even in 1D (is that supported at all??), it would be hard to tell where one set of coefficients stops and the next one starts.
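To make the access problem concrete, here is a minimal sketch (plain NumPy, with hypothetical block sizes) of the offset bookkeeping that a flat layout forces on us when extracting a single coefficient block:

```python
import numpy as np

# Hypothetical 2-level 1D decomposition of a length-8 signal:
# flat layout is [approx(2), detail level 2 (2), detail level 1 (4)].
flat = np.arange(8, dtype=float)

# Without stored structure, block offsets must be recomputed by hand
# (this is exactly what our current helper functions do for us).
sizes = [2, 2, 4]                 # approx, detail level 2, detail level 1
offsets = np.cumsum([0] + sizes)  # [0, 2, 4, 8]

approx = flat[offsets[0]:offsets[1]]
detail2 = flat[offsets[1]:offsets[2]]
detail1 = flat[offsets[2]:offsets[3]]
```

Every consumer of the coefficients needs this bookkeeping, or a helper that encapsulates it.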
## Suggestion
Before doing anything mathematically advanced (although I'm pretty sure that the CWT is a mapping between adequate `L^p` spaces - for `L^2` it is even totally obvious), I suggest that we at least make the range a `ProductSpace`, with one component for each detail level and one for the approximation coefficients. That way, one could display those arrays nicely without manual reshaping.
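As a minimal sketch of the idea, with a plain list of NumPy arrays standing in for the `ProductSpace` element (the shapes and the (H, V, D) detail grouping are hypothetical, just for illustration):

```python
import numpy as np

# Stand-in for a ProductSpace element: one block per component.
# For a 2-level 2D transform the components could be, e.g.,
# the approximation plus one detail triplet (H, V, D) per level.
coeffs = [
    np.zeros((2, 2)),                           # approximation
    tuple(np.zeros((2, 2)) for _ in range(3)),  # level-2 details (H, V, D)
    tuple(np.zeros((4, 4)) for _ in range(3)),  # level-1 details (H, V, D)
]

# Each block keeps its natural shape, so it can be plotted directly,
# e.g. plt.imshow(coeffs[1][0]) for the level-2 horizontal detail.
```

The point is that each component carries its own shape, so no flattening and no manual reshaping is needed for display or per-level access.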
## What we need to find out
Which function spaces are adequate to represent the CWT? There is probably no single answer to this question, but there should be some canonical choice(s), at least for our purposes.
In which way is the DWT as implemented now a discretization of the CWT? How does it approximate the CWT integral? My strong suspicion is that it represents an approximation of the integral evaluated at scaling parameter 2^(-n), where n is (roughly) the detail level, with the translation parameter sampled such that at each level the same interval (in 1D) is covered. We need to look into this.
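For the Haar wavelet this suspicion is easy to check numerically. Assuming the convention W(a, b) = a^(-1/2) * Int x(t) psi((t - b)/a) dt, the finest-level discrete detail coefficients coincide with a Riemann sum of that integral at dyadic scale a = 2 and translations b = 2k (a sketch, not a general proof; other wavelets would only match approximately):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

# Discrete Haar detail coefficients at the finest level:
# d[k] = (x[2k] - x[2k+1]) / sqrt(2)
d_discrete = (x[0::2] - x[1::2]) / np.sqrt(2)

# Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere.
def haar(t):
    return np.where((0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1.0), -1.0, 0.0))

# Riemann-sum approximation of W(a, b) on the integer grid
# at scale a = 2 and translations b = 0, 2, 4, ...
a = 2.0
t = np.arange(len(x))
d_cwt = np.array([np.sum(x * haar((t - b) / a)) / np.sqrt(a)
                  for b in range(0, len(x), 2)])

# For Haar the two agree exactly, supporting the dyadic-scale reading.
```

Whether the analogous statement holds level by level, and for non-Haar filters, is exactly what we need to work out.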
pywavelets 0.5.0 adds functions to convert wavelet coefficients to and from a single array, however not linearly as we do, but as a block hierarchy in an nD array. They also return/take as input a list of slices for those blocks.
This could very well replace our current storage scheme and would "outsource" two of our current helpers.
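To illustrate the block-hierarchy idea (this is a hand-rolled sketch, not pywavelets' actual code or API): a 2-level 2D decomposition packed into a single array, with the approximation in the top-left corner, each level's detail blocks in the surrounding quadrants, and a slice pair stored per block:

```python
import numpy as np

# Hypothetical coefficient blocks of a 2-level decomposition of an
# 8x8 image; constant fill values just to tell the blocks apart.
a2 = np.full((2, 2), 0.0)                          # approximation
h2, v2, d2 = (np.full((2, 2), v) for v in (1.0, 2.0, 3.0))  # level 2
h1, v1, d1 = (np.full((4, 4), v) for v in (4.0, 5.0, 6.0))  # level 1

# Pack into one 8x8 array and record a slice pair per block.
packed = np.empty((8, 8))
slices = {
    'a2': (slice(0, 2), slice(0, 2)),
    'h2': (slice(0, 2), slice(2, 4)),
    'v2': (slice(2, 4), slice(0, 2)),
    'd2': (slice(2, 4), slice(2, 4)),
    'h1': (slice(0, 4), slice(4, 8)),
    'v1': (slice(4, 8), slice(0, 4)),
    'd1': (slice(4, 8), slice(4, 8)),
}
for key, blk in zip(slices, (a2, h2, v2, d2, h1, v1, d1)):
    packed[slices[key]] = blk

# Any block is recovered via its slice pair, e.g.
# packed[slices['h2']] is the level-2 horizontal detail.
```

Storing the slices alongside the packed array is what would let us drop our own offset-computing helpers.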
It also adds a continuous wavelet transform, at least in 1D. I'm not sure how useful that is for us, but since it's part of this issue, I mention it here.