Closed JohnLunzer closed 6 years ago
The DWT is basically repeated application of convolution with a scaling or wavelet filter, followed by decimation (downsampling) by two. The filterbank diagram on the Wikipedia DWT page illustrates this cascade of filter-and-downsample stages.
After a certain number of levels, the discrete signal has been decimated as far as possible (i.e. the decimated signal becomes shorter than the filter length).
Here is the actual C code for computing the maximum number of DWT levels: https://github.com/PyWavelets/pywt/blob/dd7b79332f6afac345be8dae3e6dffe2edb5facc/pywt/_extensions/c/common.c#L89-L94
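For readers who don't want to dig through the C, here is a rough Python sketch of the same idea (the function name `max_dwt_level` is mine, not the library's; treat this as an illustration of the logic, not the actual implementation):

```python
import math

def max_dwt_level(input_len, filter_len):
    # Each DWT level halves the signal length; decomposition is useful
    # only while the signal is at least as long as the filter, giving
    # floor(log2(input_len / (filter_len - 1))) levels.
    if filter_len < 2 or input_len < filter_len - 1:
        return 0
    return int(math.floor(math.log2(input_len / (filter_len - 1))))
```

For example, a length-8 signal with the 2-tap Haar filter supports 3 levels.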
@grlee77, thank you for documenting it up to that point. I had made it that far in my investigation.
I guess I was more interested in why this statement is true:
After a certain number of levels, the discrete signal has been decimated as far as possible (i.e. the decimated signal becomes shorter than the filter length).
If I do the DWT steps manually or manually do the convolution/downsampling and use periodization for signal extension what is it about the signal being shorter than the filter length that makes the result no longer valid/useful/correct?
I do not know of a quick tutorial/source showing this that I could point you to, but can give a basic example here:
Consider a signal of length 8. The lengths after each of 3 levels of decomposition with a Haar wavelet and the periodization edge mode will be 4, 2, and 1. Once you reach a length-one signal, convolution with the filter just produces a scaled copy of the filter and provides no new information about the signal. Clearly, you also could not decimate a length-one signal any further.
```python
>>> import pywt
>>> import numpy as np
>>> a = np.arange(8)
>>> pywt.wavedecn(a, wavelet='haar', mode='periodization')
[array([ 9.89949494]),
 {'d': array([-5.65685425])},
 {'d': array([-2., -2.])},
 {'d': array([-0.70710678, -0.70710678, -0.70710678, -0.70710678])}]
```
In practice, for other signal lengths and edge modes, the final coefficient array will not end up with exactly one element. The same reasoning still applies, however: once the signal is shorter than the filter, there is no point in repeating the convolution, because at that point the coefficients are determined mostly by the extension of the edges via whatever extension mode was selected, rather than by the signal itself.
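This dependence on the extension mode is easy to see directly. A small sketch (the choice of db2 and a length-2 signal is mine, purely for illustration): when the signal is shorter than the 4-tap filter, every output coefficient involves padded samples, so the result changes with the mode.

```python
import numpy as np
import pywt

# A length-2 signal filtered with db2 (4-tap filters): every output
# coefficient involves extended (padded) samples, so the result
# depends heavily on the chosen extension mode.
short = np.array([1.0, 2.0])
for mode in ('zero', 'symmetric', 'periodization'):
    cA, cD = pywt.dwt(short, 'db2', mode=mode)
    print(mode, cA, cD)
```

The approximation coefficients printed for 'zero' and 'symmetric' differ even though the underlying signal is identical.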
@grlee77, I've performed a similar experiment using the bior2.2 wavelet (whose analysis low-pass filter has 5 nonzero taps), doing each level of decomposition manually. I found it useful to print out the low-pass results at each level of decomposition. I also scaled the low-pass results so that the DC gain was unity, so each scaled version of the signal matched the previous decomposition level. Doing this, I was able to watch the signal scale down in the ways you might expect, until my decomposed low-pass signal became shorter than the filter length. At that point the coefficients began to represent the original signal poorly. There might be a flaw in there somewhere; I'm sort of poking in the dark at this point.
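The experiment described above might look roughly like the following (a sketch; dividing by the low-pass DC gain, which for bior2.2 is sqrt(2) per level, is my assumption about what "unity DC gain" scaling meant):

```python
import numpy as np
import pywt

# Decompose level by level with bior2.2, rescaling the approximation
# by the analysis low-pass DC gain so each level stays on the same
# scale as the original signal.
w = pywt.Wavelet('bior2.2')
gain = np.sum(w.dec_lo)          # DC gain of the analysis low-pass filter
a = np.linspace(0.0, 1.0, 32)
for level in range(1, 6):
    a, d = pywt.dwt(a, w, mode='periodization')
    print(level, len(a), a / gain ** level)
```

With periodization the approximation halves in length at each level (16, 8, 4, 2, 1), so by level 5 the signal is far shorter than the 6-coefficient bior2.2 filter bank.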
With my bior2.2 example I also diagrammed out signals of length 4, 5, and 6. Only when the signal length exceeds the filter size (signal length 6) do the filtered first and last samples of the signal no longer include the same sample(s) from the middle of the signal. Again, poking in the dark.
In both your example and mine, I still don't think we've answered the "why". My intuition tells me that some sampling theorem is involved, or that the word "Nyquist" should appear at some point.
Sorry to beat this point to death. @grlee77, I appreciate your efforts thus far.
This issue tracker is intended for reporting bugs (or requesting specific features) in PyWavelets, not for general support/discussion. I think this sort of general question is more suited to something like http://dsp.stackexchange.com/ or http://math.stackexchange.com. You might also try the google group: https://groups.google.com/d/forum/pywavelets, although it is not very highly active.
Certainly the documentation of the PyWavelets package could always be improved and any contributions toward that would be welcomed.
@grlee77, this could be a general question, but in this case it is based specifically on dwt_max_level, whose documentation does not elaborate on what a "useful" level is and gives no further explanation of why the algorithm is valid. PyWavelets' coding guidelines in CONTRIBUTING.md put a lot of emphasis on clear and complete documentation. By that guideline, the nearly nonexistent documentation (one sentence) for a fairly important function could easily be considered a bug.
Additionally, PyWavelets' CONTRIBUTING.md (which GitHub points to when starting a new issue) makes no mention that technical discussions are frowned upon or unwelcome. I see several issues in the tracker with the tags enhancement and documentation, both of which often involve technical discussion, sometimes quite involved.
Suddenly shutting down this discussion for the reason you gave doesn't seem valid. I would either change your guidelines to say that technical discussions should be avoided, or be more careful about shutting people down in the future. I think the first option would not reflect well on the project.
I think it is okay for you to tell me that you're not sure enough (math is hard) or not willing (due to limited bandwidth) to give a detailed explanation. If that is the case I will ask my question on stack exchange. If I find the answer I will gladly report it back here, because I use pywt all the time and want others to benefit from improvements.
Fair enough. I apologize that my response came across as more adversarial than intended. I did not mean to discourage discussion, but just thought that it may be better suited to the mailing list than an issue tracker. I don't personally have the bandwidth to spend more time on it today and there may be a better chance of getting a quicker response on stack exchange or the mailing list where there are potential readers.
Note that I have not closed the issue as I agree that the documentation here (as well as many other places) could stand to be improved.
Despite my previous statement re: bandwidth, I was still thinking about this...
FWIW, here is Matlab's documentation for the equivalent routine, although they don't explain the rationale there either: https://www.mathworks.com/help/wavelet/ref/wmaxlev.html
I think it is basically just that the decomposition stops whenever the signal would become smaller than the filter, because in that case there would be no coefficients at the next stage that are not "corrupted" by edge effects.
In looking at this I did find that PyWavelets forces the user not to exceed this maximum level when calling wavedec, whereas Matlab allows running wavedec for an arbitrary number of levels, and waverec can still reconstruct the original signal in those cases. In the Haar example above, this means the single coefficient gets duplicated to length 2 by padding, and the 2-element array is filtered again. The detail coefficient for that example is always a single floating-point 0.0 at every level beyond the "maximum" number of levels. So, although it is possible to keep iterating the filterbank down to an arbitrary number of levels thanks to the padding that occurs, I don't think there is any practical utility in doing so.
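Continuing the earlier arange(8) Haar example one level past the "maximum" shows this directly (a sketch using pywt.dwt on the level-3 approximation by hand): with periodization, the length-1 array is extended to [x, x] before filtering, so the Haar detail vanishes and the approximation is just sqrt(2) times x.

```python
import numpy as np
import pywt

# One level past the "maximum" for the arange(8) Haar example: the
# length-1 approximation [x] is periodically extended to [x, x], so
# the Haar detail coefficient is zero and the approximation is
# sqrt(2) * x.
x = np.array([9.89949494])   # level-3 approximation from the earlier example
cA, cD = pywt.dwt(x, 'haar', mode='periodization')
print(cA, cD)                # cA is about [14.0], cD is [0.0]
```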
What do you think about something like the following for the documentation of dwt_max_level? (Perhaps in the Notes section of the docstring.)
This routine returns the maximum number of levels of wavelet decomposition for a discrete signal. The rationale for this choice of levels is that it is the maximum level at which at least one coefficient in the output is uncorrupted by edge effects. Put another way, decomposition stops when the signal becomes shorter than the FIR filter length for the given wavelet.
It might also be worth explicitly stating that this corresponds to floor(log2(signal_length / (filter_length - 1))).
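The formula can be checked against pywt's own routine for a few wavelets (a quick sanity check I ran mentally; filter_length here is the wavelet's dec_len):

```python
import math
import pywt

# Verify floor(log2(n / (dec_len - 1))) against pywt.dwt_max_level
# for several wavelets and a fixed signal length.
n = 1000
for name in ('haar', 'db2', 'sym4', 'bior2.2'):
    w = pywt.Wavelet(name)
    formula = int(math.floor(math.log2(n / (w.dec_len - 1))))
    assert formula == pywt.dwt_max_level(n, w.dec_len)
print('formula matches pywt.dwt_max_level')
```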
Please suggest improvements or even make a PR if you are so inclined! We would love to have additional contributors to the project!
I like it. I think it nicely summarizes the fruits of the technical discussion.
Rather than saying only "edge effects", I would change the second sentence to (altered portion in italics):

The rationale for this choice of levels is that it is the maximum level at which at least one coefficient in the output is uncorrupted by edge effects *caused by signal extension*.
And yes, I think adding the formula to the docstring would be nice. I originally had to look through the source to find it.
I would say it's "good enough" without really digging into sampling theory and what not. If I get a more satisfactory answer elsewhere I will bring it back here.
Thank you for taking the time today.
The description of the pywt.dwt_max_level function seems ambiguous to me.
I've tried to do some digging/reading (a couple hours worth). I've found a couple other references to a similar algorithm used by pywt, also without explanation. What happens at decomposition levels lower than this that makes them not useful?
I would appreciate it if somebody could explain to me what "maximum useful level" means. I'm feeling pretty defeated that I couldn't dig something up myself. A little bit of theory would be extra appreciated, to get a solid understanding.
Thanks in advance.