Closed edwardyapp closed 2 years ago
Yeah, the source can be a bit difficult to follow. Basically, in _dwt.pyx you will see calls to functions like double_dec_a, float_dec_a, etc. These C functions get generated via a C macro-based templating scheme (see templating.h) such that _dec_a in wt.template.c generates both double_dec_a and float_dec_a. The _dec_a code in turn calls templated convolution code from convolution.template.c. Those wavelet functions in turn call other templated code for downsampled or upsampled convolution.
The actual convolution code looks complicated mainly because of all the different boundary extension conditions. Each boundary condition must also cover corner cases, such as when the wavelet filter is longer than the data.
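The effect of those boundary extension conditions can also be observed from the Python API. A minimal sketch comparing zero-padding with the default symmetric extension on an odd-length signal, where the last coefficient necessarily involves the boundary:

```python
import numpy as np
import pywt

x = [1.0, 2.0, 3.0]  # odd length forces boundary extension for the Haar filter

# Zero-padding treats samples beyond the edge as 0
cA_zero, cD_zero = pywt.dwt(x, 'haar', mode='zero')

# Symmetric (the default) mirrors the signal at the boundary
cA_sym, cD_sym = pywt.dwt(x, 'haar', mode='symmetric')

# Same output length, but the boundary coefficient differs between modes
print(cA_zero)
print(cA_sym)
```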
Thank you so much for your reply. It's a very comprehensively written piece of code. I managed to figure out why my hand-calculated values differed from the PyWavelets output: the answer lies in the scaling factor. I had used 0.5 in my calculations, whereas PyWavelets uses 0.5^0.5, which preserves the L2-norm of the signal under the wavelet transform.
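That comparison can be sketched in a few lines, assuming the standard two-point Haar formulas: scaling the pairwise sums and differences by 1/sqrt(2) reproduces pywt.dwt exactly, while the 1/2 scaling does not.

```python
import numpy as np
import pywt

x = np.array([1.0, 2.0, 3.0, 4.0])

# Hand-computed single-level Haar DWT with the orthonormal 1/sqrt(2) scaling
cA_hand = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
cD_hand = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients

cA, cD = pywt.dwt(x, 'haar')
print(np.allclose(cA, cA_hand), np.allclose(cD, cD_hand))  # matches

# With a 1/2 scaling (a common textbook convention) the values no longer match
print(np.allclose(cA, (x[0::2] + x[1::2]) / 2))
```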
Yes, sometimes textbook examples don't use the same normalization on the coefficients (or may differ in whether the odd or even samples are retained after downsampling). I'm glad you were able to sort out the behavior here via the code.
I am trying to hand calculate the result of a single DWT with a Haar transformation and comparing it with the result from this package. I managed to get down as far as the dwt_single function in https://github.com/PyWavelets/pywt/blob/1fb6835132b84b507e7459b2ec15ee5d1c92a1b8/pywt/_extensions/_dwt.pyx, but I cannot quite figure out where the calculation is actually done. Would someone be able to point me in the right direction? Thank you.