RamiMatar opened this issue 1 year ago (Open)
Hello Rami Matar,
Thanks a lot for taking time to look at this work.
Nice catch! This is due to the aliasing effect: after applying the high-pass filter and subsampling, the frequency order within that branch becomes decreasing. For example, without this change the node order at level two becomes AA-AD-DD-DA instead of AA-AD-DA-DD. Here is a paper that discusses this effect: "Wavelet packet feature extraction for vibration monitoring", https://ieeexplore.ieee.org/abstract/document/847906
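To illustrate the effect, here is a small Python sketch (not taken from the repository) of the Gray-code recursion that maps the natural filter-bank order of wavelet-packet nodes to frequency order; the `a`/`d` labels for approximation/detail branches are an illustrative convention:

```python
def graycode_order(level, a="a", d="d"):
    """Frequency (Gray-code) ordering of wavelet-packet nodes at `level`.

    Because the high-pass branch followed by downsampling by 2 mirrors
    the spectrum, the children of every 'd' node appear in reversed
    frequency order, which yields the Gray-code permutation below.
    """
    order = [a, d]
    for _ in range(level - 1):
        order = [a + p for p in order] + [d + p for p in reversed(order)]
    return order

print(graycode_order(2))  # ['aa', 'ad', 'dd', 'da'] -- 'dd' comes before 'da'
```

At level two this gives AA-AD-DD-DA, matching the ordering described above; swapping the low-pass/high-pass roles at alternating positions is one way to compensate so the outputs come out in increasing frequency.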
If you have any other questions do not hesitate to ask!
Best regards, Gaëtan
From: RamiMatar Sent: Monday, July 3, 2023, 07:32 To: FrusqueGaetan/Learnable-Wavelet-Transform Cc: FGaetan; Mention Subject: [FrusqueGaetan/Learnable-Wavelet-Transform] Perfect reconstruction + alternating filter positions (Issue #1)
Hi @FrusqueGaetan (https://github.com/FrusqueGaetan), thank you for the excellent work on the papers and for sharing this code! I had two questions I hope you can help me with:
Lines 221-234 (https://github.com/FrusqueGaetan/Learnable-Wavelet-Transform/blob/7b9982da87ba8c4978dd217edb1e23bfa3994217/Code/NeuralDWAV.py#L221): Why do you have the low-pass and high-pass order switched for even and odd positions? Is there some other detail in the implementation where that's important?
When initialized, shouldn't the LWPT be able to perfectly reconstruct signals, given the filter properties that are selected? From the paper, my understanding was that perfect reconstruction can't be guaranteed after learning, since the required kernel property is likely not conserved once the kernels are updated by backpropagation. However, when I try a simple example with the input [1, 2, ..., 16] and 2 levels of the LWPT before training, this is the reconstruction:
lwpt = NeuralDWAV(2 ** 4, 2)
x = torch.tensor([[range(1, 17)]], dtype=torch.double)
y = lwpt(x)
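For reference, the perfect-reconstruction property at initialization can be sanity-checked independently of the repository. Below is a minimal NumPy sketch (assuming Haar filters, not the repo's learnable kernels or the NeuralDWAV API) of one level of an orthonormal two-channel filter bank: analysis followed by synthesis reconstructs [1, 2, ..., 16] exactly, which is the behavior one would expect from the LWPT before any backpropagation update.

```python
import numpy as np

# Haar analysis filters (orthonormal pair): perfect reconstruction holds exactly.
h = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass
g = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass

def analysis(x):
    # Filter each non-overlapping pair of samples, i.e. convolve + downsample by 2.
    a = np.array([x[2 * i] * h[0] + x[2 * i + 1] * h[1] for i in range(len(x) // 2)])
    d = np.array([x[2 * i] * g[0] + x[2 * i + 1] * g[1] for i in range(len(x) // 2)])
    return a, d

def synthesis(a, d):
    # Upsample and filter with the (orthonormal) synthesis pair, then sum.
    x = np.zeros(2 * len(a))
    x[0::2] = a * h[0] + d * g[0]
    x[1::2] = a * h[1] + d * g[1]
    return x

x = np.arange(1, 17, dtype=float)
a, d = analysis(x)
xr = synthesis(a, d)
print(np.max(np.abs(x - xr)))  # ~0 (machine precision): perfect reconstruction
```

Once the filters are updated by gradient descent, the orthonormality conditions on `h` and `g` are generally lost, and the same check would show a nonzero reconstruction error.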