Open notmatthancock opened 10 years ago
Thanks for your report.
I am currently pretty busy and traveling next week, but I will have a look once I'm freer.
Luis
The correct way to do it with the current code is to pass a border argument to wavelet_center & wavelet_decenter. The problem with the current implementation is that the border defaults to zero, so your code, while it looks reasonable, gives incorrect results with that default.
My current thinking is to require an explicit border argument without a default.
Here's the code that gives better results:

```python
import numpy as np
import mahotas as mh

stats = np.zeros((512, 10))
for d in range(1, 11):
    for n in range(1, 513, 2):
        R = np.random.rand(n, n)
        RC = mh.wavelet_center(R, border=24)
        RD = mh.daubechies(RC, 'D%d' % (d * 2))
        RI = mh.idaubechies(RD, 'D%d' % (d * 2))
        RR = mh.wavelet_decenter(RI, R.shape, border=24)
        stats[n - 1, d - 1] = np.mean(np.abs(R - RR))
        del R, RD, RI, RC, RR
```
Thanks for your reply. So if a border must be manually set, what is the advantage of wavelet_center over numpy's pad function?
wavelet_center rounds up to the next power of two after adding the padding.
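That pad-then-round step can be sketched with plain numpy for comparison. This is my own reading of the behaviour, not the mahotas source; the helper name and the exact centering offsets are assumptions:

```python
import numpy as np

def center_like(img, border):
    """Sketch of the idea behind mahotas.wavelet_center: add `border`
    zeros on every side, then zero-pad up to the next power of two
    (helper name and centering details are assumptions, not mahotas API)."""
    padded = np.pad(img, border, mode='constant')
    target = 1 << int(np.ceil(np.log2(max(padded.shape))))
    out = np.zeros((target, target), dtype=img.dtype)
    off = [(target - s) // 2 for s in padded.shape]
    out[off[0]:off[0] + padded.shape[0],
        off[1]:off[1] + padded.shape[1]] = padded
    return out

img = np.random.rand(100, 100)
# 100 + 2*24 = 148, which rounds up to 256:
print(center_like(img, border=24).shape)  # (256, 256)
```

Under this reading, the only thing wavelet_center adds over a manual np.pad is the power-of-two rounding.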
Is there some theory that suggests the amount of zero-padding required for perfect inversion, based on the signal and filter sizes? What I'm saying is that if you have to fiddle with a border size parameter (where did 24 come from?) to get correct results, the additional step that wavelet_center provides may as well be done manually.
The filter size determines that. I couldn't remember off the top of my head the size of Daubechies 20, so I rounded up to 24.
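If the filter size is indeed the lower bound, the required border can be read straight off the wavelet name. A hypothetical helper, assuming mahotas-style names 'D2'..'D20' where the digits give the tap count (that naming assumption is mine):

```python
def min_border(wavelet):
    """Hypothetical helper: parse the tap count from a mahotas-style
    Daubechies name ('D2'..'D20'). The filter length is then a plausible
    lower bound for the border; this is an assumption, not mahotas API."""
    if not (wavelet.startswith('D') and wavelet[1:].isdigit()):
        raise ValueError('expected a name like D2..D20, got %r' % wavelet)
    return int(wavelet[1:])

print(min_border('D20'))  # 20 -- which Luis rounded up to 24
```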
I think what this discussion is showing is that wavelet_center & wavelet_decenter should take as a parameter the ID of the transform which will be computed on the image.
Sounds like a fix. IMHO, it would be ideal if these methods were abstracted away somehow so that one would work only in terms of transformations and inverse transformations.
Yes, you're probably right.
I'm not sure if the daubechies transforms are working correctly. I was under the impression that, while there are issues for images whose size is not a power of 2, the wavelet_center function serves to pad zeros to reach a power of 2. It is also my understanding that performing a transform/inversion with no operations in between should yield (near) perfect reconstruction of the image. I was not experiencing this, apart from the example with the luispedro image.
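The perfect-reconstruction expectation is easy to check in isolation. A minimal one-level Haar (the D2 case) round trip in plain numpy, independent of mahotas:

```python
import numpy as np

def haar_1d(x):
    # One level of the orthonormal Haar transform along the last axis.
    s = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)  # averages
    d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)  # details
    return np.concatenate([s, d], axis=-1)

def ihaar_1d(y):
    # Exact inverse of haar_1d: (s+d)/sqrt(2) = even sample, (s-d)/sqrt(2) = odd.
    n = y.shape[-1] // 2
    s, d = y[..., :n], y[..., n:]
    x = np.empty_like(y)
    x[..., ::2] = (s + d) / np.sqrt(2)
    x[..., 1::2] = (s - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(0)
R = rng.random((64, 64))
RR = ihaar_1d(haar_1d(R))
print(np.abs(R - RR).max())  # essentially machine precision
```

For power-of-two sizes the reconstruction error should be at the level of floating-point round-off, which is the behaviour I expected from mahotas as well.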
So I decided to run some tests by creating random images of size 1, ..., 512 and testing the transform/inversion for each daubechies type. I used a pixel-wise mean absolute difference for each image size and wavelet type to measure the reconstruction performance. In particular, I ran the short bit of code shown earlier in this thread:
Plotting stats is telling:
So what's going on here? I fiddled with wavelet_center and determined that it "rounds" only up to the next highest power of 2: sizes like 509, 510, 511 all go to 512.
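That rounding behaviour, as I understand it, is plain next-power-of-two. A quick sketch (the function is mine, for illustration only):

```python
def next_pow2(n):
    """Smallest power of two >= n -- what wavelet_center appears to
    round image sizes to (an observation, not the mahotas source)."""
    p = 1
    while p < n:
        p *= 2
    return p

print([next_pow2(n) for n in (509, 510, 511, 512)])  # [512, 512, 512, 512]
```

So for a 510 x 510 image only a single row and column of zeros is added, which is far less than the filter length of the larger Daubechies wavelets.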
So if I run something like the following (where R is a random 510 x 510 matrix):
whereas:
So it looks like the wavelet_center function should pad relative to the current image size, not just round up. I'm not sure about odd sizes either.
For sanity, I ran the same earlier test with the pywt wavelets library (which was MUCH slower, being pure Python). This yielded satisfactory reconstruction error across the board (notice the scaling factor on the y-axis):