mys007 opened this issue 9 years ago
It is documented, no? https://github.com/torch/image#image.gaussian
Well, the function is documented, but the crucial information about what it actually computes is missing. More importantly, what is the motivation for evaluating the Gaussian over [-0.5, 0.5] with steps of 1/size, instead of over [-size/2, size/2] with a step of 1? The former definition makes little sense, especially for larger sigma, in my opinion.
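To make the difference concrete, here is a small sketch in Python (not the actual Torch code; the exact legacy sampling grid is an assumption on my part, e.g. it may use 1/(size-1) steps). For a sigma that is large relative to the unit interval, the [-0.5, 0.5] convention yields an almost flat kernel, while the integer-offset convention still decays visibly:

```python
import math

def legacy_gaussian1d(size, sigma):
    # Approximation of the current image.gaussian convention (assumed):
    # sample x over [-0.5, 0.5], sigma interpreted on that unit interval.
    return [math.exp(-((i / (size - 1) - 0.5) ** 2) / (2 * sigma ** 2))
            for i in range(size)]

def standard_gaussian1d(size, sigma):
    # Standard convention: integer pixel offsets from the center,
    # sigma in pixels.
    c = (size - 1) / 2
    return [math.exp(-((i - c) ** 2) / (2 * sigma ** 2)) for i in range(size)]

legacy = legacy_gaussian1d(7, 2.0)      # edge/center ratio ~0.97: nearly flat
standard = standard_gaussian1d(7, 2.0)  # edge/center ratio ~0.32: real falloff
```

With sigma = 2 the legacy kernel is effectively a box filter, which is the "lacks sense for larger sigma" point above.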
@jonathantompson the gaussian guy, please look at this issue
I'm not really sure what to suggest. I didn't write the original Gaussian kernel... I may have at one point posted a PR for a C version, but it was a long time ago. I think at the time I made the API the same as it was to avoid breaking code.
@mys007 The problem here is that as soon as you change the API behaviour you're going to break a lot of other people's code. I completely agree that the current Gaussian is non-standard but I'm not sure what we can do about it at this point. If you think the docs are lacking (Gaussian was written during the dark ages when the torch illuminati were less concerned about keeping the docs up to date), then you should do a PR to fix it.
Thanks for your feedback. I agree with the point about breaking old code. But one could add the standard behaviour as a non-default option to both gaussian1D() and gaussian():

{arg='adapt_sampling', type='boolean', help='adapts sampling step to evaluate over a unit-sized domain (legacy behaviour)', default=true}
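A rough model of what the proposed option would do, sketched in Python rather than Lua (the function name, flag name, and the exact legacy grid are illustrative assumptions, not the real torch/image API):

```python
import math

def gaussian1d(size, sigma, adapt_sampling=True):
    """Sketch of the proposed flag (hypothetical).
    adapt_sampling=True  -> legacy behaviour: sample over the unit interval
                            [-0.5, 0.5], sigma relative to that interval.
    adapt_sampling=False -> standard behaviour: integer pixel offsets,
                            sigma in pixels (as in MATLAB's fspecial)."""
    c = (size - 1) / 2
    if adapt_sampling:
        xs = [(i - c) / (size - 1) for i in range(size)]  # spans [-0.5, 0.5]
    else:
        xs = [i - c for i in range(size)]                 # spans [-c, c]
    k = [math.exp(-x * x / (2 * sigma ** 2)) for x in xs]
    s = sum(k)
    return [v / s for v in k]  # normalize to sum 1
```

Defaulting the flag to the legacy behaviour keeps existing callers working while making the standard kernel available.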
Further, the question then is whether to fix internal calls in the image rock too (e.g. in gaussianpyramid, where additionally the sigma should adapt to the scale difference?). Also, it might be good to document that scale() does not pre-smooth on its own when downsampling.
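On the scale() point, a minimal 1D illustration in plain Python (hypothetical helper with clamped borders, not the image rock's code) of why decimating without pre-smoothing aliases, and how a pixel-unit Gaussian pre-filter fixes it:

```python
import math

def smooth_then_decimate(signal, factor, sigma):
    # Pre-smooth with a standard (pixel-unit) Gaussian, then take every
    # `factor`-th sample. Borders are handled by clamping the index.
    radius = max(1, int(3 * sigma))
    k = [math.exp(-(x * x) / (2 * sigma ** 2)) for x in range(-radius, radius + 1)]
    s = sum(k)
    k = [v / s for v in k]
    n = len(signal)
    blurred = [sum(k[j + radius] * signal[min(max(i + j, 0), n - 1)]
                   for j in range(-radius, radius + 1)) for i in range(n)]
    return blurred[::factor]

alternating = [1.0, 0.0] * 8            # highest-frequency 1D signal
naive = alternating[::2]                # aliasing: picks only the 1.0 samples
smoothed = smooth_then_decimate(alternating, 2, 1.0)  # interior values ~0.5
```

Naive stride-2 subsampling turns the alternating signal into a constant 1.0 (pure aliasing); the pre-smoothed version lands near the true mean of 0.5, which is exactly why the lack of pre-smoothing in scale() deserves a note in the docs.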
Sounds OK to me. I'll let @soumith (aka: torch illuminati member) make the final decision.
sounds okay to me, if @mys007 posts a PR, happy to merge.
Is the new suggested Gaussian method implemented?
I forgot about this topic :). I partially implemented it in my private code; I will try to merge it and create a PR soon.
The current implementation of image.gaussian() evaluates the Gaussian over a fixed interval (approx. [-0.5, 0.5]^2 in the standard settings) and changes the sampling density by the size. This is not documented and highly non-standard. I would expect the same results as given e.g. by MATLAB's fspecial() function, i.e. the filter size should change the evaluation domain.
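For reference, a sketch of the fspecial-style convention in Python (illustrative only; MATLAB's fspecial('gaussian', size, sigma) evaluates over integer offsets from the center and normalizes to sum 1, so this should agree with it up to floating point):

```python
import math

def fspecial_gaussian(size, sigma):
    # Standard 2D Gaussian kernel: integer (x, y) offsets from the center,
    # sigma in pixels, normalized so the weights sum to 1.
    c = (size - 1) / 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]
```

Here growing `size` widens the evaluation domain (more integer offsets) while `sigma` alone controls the falloff, which is the behaviour the issue asks for.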