To put this into the context of work being done by e.g. @tomaszkacprzak on #343 and #355: Tomek's scripts focus on what we can learn when we know the ground truth, using simulated galaxies (photon-shooting vs. DFT methods of image rendering, and reconvolution validation). Melanie's work on this branch will use the real unknowns, i.e., the HST images, but apply transformations for which the changes from the original should be well understood. By doing this with a range of interpolants and values for the "magic numbers", we should hopefully learn something useful about our ability to manipulate real galaxy images accurately.
@msimet , I forgot to ask: are you going to also do item (2) from that comment? It might facilitate the rest of what you're doing, since you will need easy access to the original image and PSF anyway.
I was planning on it at some point, yes--just hadn't decided whether to do it before or after getting started on some of these tests, but you're right that it would be useful.
Hi Melanie - Barney and I were discussing some of the critical tasks for GalSim validation that we need for the challenge, and this is one of them. We had originally thought to try to complete these in April, though there isn't much of April left at this point. :) Do you see any major obstacle to progress on this issue in the near future? We just wanted to check in about how everything's going. Thanks again for volunteering!
I think I've got everything I need, but I'll comment here if I get stuck somewhere. Will aim to have results up early next week.
Cool, thanks!
Hi folks, I'm having a bit of trouble with a change I made to the RealGalaxy innards and I was wondering if I could bug some of you for help.
In short, as requested in branch #168, I've changed the RealGalaxy `original_image` and `original_PSF` to be `InterpolatedImage`s instead of `SBInterpolatedImage`s so they are friendlier to use. This necessitates changing the `SBConvolve` and `SBDeconvolve` that were used to construct the actual RealGalaxy into `Convolve` and `Deconvolve` (since the original PSF that's being deconvolved is no longer an `SBInterpolatedImage`). As an example of the trouble, I've stuck two scripts in examples/ on this branch: minimal_real_InterpolatedImage.py (which will run on this branch as-is) and minimal_real_SBInterpolatedImage.py (which will run on master). They pull the same galaxy 1000 times from the catalog real_galaxy_catalog_example.fits, measure its flux and print it, and write both the `RealGalaxy` and the `RealGalaxy.original_image` to FITS files in the examples/output directory. minimal_real_SBInterpolatedImage.py returns the same flux 1000 times to ~10 sig figs or so; minimal_real_InterpolatedImage.py has fluxes that vary at the percent level, at least until it crashes from having thrown a NaN somewhere in its innards (which it usually does a few hundred galaxies in).
There are ways around this, of course--for one, we could keep an internal original_PSF that's an SBInterpolatedImage and just use that for the deconvolution--but I wanted to bring this up in case 1) it's a stupid thing I'm doing with an easy fix or 2) it's a problem with the Convolve or Deconvolve operations.
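For concreteness, a stripped-down sketch of the kind of loop those minimal scripts run (hypothetical image size and catalog path, using the `draw(im, dx=...)` API from this era of GalSim; not the actual scripts):

```python
import galsim

# Pull the same galaxy repeatedly and check that its flux is stable.
# (Hypothetical sizes/paths; the real scripts live in examples/.)
rgc = galsim.RealGalaxyCatalog('real_galaxy_catalog_example.fits')
for i in range(1000):
    gal = galsim.RealGalaxy(rgc, index=0)  # same galaxy every time
    im = galsim.ImageD(128, 128)
    gal.draw(im, dx=0.03)
    print(im.array.sum())  # constant to ~10 sig figs on master
```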
I think a lot of the point of this issue was to remove the duplicated code about dealing with pad_image and noise_pad and such that is almost identical in InterpolatedImage and RealGalaxy. Currently, you are doing the complicated stuff to get pad_image set up correctly for SBInterpolatedImage, but then you pass that result to InterpolatedImage, which does it all again. I think you want to try to remove most of that code from RealGalaxy and just pass the correct values of pad_factor and pad_image to InterpolatedImage and let it do all those calculations.
That said, it looks like you have discovered a bug somewhere, so we should probably also try to track that down. When I try your script, I get a segmentation fault. So probably there is some error in the memory access in how you construct the SBInterpolatedImage with the doubled up calculations relating to pad_image. I'll try to figure out what is going on here so we raise an exception rather than get a seg fault or (on your machine) wrong values getting pulled into the image.
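For what it's worth, a hedged sketch of the simplification Mike describes (hypothetical variable names; not the actual RealGalaxy code). Instead of building the padded image by hand and handing it to the C++ layer, just forward the options:

```python
# Inside a hypothetical RealGalaxy.__init__: let InterpolatedImage do all
# of the pad_image/noise_pad bookkeeping rather than duplicating it here.
self.original_image = galsim.InterpolatedImage(
    gal_image, dx=pixel_scale,
    pad_factor=pad_factor, pad_image=pad_image, noise_pad=noise_pad)
```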
Ah, all right--I was more concerned with cleaning up the interface at the user end, not the developer end, so I will look into that. Thanks.
Mine seg faults as well, eventually--I just get incorrect values first...
Okay--this now works without errors, at least in the no-noise case. I will note that setting `use_cache=False` didn't fix the problem before the recent commits, so it is not a direct problem with the caching, at least. There were still some weird flux variations as of commit 563e363, but I'm not sure if that's the same issue (I'd forgotten to remove the bit where it creates a `pad_image`, though I had removed all the `noise_pad`ding options).
I tracked down the source of the problem for those seg faults you were getting. It had to do with the padded size calculated in getPaddedSize for the python layer being different from what SBInterpolatedImage expected, so it was trying to read and write off the edge of the image. Sometimes getting wrong values. Sometimes seg faulting.
When I went to write the unit test to duplicate the error (thence to fix it), I discovered that we didn't have any unit tests for the `pad_image` parameter!
So I added a new set of unit tests for that, which originally failed for a number of reasons -- not just the seg fault, but there were other problems. So I fixed those and along the way cleaned up a lot of the code involving pad_image in the (python layer) InterpolatedImage constructor. The logic about the different padding cases is quite a bit simpler now.
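A hedged sketch of what such a `pad_image` unit test might look like (hypothetical sizes and tolerance; the real tests are in the test suite):

```python
import numpy
import galsim

# Draw a profile, rebuild it as an InterpolatedImage with an explicit
# (zero) pad_image, and check that drawing it back conserves the flux.
im = galsim.ImageD(64, 64)
galsim.Gaussian(sigma=2.).draw(im, dx=1.)
pad = galsim.ImageD(256, 256)  # deterministic zero padding
interp = galsim.InterpolatedImage(im, dx=1., pad_image=pad)
out = galsim.ImageD(64, 64)
interp.draw(out, dx=1.)
numpy.testing.assert_almost_equal(out.array.sum(), im.array.sum(), decimal=4)
```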
Oh, neat, thanks! That looks like...much more work than I expected this to be when I started seeing the segfaults....
I'll go through and re-run my tests with the new logic and fixes.
Not all of it was directly related to the seg faults. I also fixed some other minor bugs that turned up. Plus a fair bit of it was me just simplifying some parts of the code, rather than actual bug fixing.
Goodness. As for the pad image stuff: mea culpa, and thanks for the fix, Mike (as usual).
Hello all,
I have some updates and plots on this issue. The short version is that, with the exception of the nearest & linear interpolants, there isn't much of a difference among the x-space interpolants, and none at all among the k-space interpolants. Which one wins depends on the exact test, but they're always within each other's statistical error bars, so I'm not sure there's a clear answer as to which one we should use. However, if I fit a line to the residuals as a function of the expected shear or size, there's a nonzero slope to that line for both shear components and for the size (generally of order 10^-3)--sometimes even when the applied shear or magnification was in some other component. There are also residuals with rotation angle. (At the moment, the sinc isn't included in these plots, as that part of the test is still running.)
The differences with pad_factor and k-space interpolant are so small--below the numerical precision of the script outputs, in fact--that I'm wondering if they're actually being passed properly through all the guts of the program, but I haven't investigated that yet. The other caveat is that FindAdaptiveMom() can't always measure these objects successfully--it fails for around 5% of objects at least some of the time.
A quick reminder of the procedure: shapes and sizes are measured with `FindAdaptiveMom()` before and after each transformation, and a line (slope m, offset c) is fit to the residuals as a function of the expected value. In all cases, the offsets of those lines were zero (|c| < 1E-6), so in the following plots I will show only the slope m versus pad_factor.
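The fit itself is just a straight line to the residuals; a minimal sketch with stand-in data (hypothetical arrays: `g_applied` plays the role of the expected value, `g_measured` the FindAdaptiveMom output):

```python
import numpy

# Stand-ins for the test output: applied shears and measured shears.
g_applied = numpy.linspace(-0.05, 0.05, 100)
g_measured = 1.002 * g_applied + numpy.random.normal(0., 1.e-4, 100)

# Fit delta-g = m * g_applied + c; the plots below show the slope m.
residual = g_measured - g_applied
m, c = numpy.polyfit(g_applied, residual, 1)
```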
Starting with the changes to each component as we alter that same component:
The slope of the delta g1 vs g1 line when applying a g1 shear, X interpolants. Sub-percent level, but definitely not zero.
The slope of the delta g2 vs g2 line when applying a g2 shear, X interpolants:
The slope of the fractional delta size vs size line when applying a magnification, X interpolants:
The slope of the delta g1 vs g1 line when applying a g1 shear, K interpolants:
The delta g2 vs g2 and delta fractional size vs size plots look just like that last one, with slightly different numbers:

- Delta g2 vs g2: m = -0.0025 +/- 0.0007
- Delta fractional size vs size: m = 0.0009 +/- 0.0012
Continued in next comment...
I realized I should have said in the last comment, these are all just statistical error bars for the fit parameters using the scatter in the ~100 data points from the ~100 galaxies tested.
Some of the time, there are also residuals from tweaks in other parameters. At the level of 10^-4 or higher for the mean, if not the mean+stderr, there's:

- For k interpolants, a change in delta g2 when a g1 shear is applied: m = -0.001 +/- 0.001.
- For x interpolants, possibly an effect on g2 when a g1 shear is applied, and a bit of structure in the delta g2 plot when a magnification is applied, though I'm not sure it's worth worrying about.

Note, there are definite effects for the nearest & linear x-interpolants for all mixtures of parameters, but I don't think we were likely to use those anyway!
Finally, if I add a 2-component shear and a magnification all at once, the effects look a bit larger. It's harder to disentangle what's doing it, but I thought it was worth mentioning that doing all of these things together seems to bias you more. However, these tests also included larger shears & magnifications than any of the previous tests, so it may just be the stress on the method rather than some nefarious interaction between doing all the things at once. X-interpolants: delta g1 vs g1, then delta g2 vs g2; delta fractional size vs size is okay, except for nearest & linear.
K-interpolants, delta g1 vs g1: delta g2 vs g2 looks the same (m=0.03+/-0.005). Delta fractional size is consistent with zero.
Continued in the next comment...
Finally, angle tests. The residuals are mostly clustered around zero, but they have some notable outliers as a function of angle. Here are plots showing the mean, standard deviation, and extrema of the residuals at each rotation angle. I'm showing only pad_factor=4 since, as I mentioned, there seems to be no change with pad_factor.
Summary: the standard deviation tends to be about percent-level, and the mean is always close to zero. Looking at the mean, I doubt this is something to be worried about, but I thought these plots were a useful check, at least.
For X-interpolants, here are the plots for g1, g2, and size. The K-interpolants look pretty much the same, except that (again) all the interpolants look the same.
Things left to do that I can think of right now:
And I forgot to say--the script to run the tests is `devel/modules/test_interpolants.py`, if anyone would like to do something of their own.
A basic question - what's happening with the PSF (both removing the HST PSF and adding your own) during these tests?
Nothing at the moment: I don't deconvolve or reconvolve by any PSF, and the pixel scale is the same in all images. The script is set up to also deconvolve and then reconvolve by a space-like or ground-like PSF, though, in case we want to do those tests as well.
Never mind, I see exactly why the pad_factor and k-space interpolants don't matter: since this version doesn't include any deconvolution or reconvolution, we never enter the Fourier domain at all. My first instinct is to rerun these tests with a simple (small Gaussian?) PSF to test the effects of the Fourier-domain properties--anyone have any better ideas?
I think small Gaussian (as a proxy for delta function) is good... That's what some of the other unit tests use too:

```python
delta = galsim.Gaussian(sigma=1.e-8)
```
Let me clarify why Melanie was doing this initial test without convolution by any new PSF, since it's basically my fault. ;) I thought the initial, most basic test we could do for any interpolant was to treat the original HST images as just a bunch of surface brightness profiles that we want to distort in well-defined ways, and then see if the observed moments from FindAdaptiveMom change the way we expect. The value of this is that there are a lot of well-defined tests one can do: for example, if we introduce a sub-pixel shift then we know the centroids from the 1st moments should move in a certain way, but on average, none of the 2nd moments should change. If we shear (despite the silliness of shearing a PSF-convolved object) then we know the 1st moments should be fixed and the 2nd moments should change in a way that is, again, completely well-defined.
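For reference, the "completely well-defined" changes mentioned above can be written compactly; a sketch, with M the second-moment matrix of the surface brightness profile and A the distortion matrix:

```latex
% Under an affine distortion x' = A x + \delta of the profile,
% the first and second moments transform as
\bar{x}' = A\,\bar{x} + \delta, \qquad M' = A\,M\,A^{\mathsf T}.
% So a pure sub-pixel shift (A = I) moves the centroid by \delta and
% leaves M unchanged, while a shear or magnification (\delta = 0)
% fixes the centroid and changes M in a known way.
```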
The disadvantage of beginning this way is that, as Melanie pointed out, we don't test the k interpolants. But I thought that providing some criteria for how well real-space interpolation can be done with GalSim still has significant value.
In the interest of having a pretty well-defined test that can include tests of k-interpolants, I would suggest using the small Gaussian that Reiko mentioned. The most obvious and straightforward way to do this would be to convolve with it and then apply applyShear etc. to the convolved object. In that case I believe the shearing (or whatever other operations) will happen in Fourier space - is that right?
> In that case I believe the shearing (or whatever other operations) will happen in Fourier space - is that right?
Yes. (Although the order doesn't matter. It will happen in Fourier space as long as draw uses Fourier space.)
Okay, fine. I want the shearing to happen after the convolution because then it's still a straightforward test to interpret, i.e., comparing FindAdaptiveMom outputs for "original convolved with delta function" vs. "original convolved with delta function and then sheared/shifted/rotated/magnified" tells you about the accuracy of Fourier space operations like shearing and so on for InterpolatedImages.
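Putting this exchange together, a hedged sketch of the delta-function test as described (hypothetical stand-in galaxy and sizes, era-appropriate `applyShear`/`draw` API):

```python
import galsim

# Stand-in "galaxy" image (the real tests use HST postage stamps).
stamp = galsim.ImageD(512, 512)
galsim.Sersic(n=3., half_light_radius=0.6).draw(stamp, dx=0.03)
gal = galsim.InterpolatedImage(stamp, dx=0.03)

delta = galsim.Gaussian(sigma=1.e-8)       # delta-function proxy
convolved = galsim.Convolve([gal, delta])  # forces the Fourier path
convolved.applyShear(g1=0.01, g2=0.)       # shear AFTER the convolution

im = galsim.ImageD(512, 512)
convolved.draw(im, dx=0.03)
result = im.FindAdaptiveMom()  # compare against the unsheared case
```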
Since you're doing some very sensitive tests, it might be worth checking the one thing that the AdaptiveMoments do not guarantee, which is that they handle sub-pixel structure properly. Even for a band-limited image, the adaptive moments might change a little bit when you move the object by a non-integer number of pixels. A basic test maybe worth doing is to see how much the 2nd moments change when you simply shift the image by fractional pixels (either with Fourier domain or with x-domain interpolation).
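A hedged sketch of this suggested check (hypothetical test profile and pixel scale; for the interpolant version, one would use an InterpolatedImage as `obj`):

```python
import galsim

obj = galsim.Gaussian(sigma=2.)  # any well-sampled profile will do
dx = 1.

im0 = galsim.ImageD(64, 64)
obj.draw(im0, dx=dx)
e0 = im0.FindAdaptiveMom().observed_shape

obj.applyShift(0.5 * dx, 0.)  # half-pixel shift in x
im1 = galsim.ImageD(64, 64)
obj.draw(im1, dx=dx)
e1 = im1.FindAdaptiveMom().observed_shape
# If sub-pixel structure is handled properly, e0 and e1 agree closely.
```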
Okay, new results using Reiko's delta-function-esque Gaussian. In this case, I took the original Hubble images, didn't deconvolve them by anything, but convolved them by a Gaussian with sigma=1.E-8. The shapes are still measured with FindAdaptiveMom(); I tried hsm.EstimateShear() with an image of that PSF, but it threw NaN errors for every image. I don't think this is a problem, as the results for the x-interpolants are the same as they were in my previous comments (except for the nearest & linear interpolants, but those were badly behaved already). One other change: I've fixed the range of the y-axis on the following plots so that you can compare error bars from one to another, since the size of the error tells you something about the scatter in the relation.
One thing to notice on these tests is that higher-order/more complicated interpolants and larger pad_factors seem to make the answer converge, in the sense that all the combinations start giving the same answer, but they're not converging to zero.
So, redoing the plots above:
The slope of the delta g1 vs g1 line when applying a g1 shear, K interpolants.
The slope of the delta g2 vs g2 line when applying a g2 shear, K interpolants.
The slope of the fractional delta size vs size line when applying a magnification, K interpolants.
Here are the slopes for delta g1 vs g1 when applying a g2 shear...
...and magnification:
So whether a change in one parameter has an effect only on that particular parameter seems to depend a lot on pad_factor. That's good to know.
Less encouragingly, changes in g1 seem to imprint a change on g2, with a slope of ~1E-3:
For pad_factor>2, the slopes for delta g2 with an applied magnification, or delta fractional size with either component applied shear, are essentially zero.
Up next, angle tests!
The pad_factor doesn't seem to matter here, so I'll just show the pad_factor=6 results for the k-interpolants. The answer is essentially the same as before:
that is, rotations of around 45 degrees for g1 or 30 degrees for g2 have a few notable and asymmetric outliers, though the mean is still close to 0 and the standard deviation is of order 1E-3 or smaller. (I just checked, and those high outliers have a larger delta-g in g1 by around 0.01 than any of the applied shear tests I was showing above. So the difference here is not just that I'm showing you the outliers instead of the best-fit line. The outliers in g2 are of about the same order in the angle and applied-shear tests. More than anything, the difference is in the asymmetry of the angle plots; the high outliers and low outliers are approximately equal in the applied shear tests.)
The delta g2 vs angle plot for the k-interpolants is the same as the one for the x-interpolants. The delta size plot is the same too, but the scale is a little better, so I'll show that...
...and note the weird feature that while the extrema and the mean don't vary at all with angle, the first quartile does. Quite bizarre. (Also, I didn't think about this before, but--eesh, the extrema here are things where the size is half again as large as it's supposed to be?!)
Next: shifts!
I ran some sub-pixel shifts (|r| = 0.5 pix, every 45 degrees starting from the x-axis). I plotted the shifts at `90*n` and `45+90*n` degrees on different plots just to see if it made a difference (it doesn't), but the short version is: hey, look, hardly any bias!

I'll just show the `90*n` degrees plots, as they are indistinguishable from the `45+90*n` cases.
For k-interpolants, slopes of delta g1 vs g1, shifts at `90*n` degrees: very close to 0, and you can see how linear the change is by the size of the error bars...
Slopes of delta g2 vs g2:
Slopes of delta fractional size vs size: Nothing at all, great.
The size plot looks the same for the x-interpolants as well, but the g1 and g2 plots are different. G1:
And G2:
So, things to note overall:
Melanie, thank you so much for this, I'm still somewhat parsing it all but it's an extremely thorough set of tests!
One thing, which I think Gary said in the past regarding these tests, is that really the x-interpolant we choose is just that - a choice. Within the obvious bounds imposed by good sense, it is to some extent up to us to express how we wish to assume the galaxy varies continuously beneath the discretely sampled lookup table. (The most natural choice, sinc function interpolation, is not really possible.) The differences you see in the x-interpolant results between Nearest, Linear and the others are, I suspect, due to the interplay of the implicit assumptions that the HSM moments code makes and the nature of the functions generated by these sorts of interpolation.
The non-zero `m` biases, which converge across interpolants, are interesting though. I'm fairly sure we can rule out this being due to the use of the AdaptiveMom (can we? I'm going to run a quick test with a simple parametric profile, will report back, just to check...) so it points more at the sort of floor we've been seeing elsewhere EXCEPT that these are not Sersic galaxies, but `InterpolatedImage` objects. Hmmm.
> (The most natural choice, sinc function interpolation, is not really possible.)
As a quick example, to run all of these tests for each interpolant takes approximately 1/2 hour of computational time...except for the sinc, which has been running for 546 hours (!).
Just noticed that my last plot is not the plot I intended to post! Here's the slope of the delta g2 vs g2 line for X-interpolants, using sub-pixel shifts:
Hi Melanie,
As you can see from the flurry of commits above, I have been making progress on trying to get a complementary suite of tests up using `InterpolatedImage` instances created by drawing Sersic profiles rather than using the `RealGalaxy` class. We should see similar behaviour.
I've been trying to ape the behaviour of `test_interpolants.py` in a new script module, `test_interpolants_parametric.py`, but I've now reached a point where I've realised I'm not sure what's going on.
In `test_interpolants.py` you create a set of reference values of g1, g2 and sigma by directly measuring them in the `real_galaxy.original_image`, i.e. without a PSF. These are then combined with applied shears, magns. etc. to calculate "expected" shears for comparison with observed `g1`, `g2` etc.
I think I understand the rationale behind this in the `'original'` and `'delta'` tests, but not for the `'ground'` or `'space'` PSF tests (which admittedly you have not plotted yet), where it seems you would be comparing unconvolved shape parameters to convolved ones. Perhaps I should just come and find you in person, but am I missing something obvious?? (actually, I'm going to come and find you in person now...)
(P.S. So much for a "quick test" btw, but I think it does make some sense to have a parallel, noise-free test of the `InterpolatedImage` behaviour using parametric profiles, and Sersics based on COSMOS fits seem a good population to explore.)
The ground and space options are left over from when I was doing initial tests to make sure I understood how everything worked--I've never actually used them to measure interpolant behavior, but I thought I'd leave them in just in case we wanted to investigate them later; you're right that we'd have to make some tweaks to the expected values in order to compensate. Don't worry about mimicking them right now.
(And sorry, these emails came in about 5 minutes after I left to run some errands! We can chat tomorrow if you like.)
I just went through and removed the ground- and space-like PSF stuff from test_interpolants.py, to prevent future confusion. (We can always retrieve the code if we decide we want them back.)
Thanks Melanie! I'm almost there with this then... (apart from run time)...
The plotting script should now be usable for you, by the way.
Hi Melanie, so I got this working on my laptop for a small number of test galaxies and things seem reasonable, but I'm going to need to run it on an external machine with more memory - the tests with `image_type = original` seemed to eat my laptop's RAM and I need to use it :)
I do have a table of results for `image_type = delta` though, and although the tests only ran for 9 galaxies (I specified 10 but one was out of the allowed Sersic range) I thought it might be interesting just to see the properties of this run. However, the plots didn't seem to come out - weirdly, only the x_interpolant angle tests worked, which I show below:
I think the failure for the other plots might be due to some `nan` values in the catalog, which we can probably tell the plotting script to ignore (usually these are due to the expected shear calculation using `arctanh` on something with value -10 due to an HSM failure). Here were the error messages:
```
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/matplotlib/axes.py:4442: UserWarning: No labeled objects found. Use label='...' kwarg on individual plots.
  warnings.warn("No labeled objects found. "
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/numpy/core/fromnumeric.py:2374: RuntimeWarning: invalid value encountered in double_scalars
  return mean(axis, dtype, out)
plot_test_interpolants.py:29: RuntimeWarning: invalid value encountered in double_scalars
  sigsqaprime = sigsq/n
plot_test_interpolants.py:30: RuntimeWarning: invalid value encountered in double_scalars
  sigsqb = sigsq/numpy.sum((x-xavg)**2)
```
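A hedged sketch of the kind of `nan` filtering suggested above (hypothetical column names for the output table):

```python
import numpy

# Load the results table and drop rows where the expected-shear
# calculation went through arctanh on the -10 sentinel and produced nan.
data = numpy.genfromtxt('interpolant_test_parametric_output_delta.dat',
                        names=True)
good = (numpy.isfinite(data['g1_expected']) &
        numpy.isfinite(data['g2_expected']))
data = data[good]
```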
Ahhh... on a detailed look at my output file, `interpolant_test_parametric_output_delta.dat`, I think I see the problem: it's a bug in my code somewhere that's not outputting the right values for the padding column or k interpolant column. Sigh!
I ought to warn you though: my fiancee Annie is arriving in town midday tomorrow so I will be away from the center all day I imagine, and will have only sporadic email after ~11am, apologies for that! On the positive side, I suspect that there are only a couple of things to iron out, this feels close. Apologies for the delay, we should have answers soon.
Caught the bug, should work now, will re-run overnight to get some prelim results for the morning...!
Excellent, looking forward to it. I should also have results for a larger sample of galaxies by then (as I'm concerned about noise effects in the ~100 galaxy sample).
So, as I mentioned sidelong on #443, I'm having a memory issue here. I've found that my tests are blowing up in memory once the `k_interpolant` I use comes from the `Lanczos` family. I've localized this to the use of `draw()` for my "delta function" convolved (i.e. convolved by a `galsim.Gaussian(1.e-8)`) `InterpolatedImage` instances. Here's an example from an ipython session.
First of all I draw a COSMOS-like Sersic into an image, `im`:
```python
In [1]: import galsim

In [2]: sersic = galsim.Sersic(n=3.1, half_light_radius=0.6)

In [3]: im = galsim.ImageD(512, 512)

In [4]: sersic.draw(im, dx=0.03)
Out[4]: <galsim._galsim.ImageD at 0x10c4ac998>
```
This is then used to form the basis of a new `InterpolatedImage` instance, using a `Lanczos(3)` interpolator in k space:
```python
In [5]: interpolated = galsim.InterpolatedImage(im, dx=0.03, k_interpolant=galsim.Lanczos(3))
```
As in mine and Melanie's tests, we then convolve with a tiny `Gaussian` to activate the k-space interpolation:
```python
In [6]: interpolated_convolved = galsim.Convolve([interpolated, galsim.Gaussian(1.e-8)])
```
At this stage, my `ipython` session is using about 150MB of RAM. Now I'm going to start drawing `interpolated_convolved` into a new image:
```python
In [7]: outimage = galsim.ImageD(512, 512)

In [8]: interpolated_convolved.draw(outimage, dx=0.03)
Out[8]: <galsim._galsim.ImageD at 0x10c4acd60>
```
RAM usage: 413MB...
```python
In [9]: interpolated_convolved.draw(outimage, dx=0.03)
Out[9]: <galsim._galsim.ImageD at 0x10c4acd60>

In [10]: interpolated_convolved.draw(outimage, dx=0.03)
Out[10]: <galsim._galsim.ImageD at 0x10c4acd60>
```
671MB, 928MB...
```python
In [11]: interpolated_convolved.draw(outimage, dx=0.03)
Out[11]: <galsim._galsim.ImageD at 0x10c4acd60>

In [12]: interpolated_convolved.draw(outimage, dx=0.03)
Out[12]: <galsim._galsim.ImageD at 0x10c4acd60>

In [13]: interpolated_convolved.draw(outimage, dx=0.03)
Out[13]: <galsim._galsim.ImageD at 0x10c4acd60>
```
1.16GB, 1.41GB, 1.66GB, and so on.
I can't get MEM_TEST to work on my system here, so I'm going to put together a short script containing the above which @rmandelb has kindly offered to run :)
Well, I was really quite intrigued by this, and you already had the code in your post, so I just copied it into a file to run myself rather than waiting for a script.
When I ran this with MEM_TEST (using the version of GalSim on this branch) the log didn't indicate any real memory leaks. And then I realized that it wasn't behaving the way you reported - it wasn't eating up RAM. I modified the script to do that `interpolated_convolved.draw()` command many more times in a row, and there was no increase in the RAM with each successive call. So honestly I'm kind of confused.
Another thought: could the same compiler issue that prevents you from compiling with MEM_TEST=True be related to this weird behavior? Also, have you checked on some other machine that you see the same behavior with this script?
I can't reproduce this behavior either. In the meantime, is the testing script up to date? If so, I could run it on the same machine I ran the RealGalaxy interpolant tests on...
Hi all,
Sorry for the delay producing the test script (pushed above), which meant you went and reproduced (or not) my memory issue without it. It's quite interesting that you don't see the memory issue, and as you say, Rachel, my first thought is that this is unlikely to be a mere coincidence with the fact that my fink gcc 4.7.3 install didn't let me compile with MEM_TEST=True. I will test the script above on the other machines available to me, and fink also now provides a gcc 4.8.1, so I'll try that.
@msimet Aha, that would be great! I'll get it ready for you to do that tomorrow morning, thank you!
To summarize my compiler misadventures, I've reproduced the module docstring from my leak test script below. In short, `/usr/bin/g++` (4.2) works without leaks, but all of the gcc compilers available via the fink project have the same problem, and none of them allow `scons MEM_TEST=True` to compile.
I suppose this means a bug report ought to be made to fink/gcc, but I would be interested to know what compilers other people were using when not reproducing the leak. In other words - I still need to find out if this is a gcc 4.6+ issue, or a fink issue. The latter seems more likely, but I don't have enough info to do that myself yet. Unfortunately, I'm kind of worried that I need to install gcc 4.6+ elsewhere myself, which takes quite a bit of work and there are plenty of other issues demanding time.
Here's the docstring in the memory test script in `devel/external/memtest/test_leak_Issue389.py`:
```
This is a short module designed to test a memory leak that Barney (barnaby.t.p.rowe@gmail.com) was
finding with tests on GalSim Issue #389. This failure seems to be localized to the use of
fink-provided gcc, specifically gcc 4.6.4, 4.7.3, and 4.8.1. Using these compilers, the `for` loop
at the end of this script causes memory to be eaten up rapidly on a MacBook Pro running OSX 10.7.5.

This issue also only seems to affect the use of `k_interpolant=galsim.Lanczos(n)`.

Interestingly, for these fink gcc compilers the `scons` option `MEM_TEST=True` also produces errors
that indicate some problem in the installation:

/sw/bin/g++-fsf-4.6 -o src/.obj/BinomFact.os -c -O2 -fno-strict-aliasing -Wall -Werror -fPIC -DMEM_TEST -Iinclude/galsim -Iinclude -I/Users/browe/local/include -I/sw/include src/BinomFact.cpp
In file included from /sw/lib/gcc4.6/lib/gcc/x86_64-apple-darwin11.4.2/4.6.4/../../../../include/c++/4.6.4/vector:63:0,
                 from src/BinomFact.cpp:24:
/sw/lib/gcc4.6/lib/gcc/x86_64-apple-darwin11.4.2/4.6.4/../../../../include/c++/4.6.4/bits/stl_construct.h: In function 'void std::_Construct(_T1*, const _T2&)':
/sw/lib/gcc4.6/lib/gcc/x86_64-apple-darwin11.4.2/4.6.4/../../../../include/c++/4.6.4/bits/stl_construct.h:84:9: error: expected id-expression before '(' token
scons: *** [src/.obj/BinomFact.os] Error 1
scons: building terminated because of errors.

So far, the workaround is to move to using the system `/usr/bin/g++` (from gcc 4.2), which works
without problems. But this is worth further investigation and maybe a bug report to fink or gcc.
```
I suggest that when we want to merge this branch I add something to the FAQ about this issue, and perhaps point to this script for anyone wanting to test the problem on their own setup.
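If memory serves, the workaround is a one-liner at build time (assuming GalSim's usual `CXX` SCons option):

```
scons CXX=/usr/bin/g++
```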
In other news, @msimet I'm just testing the updated `test_interpolants_parametric.py` script one last time for a small number of input galaxies, to check there are no more SNAFUs now that the memory issue is fixed by changing compiler. I'm about 1/3 of the way through; I'll let you know after lunch when you can pull the most recent script and set it running externally. Thank you!
> I still need to find out if this is a gcc 4.6+ issue, or a fink issue.
Unfortunately my machines are gcc 4.1, 4.2, and 4.4 (for the three where I have GalSim installed). That's not very helpful. According to our code build matrix, @rmjarvis and @joezuntz have machines with gcc 4.6, so perhaps they'd be willing to be guinea pigs for this test?
This is continuing some work from Issue #168 in testing what happens to real galaxies when they are modified in various ways by GalSim. Going off @rmandelb's summarizing comment on that issue, I will start with testing 1a and 1b: basically, take a set of HST galaxies or simulated versions of those galaxies, try a bunch of different interpolation kernels/padding, see what happens to the moments when those objects are sheared/magnified/shifted/rotated. I am opening this issue so I have a branch to store testing scripts and other ephemera while I work.