GalSim-developers / GalSim

The modular galaxy image simulation toolkit. Documentation:
http://galsim-developers.github.io/GalSim/

chromaticity #467

Closed jmeyers314 closed 10 years ago

jmeyers314 commented 11 years ago

Hi all, I'd like to add chromatic PSFs to GalSim. I think most chromatic effects can be captured rather simply by creating a series of monochromatic PSFs, dilating, shifting, and adjusting the fluxes of these, and then adding them up. The main issue is that if your PSF is the sum of 1000s of monochromatic PSFs, then GalSim starts to run slowly. To combat this in my own code, I tend to draw the sum-of-monochromatic-PSFs into an InterpolatedImage and use that instead.

Here's some pseudocode that attempts to do all of this:

def chromatic_PSF(wave, photons, centering, sizing, fiducial_PSF, interpolate, PSF_kwargs):
    mpsfs = []
    for w, p in zip(wave, photons):
        monochrome_PSF = fiducial_PSF(flux=p, **PSF_kwargs)
        monochrome_PSF.applyDilation(sizing(w))  # seeing vs lambda, or diffraction limit vs lambda
        monochrome_PSF.applyShift(centering(w))  # differential chromatic refraction
        mpsfs.append(monochrome_PSF)
    PSF = galsim.Add(mpsfs)
    if interpolate:
        # collapse the potentially huge sum into a single profile for speed
        im = galsim.ImageD()
        PSF.draw(image=im)
        PSF = galsim.InterpolatedImage(im)
    return PSF

The centering function for differential chromatic refraction is somewhat complicated, so I won't place it here. The sizing function is pretty simple though:

def sizing(wave_ref, n):
    return lambda w: (w / wave_ref) ** n

Set n to 1 for diffraction limit chromaticity, n to -0.2 for Kolmogorov seeing chromaticity.
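As a quick standalone check of the two regimes (the 500 nm reference wavelength here is just an illustrative choice):

```python
def sizing(wave_ref, n):
    """Dilation factor relative to a reference wavelength (same units)."""
    return lambda w: (w / wave_ref) ** n

diffraction_limited = sizing(500.0, 1.0)   # PSF size grows linearly with lambda
kolmogorov_seeing = sizing(500.0, -0.2)    # seeing shrinks slowly with lambda

print(round(kolmogorov_seeing(700.0), 3))  # 0.935: redder light, slightly better seeing
print(diffraction_limited(1000.0))         # 2.0: PSF twice as large at 1000 nm
```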

Is there any interest in this for GalSim? or should it be kept separate? I'd be happy to do the coding/testing (grateful even, as it would give me some github experience), but would probably need guidance on the github workflow and where-to-place-code / GalSim-philosophy stuff.

-Josh

gbernstein commented 11 years ago

Hi Josh - there will be similar need for the case where the galaxy components have different spectra so the appearance of the galaxy is also wavelength-dependent. The ambitious interpretation of this issue would be that it could produce images for this case too (again by just defining a few sub-bands across the filter and adding them). The noise could be added just once after the images are summed up.

barnabytprowe commented 11 years ago

Is there any interest in this for GalSim? or should it be kept separate? I'd be happy to do the coding/testing (grateful even, as it would give me some github experience), but would probably need guidance on the github workflow and where-to-place-code / GalSim-philosophy stuff.

I'd be very happy to see this added, and happy to provide any help possible with the modest procedures we've set up here over the year+ this project has been running! Of course, we're still running a little stretched on getting the non-beta GREAT3 out of the door, but I would definitely be very happy to see a GSObject of this type added to GalSim!

reikonakajima commented 11 years ago

Hi Josh---great! I'm looking forward to the chromaticity-affected images. I think the best people to consult on under what GalSim structure to put your code are @TallJimbo and @rmjarvis.

rmandelb commented 11 years ago

Hi Josh - this sounds excellent. I'd be pleased to see this happen and could discuss details with you if you wish. My suggestion is that you take a look at devel/git.txt and devel/credo.txt. The former gives some basic git information interspersed with details of the GalSim workflow (including naming for branches, how to get the issue number in the commit message automatically, etc.). The latter includes the coding guidelines that we stick to (albeit loosely at times).

Of course, please feel free to ask questions on here about any of it, from workflow details to implementation.

rmjarvis commented 11 years ago

Is there any interest in this for GalSim?

Definitely agree that this is something we want in GalSim!

My suggestion for the API would be to have a new ChromaticObject base class from which you could derive various specific kinds of chromatic objects.

The basic object would describe anything with f(x,y,lambda), but the actual implementation of what that function is would be in the derived classes, one of which would presumably look something like what you have here.

But I could imagine others that are simply an SED flux times a single profile, which would be the simple starting point for galaxies with color dependence. We could also have ones that are the sum of two different sed/profiles for bulge plus disk. On the PSF side, I could imagine wanting to have an OpticalPSF model with the aberrations being wavelength dependent. Convolutions of this with the atmospheric PSF. And so forth.

To leverage the existing GalSim functionality, I think each class would just need to be able to return a GSObject at a particular wavelength, doing any interpolation that might be required.

Then in the base class we would just need to work on how to efficiently collapse the lambda-dependent profile down to an observed profile. My guess is that this is most efficiently handled at the draw stage.

The draw command for ChromaticObject could take a wavelength range, and possibly a transfer function. Then it would just draw the object at each wavelength in the range using the add_to_image option that we already have for draw, scaling the flux appropriately given dlambda and the transfer function (if any).

We'd have to think about whether a straight sum like this is sufficient or whether it would be better to effect the integral over lambda using something more sophisticated. Trapezoid or even Simpson's rule wouldn't be too hard, but we might even consider adaptive integration strategies like GKP (what the integ package does in 1-d). My guess/hope is that we don't need this, and that one of the simpler algorithms would be fine.
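A minimal sketch of the straight-sum approach described above, with 2-d NumPy arrays standing in for drawn images; the `profile_at` and `throughput` callables are hypothetical names, not GalSim API:

```python
import numpy as np

def draw_through_band(profile_at, throughput, wave_min, wave_max, n_steps=100):
    """Collapse f(x, y, lambda) to an observed image by a midpoint Riemann sum.

    profile_at(w) returns the drawn image (2-d array) at wavelength w;
    throughput(w) is the filter transmission (the transfer function).
    """
    dw = (wave_max - wave_min) / n_steps
    waves = wave_min + dw * (np.arange(n_steps) + 0.5)  # midpoint of each slice
    # Each monochromatic image is scaled by throughput * dlambda and accumulated,
    # analogous to drawing with add_to_image at each wavelength.
    return sum(profile_at(w) * throughput(w) * dw for w in waves)
```

With a flat throughput this just integrates the profile over the band; swapping the midpoint sum for trapezoid or Simpson weights would only change how `waves` and the per-slice weights are chosen.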

jmeyers314 commented 11 years ago

Cool. I've read through git.txt and devel.txt, and I think I understand how to get started with creating a branch. I think I need credentials before I can execute git push -u origin '#467', though?

As for implementation, any advice on a starting point? Perhaps existing code that I should read through before trying to build new code?

Mike's API sounds good to me, and I think handles Gary's 'ambition' quite nicely.

rmandelb commented 11 years ago

I think I need credentials before I can execute git push -u origin '#467', though?

Yes. I just added you as a member to the GalSim-developers organization, which I think is all you need to be able to push to GalSim.

I would suggest you take a look at how the GSObject base class and its methods are set up in galsim/base.py, and then how some of the more complex types of GSObjects are set up e.g. in galsim/interpolatedimage.py.

rmjarvis commented 11 years ago

Right. I think you should make yourself familiar with the code in galsim/base.py. This is where most of the foundations of GalSim are. (At least for the python layer. The C++ layer is really the foundation, but I don't think you'll need to dive into that just yet.)

Especially the draw command. It mostly passes off to the C++ draw function, but it does some manipulations to the profiles beforehand. We'll do something similar to draw ChromaticPSFs -- figure out the profile at each wavelength, add any manipulations we need (e.g. rescale the flux), and then call the C++ function.

We might even use different algorithms depending on the attributes of the ChromaticObject. e.g. For some cases it may be faster to call the C++ function once where the PSF is a weighted sum of the PSFs at different wavelengths (i.e. the "effective" PSF), and other times we might be better off drawing the convolved object at each wavelength and summing the resulting image. We can put this logic in the python layer draw function.
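The first case, the "effective" PSF, could be sketched like this (NumPy arrays stand in for GalSim profiles; every name here is illustrative):

```python
import numpy as np

def effective_psf(psf_at, throughput, waves):
    """Throughput-weighted sum of monochromatic PSFs.

    Convolving once with this single effective profile can be cheaper than
    drawing the convolved object separately at every wavelength.
    """
    weights = np.array([throughput(w) for w in waves], dtype=float)
    weights /= weights.sum()  # normalize so the effective PSF has unit flux
    return sum(wt * psf_at(w) for wt, w in zip(weights, waves))
```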

The other one I would recommend looking at is galsim/compound.py, since you need to be able to do similar things with the ChromaticObject. It includes Add and Convolve in particular. We'll have to decide whether we want these existing function names to work on ChromaticObjects as well and put the logic for that into the existing functions, or if we want to use different names when we add two ChromaticObjects (or a ChromaticObject and a regular GSObject, which should be allowed).

The other aspect of the API that will require some thought is whether we want the existing GSObject to derive from ChromaticObject. It might make some of the implementation easier if we can use a regular GSObject like a ChromaticObject where the wavelength dependence is trivial. e.g. gsobject.evaluateAtWavelength(lambda) would just return itself. I suspect we could arrange it where these hooks wouldn't adversely affect any existing code.

jmeyers314 commented 11 years ago

Hi all,

After getting some tips from Rachel and Mike yesterday, I've finally made some commits to this issue's branch. This branch now has some rudimentary ability to simulate multi-component galaxies (bulge+disk) with different SEDs for each component, and also chromatic PSFs with effects such as atmospheric differential chromatic refraction, wavelength-dependence of seeing, wavelength-dependence of diffraction limit, and possibly more that I haven't thought of.

A couple questions/comments came to mind when writing this:

  1. Chromatic objects need to know about SEDs and filters. Currently, I just require that the user provide wavelength and flux/throughput arrays in units of nanometers, photons/nanometer, and dimensionless throughput. In principle, we could make this more generic with classes to handle spectra and filters; for example, one might wish to specify f_lambda or f_nu instead of the f_photons that I've currently hardwired into the code. Maybe synthetic photometry, extinction, and redshifts would also be nice?
  2. For the one test I created so far, I use differential chromatic refraction. I wasn't sure if this was too specific to place in the main galsim directory or not, so for now it lives in the test_chromatic.py file. Should this be placed somewhere more accessible for users?
  3. I also added a test spectrum and a test filter. Should I add and test with some more? Should we move them out of tests to a more prominent location? Or should we not fill up repository space with these?

Finally, I'd appreciate if someone would browse through the code and let me know if I'm breaking any coding standards or anything else bad before I add too much more.

Thanks!

rmandelb commented 11 years ago

Hi Josh -

A few comments, in no particular order:

jmeyers314 commented 11 years ago

Thanks for the comments, Rachel.

Here's a possible task list for this issue before merging:

I've left out items concerning speed of computation for now, but that will also be something we'll want to think about eventually.

There are probably also items I simply haven't thought about yet.

To all, let me know what you'd add or change.

-Josh

barnabytprowe commented 11 years ago

Hi Josh,

I also took a look at the code and it seems very nicely laid out indeed - you have got the style for sure. Rachel is right that it doesn't hurt to be explicit at all times about the formats and purposes of the various inputs and outputs. We often get quite verbose in our docstrings, I guess, but it's better to have too much than too little.

I also think your tests are good. One thought I had though was that I found the inheritance relationships between the various classes in galsim/chromatic.py weren't working quite as I expected. Perhaps I was merely confused by the fact that the base class wasn't in fact the class with Base in the name!

Either way, I think that we could think about deeper structure here (along with thinking about how these ought to relate to GSObjects) in a fruitful way. I'll ponder. Mike mentioned a good idea - making GSObjects inherit from these classes in a lightweight way - but I also think that there would be much that could be usefully inherited in the other direction too...

rmandelb commented 11 years ago

This is still without a very careful look at the code, so take it with a grain of salt:

jmeyers314 commented 11 years ago

Hi Barney,

Thanks for taking a look.

... I found the inheritance relationships between the various classes in galsim/chromatic.py weren't working quite as I expected. Perhaps I was merely confused by the fact that the base class wasn't in fact the class with Base in the name!

Yeah, my intention was for "Base" in ChromaticBaseObject to mean "chromatic version of objects in base.py"; but maybe there's a better name for this? Maybe it's clearer to just go with ChromaticSersic, ChromaticGaussian, etc. ?

Either way, I think that we could think about deeper structure here (along with thinking about how these ought to relate to GSObjects) in a fruitful way. I'll ponder. Mike mentioned a good idea - making GSObjects inherit from these classes in a lightweight way - but I also think that there would be much that could be usefully inherited in the other direction too...

Definitely agree that the inheritance structures should be thought through more carefully. My initial attempts at having GSObject inherit from ChromaticObject seem to indicate that quite a lot of code needs to be changed, but maybe I just haven't thought of the right shortcut yet. I'm curious what you mean by having useful inheritance in the other direction...

There are also some interesting design choices for mixing GSObjects and ChromaticObjects, such as how to interpret the wavelength-dependence or normalization of GSObjects once they inherit from ChromaticObject. For example, do we interpret the flux parameter as the total number of counts no matter what filter is specified? Or do we interpret flux as counts/nm perhaps?

I'm also not sure what to do with objects in compound.py. It would be convenient to have the Add class be able to combine both GSObjects and ChromaticObjects, and mixtures of the two, but this means that adding one GSObject to another GSObject would automatically promote it to a ChromaticObject, which I'm not sure we want.

rmjarvis commented 11 years ago

I suspect the cleanest design would be to rely on python's duck typing rather than inheritance here. So when both GSObject and ChromaticObject have the same functionality, they could be used interchangeably. I think we could pull this off without requiring GSObject to inherit from ChromaticObject.
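A toy sketch of that duck-typing idea, using the evaluateAtWavelength hook floated earlier in the thread (the two classes here are made up for illustration, not real GalSim types):

```python
class Achromatic:
    """Stands in for a plain GSObject: same profile at every wavelength."""
    def __init__(self, flux):
        self.flux = flux
    def evaluateAtWavelength(self, wave):
        return self  # trivial wavelength dependence: just return itself

class PowerLawChromatic:
    """Stands in for a ChromaticObject with flux ~ (wave / wave_ref) ** index."""
    def __init__(self, flux, wave_ref, index):
        self.flux, self.wave_ref, self.index = flux, wave_ref, index
    def evaluateAtWavelength(self, wave):
        return Achromatic(self.flux * (wave / self.wave_ref) ** self.index)

def flux_at(obj, wave):
    # No isinstance checks or shared base class needed: anything that
    # provides evaluateAtWavelength(wave) works here.
    return obj.evaluateAtWavelength(wave).flux
```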

jmeyers314 commented 10 years ago

Hi All,

I've made some progress implementing chromaticity in GalSim. In addition to the API, I've added more tests and a demo (demo12.py). I'm curious what people think of the API.

I'm still tweaking the implementation, since some of the tests don't reach the precision goals that I would have a priori set for them (I've subsequently rigged the precision goals so that the tests pass on my computer for now).

One part of the implementation is to draw "effective" surface brightness profiles to InterpolatedImages (see ChromaticConvolve). From some quick and dirty toy studies, I think this part might potentially be improved by drawing the effective profiles to higher resolution grids, and/or increasing wmult for these intermediate calculations.

Another area that could be potentially improved is moving from Riemann sums to trapezoid/Simpson's/or-even-fancier rule for integration over wavelength, though I think this would mostly be just for speed, as accuracy can always be increased (at a cost in speed) by decreasing the differential wavelength interval in the Riemann sum.

If anybody has other ideas or comments, please let me know!

-Josh

rmjarvis commented 10 years ago

Hi Josh. This is great stuff. I've started looking at it, and I do have a few comments/suggestions.

  1. I'm not sure I like the ChromaticGSObject class taking the GSObject class name as an argument. It seems like it would work just as well taking an already constructed object as an argument. i.e. instead of
gal = galsim.ChromaticGSObject(galsim.Sersic, wave, photons,
                               n=1, half_light_radius=0.5)

you would write

mono_gal = galsim.Sersic(n=1, half_light_radius=0.5)
gal = galsim.ChromaticGSObject(mono_gal, wave, photons)

It seems like this would be more intuitive as well as more flexible. e.g. You might want to do other things to the mono_gal after building it and before passing it to ChromaticGSObject.

  2. The name ChromaticGSObject is a skosh unwieldy, imo. It could, I think, be shortened to just Chromatic. So gal = galsim.Chromatic(mono_gal, ...). That seems clear to mean adding chromatic dependence to an otherwise non-chromatic object.
  3. I think it would be possible and useful to modify the basic Add and Convolve classes to identify when one (or more) of the list elements is a ChromaticObject and automatically return a ChromaticAdd or ChromaticConvolve in that case.

    In fact, writing that out, I realize that it is a bit weird to have class names that are verbs, which tells me that we should probably change Add to a function that returns either a Sum object or a ChromaticSum object in the two cases. Likewise Convolve should probably be a function that returns either a Convolution or a ChromaticConvolution.

    I think this would be backward compatible with current usage, since the python class construction syntax is identical to its function syntax. final = galsim.Convolve([psf, gal, pix]) works whether Convolve is a class or a function. And I think it would be more intuitive in the chromatic case.

  4. I noticed that you have bulge + disk*5 work correctly in demo12, which is good. However, I suspect it would not work if bulge were a regular GSObject rather than a ChromaticObject. I think all you need to do to enable this is to add __radd__ to the ChromaticObject class.

    OTOH, if we switch to the Add-as-function paradigm, I think that might also make it work.

  5. The name ChromaticShiftAndDilate is also pretty unwieldy, and not necessarily obvious how to use right off the bat. I think it's fine as a class name, but it would be useful to have a driver function that the user would normally call, rather than have to use this class directly.

    For example, the DCR module could have a Refract function (or maybe AtmosphericRefract if refract is deemed too ambiguous, despite the name becoming rather long again) that could be used to create an atmospherically refracted version of a particular object. So the user might do something like this:

psf_500 = galsim.Moffat(beta=2.5, fwhm=0.6)
psf = galsim.Refract(psf_500, zenith_angle=30*galsim.degrees, base_wavelength=500)

which would do all the steps currently done in Part C of demo12.

  6. It strikes me that the base_wavelength idea, which is implicit in the calculations you make for the DCR psf in demo12, and which I made explicit in my suggestion here, would be a useful one to put into ChromaticGSObject as well.

    You currently state that "Flux normalization is set such that the Riemann sum over the wavelength and photon array is equal to the total number of photons in an infinite aperture." I don't quite see where that normalization happens. Probably it is implicit in how you use it I guess. But I don't think that's always the right normalization to use. Is this integral guaranteed to converge? I think at least sometimes, people will want to normalize to a particular wavelength rather than the integral.

    So I would recommend having the base_wavelength be an option there as well. Maybe offer the option to normalize the given object either at a given wavelength (in a flux density kind of meaning) or with the overall integrated flux.

  7. I also think the ChromaticGSObject constructor would benefit from an SED class that would encapsulate the functionality of describing the (relative) flux as a function of wavelength. You currently have the SED provided as wave and photon arrays, but I can easily imagine that someone would rather specify the flux density as a function of wavelength than enumerate it at particular wavelengths.

    You immediately recast the wave and photons arrays as self.photons = galsim.LookupTable(wave, photons), so it would be pretty easy to allow the user to provide a function, since that's basically what the LookupTable class is simulating.

    So I would suggest having an SED class that can take either wave and photons or a function or possibly other things that we might eventually find useful. Then ChromaticGSObject would take an SED as an argument.

    It's a pretty shallow wrapper, I admit, but it seems like a useful abstraction.
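Such an SED wrapper might be sketched roughly as follows (all names hypothetical; `np.interp` stands in for `galsim.LookupTable` in the tabulated case):

```python
import numpy as np

class SED:
    """Sketch of the proposed wrapper: accept either tabulated (wave, photons)
    arrays or a callable, and expose one calling interface for both."""
    def __init__(self, spec, wave=None):
        if callable(spec):
            self._photons = spec
        else:
            wave = np.asarray(wave, dtype=float)
            photons = np.asarray(spec, dtype=float)
            # np.interp plays the role of galsim.LookupTable(wave, photons)
            self._photons = lambda w: np.interp(w, wave, photons)

    def __call__(self, wave):
        """Relative photon flux density at the given wavelength (nm)."""
        return self._photons(wave)
```

Either `SED(lambda w: (w / 500.0) ** -1)` or `SED([1.0, 2.0], wave=[400.0, 600.0])` would then be accepted by a constructor that takes an SED argument.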

Well, I guess that's enough comments for now. Like I said, this is really great stuff. I'm very excited that this is progressing. :) I think it will be really useful.

jmeyers314 commented 10 years ago

Hi Mike,

Thanks for looking at this. Here are some summaries of your suggestions and some replies.

  1. Rewrite ChromaticGSObject constructor to accept a GSObject instance instead of class name. I had been considering this too and have now made the switch. I've left in the ChromaticGSObject methods applyShear, applyShift, etc for now, though I could also be persuaded that these operations should only be done to the GSObject before being chromaticized.
  2. ChromaticGSObject renamed Chromatic. Sure, done.
  3. Add renamed Sum, and Add reintroduced as a function. Similarly Convolve/Convolution. This is brilliant! I was having all sorts of issues trying to get the old Add to figure out what type of object to become when I first started trying to chromaticize galsim. Even when you have GSObject inherit from ChromaticObject, it seems quite difficult. Making Add a function solves this, though. Thanks! This is now implemented. The only issue was that I had to comment out GSObject.__iadd__ and GSObject.__isub__, since it's now possible for a GSObject to mutate into a ChromaticObject through the += operator. I left Deconvolve, AutoConvolve, and AutoCorrelate alone, but I suppose for consistency we should change them too.
  4. ChromaticObject needs a __radd__ method. As you said, I think the Add as function paradigm eliminates this.
  5. Driver for ChromaticShiftAndDilate. I think Refract could work for DCR, (I suspect galsim.dcr.Refract probably is clear enough) but demo12 actually does DCR + chromatic seeing. Perhaps ChromaticAtmosphere to describe both? The base_wavelength idea is a good one. Not implemented yet.
  6. Use base_wavelength for normalization of Chromatic. I think the normalization you quoted above is actually a lie. (Whoops!) The normalization, in terms of the number of counts that eventually get drawn into an image, depends both on the ChromaticObject and on the filter being drawn through (or demo12 Part A wouldn't work). The easiest implementation of normalization is probably something like "counts per nm at wavelength X, assuming 100% transmission for the filter function". With more sophisticated SED and Bandpass classes, one could probably figure out magnitudes or counts. Total image count normalization will require both the ChromaticObject being drawn and the filter being drawn through, which I think is a good approach to take. One question is whether to use just the SED + Bandpass for normalization, or to also use the chromaticized GSObject's flux attribute. I.e., do the following produce the same number of counts or a different number? Currently they produce a different number of counts, even though the SED and filter are the same.

    final1 = galsim.Convolve([galsim.Chromatic(galsim.Gaussian(flux=1, fwhm=1), sed), psf, pix])
    final2 = galsim.Convolve([galsim.Chromatic(galsim.Gaussian(flux=2, fwhm=1), sed), psf, pix])
    image1 = final1.draw(bandpass)
    image2 = final2.draw(bandpass)
  7. Make an SED class. Yup, this was always something I figured we'd want eventually. I've started a very rough one. Similarly, a Bandpass class might also be useful.
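The Add-as-function dispatch from item 3 might look something like this toy sketch (Sum, ChromaticSum, and the is_chromatic flag are illustrative stand-ins, not the real GalSim classes):

```python
class Sum:
    """Stand-in for the non-chromatic sum class."""
    def __init__(self, objs):
        self.objs = objs

class ChromaticSum(Sum):
    """Stand-in for the chromatic sum class."""
    pass

def Add(objs):
    """Return a ChromaticSum if any summand is chromatic, else a plain Sum."""
    if any(getattr(obj, "is_chromatic", False) for obj in objs):
        return ChromaticSum(objs)
    return Sum(objs)
```

Because `galsim.Add([a, b])` reads identically whether Add is a class or a function, existing scripts keep working while mixed GSObject/ChromaticObject sums get promoted automatically.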

-Josh

rmjarvis commented 10 years ago

I left Deconvolve, AutoConvolve, and AutoCorrelate alone, but I suppose for consistency we should change them too.

When you get around to writing the chromatic versions of these, it will be worth doing. But low priority for sure.

I.e., do the following produce the same number of counts or a different number of counts? Currently they produce a different number of counts, even though the SED and filter are the same.

I think either is fine, as long as it is well documented. Basically you have to decide whether the sed parameter just defines a relative flux density distribution that is normalized in some way to the total flux or bandpass flux or whatever, or whether it supersedes the existing flux and fully defines the flux density of the object.

As long as it is well defined (and has good ways to update the overall level to match someone's desired normalization), I think we'll be ok.

rmjarvis commented 10 years ago

FYI, I just merged in the current master, to save you from having to resolve the conflicts.

Not too many conflicts: Image(..., pixel_scale) now needs to be Image(..., scale=pixel_scale), and the dx argument in the draw routines is now scale.

jmeyers314 commented 10 years ago

Pull request has been merged. Closing this issue.