DES-SL / BlueRings

An extended re-implementation of Gavazzi et al.'s RingFinder galaxy-scale strong gravitational lens detection robot. Subtract red light from blue, analyze residuals, classify.
MIT License

Basic residual image analysis #9

Open drphilmarshall opened 8 years ago

drphilmarshall commented 8 years ago

James Nightingale has made some progress on this: let's invite him to check in his code, so it can be assimilated. Would you mind contacting him, please @tcollett ? I don't know his GitHub username.

drphilmarshall commented 8 years ago

A search for James Nightingale on GitHub draws a blank. @tcollett if he can log in and reply to this issue, I'll make him a collaborator so he can direct push.

bnord commented 8 years ago

I've got his info. Can you open an issue for me to contact him / add him?

drphilmarshall commented 8 years ago

@bnord - done! See #13

bnord commented 8 years ago

Hi @JamesN2211 , Could you take a look at this issue?

JamesN2211 commented 8 years ago

Hi all,

I'm still getting used to Git, so if it sounds like I'm missing anything then point me in the right direction!

I developed a MATLAB code which tried to improve on the island-finding approach of Gavazzi's RingFinder. Basically, it (linearly) mapped the residuals into circular annuli and then ran various fast algorithms on those annuli to look for extended arc-like features, multiple images, or anything 'lens-like'. It ran pretty well on Tom's simulations and was pretty much as fast as you could hope for.

I'm a bit more familiar with Python now and would be happy to try to find some time (thesis writing atm, so time is a commodity!) to convert the MATLAB script to Python for BlueRings. I have a lot of ideas for other features that might improve it too, as I'm sure that once proper testing begins we'll see contaminants throw up a lot of unexpected problems.

drphilmarshall commented 8 years ago

Hi James :-) This sounds great, thank you! It sounds like your circle (Hough?) transform takes a somewhat similar approach to the one used by Courbin's group in their PCAfinder, is that fair? How were you centering the circles? And what was your strategy for working with the three (g-z, r-z, i-z) residual images?

I'm interested to see what comes out of your routine. If your MATLAB code does not take up much space, and is readable, would you mind checking it in, please? That way we can see what you were doing, and can help you with the Python translation as you go.

Also, regarding git and GitHub: you might find this little video tutorial and FAQ useful:

https://github.com/drphilmarshall/GettingStarted/blob/master/README.md#top

JamesN2211 commented 8 years ago

I hadn't read that paper in a long time, but both approaches begin in pretty much exactly the same way. However, I would say mine differs after the polar transform, as it essentially tries to employ island-finding techniques (like Gavazzi's work) but in a polar coordinate system, which can better exploit symmetries in the 'islands' to look for lens-like features.

I generally used the brightest lens image pixel for the center (with a scheme that went to half-integer accuracy). This could be improved by non-linearly iterating the center to give the best coverage of the residuals, at the expense of a longer run time (from ~0.1 seconds to 1-5 seconds).
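For concreteness, here's a minimal numpy sketch of that centering step, assuming the residual image is a 2-D array. The half-integer refinement shown (a flux-weighted centroid over the 3x3 neighbourhood of the peak) is an assumed stand-in for whatever the original MATLAB scheme actually did:

```python
import numpy as np

def find_center(residual):
    """Center on the brightest pixel, refined to half-integer accuracy.

    The refinement here is a flux-weighted centroid of the 3x3
    neighbourhood of the peak -- an assumed stand-in for the original
    half-integer scheme.
    """
    iy, ix = np.unravel_index(np.argmax(residual), residual.shape)
    # Clip the 3x3 window to the image edges.
    y0, y1 = max(iy - 1, 0), min(iy + 2, residual.shape[0])
    x0, x1 = max(ix - 1, 0), min(ix + 2, residual.shape[1])
    window = np.clip(residual[y0:y1, x0:x1], 0.0, None)
    total = window.sum()
    if total <= 0:  # no positive flux nearby: fall back to the peak pixel
        return float(iy), float(ix)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    cy = (window * ys).sum() / total
    cx = (window * xs).sum() / total
    # Snap the centroid to the nearest half-integer.
    return np.round(2.0 * cy) / 2.0, np.round(2.0 * cx) / 2.0
```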

The method currently analyzes each residual image separately, and I figured we'd come up with some sort of combined likelihood function / figure of merit at the end. Unified approaches could probably offer a big improvement, though.
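No combined figure of merit exists yet; as a placeholder it could be as simple as a weighted average of the per-band scores (the score definitions and weights below are entirely hypothetical):

```python
import numpy as np

def combined_score(band_scores, weights=(1.0, 1.0, 1.0)):
    """Toy combination of per-band (g-z, r-z, i-z) detection scores.

    A weighted mean is shown purely as a placeholder; a proper joint
    likelihood over the three residual images would replace it.
    """
    return np.average(np.asarray(band_scores, float), weights=weights)
```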

I think it's probably best I show you how it works (I made some decent images for previous DES talks), so I'll get those uploaded tomorrow with a brief description and overview. The code is (hopefully) pretty readable too, and fairly easy to operate, so I'll make sure that goes up too.

JamesN2211 commented 8 years ago

Okay, so I've attached some images to give a rundown of the residual image analysis. These begin by showing the residuals that are left over from a normal image-differencing analysis; I've chosen a simulation with a doubly lensed source (diffimlens21cirgrid.png).

[Image: diffimlens21cirgrid.png]

Basically, a grid of circular annuli is laid over the square pixel grid on which the image is observed, and the flux in each annulus is computed: the sum, over all square image pixels that the annulus overlaps, of each pixel's flux multiplied by its fractional overlap (so one annulus's flux is the total flux in the full ring going round). This is then normalized by the annulus's surface area. The basic idea is to look for annuli with a residual excess.
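A minimal numpy sketch of this step, under one simplifying assumption: the exact pixel/annulus overlap fractions are approximated here by splitting each pixel into a fine subpixel grid (the original MATLAB code computed the geometric overlaps directly). Function and parameter names are illustrative:

```python
import numpy as np

def annulus_fluxes(residual, center, radii, oversample=4):
    """Mean surface brightness in each circular annulus.

    `radii` holds the annulus edges (in pixels). Pixel/annulus overlap
    fractions are approximated by splitting every pixel into
    oversample**2 subpixels, each carrying an equal share of the flux.
    """
    cy, cx = center
    ny, nx = residual.shape
    # Subpixel centers, assuming pixel i spans [i - 0.5, i + 0.5).
    off = (np.arange(oversample) + 0.5) / oversample - 0.5
    ys = (np.arange(ny)[:, None] + off).ravel()
    xs = (np.arange(nx)[:, None] + off).ravel()
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    r = np.hypot(yy - cy, xx - cx)
    subflux = np.repeat(np.repeat(residual, oversample, axis=0),
                        oversample, axis=1) / oversample**2
    fluxes = []
    for r_in, r_out in zip(radii[:-1], radii[1:]):
        ring = (r >= r_in) & (r < r_out)
        area = np.pi * (r_out**2 - r_in**2)
        fluxes.append(subflux[ring].sum() / area)  # normalize by area
    return np.array(fluxes)
```

Called as, say, `annulus_fluxes(residual, find_center(residual), np.arange(0.0, 16.0))`, this returns one surface brightness per annulus, and the excess annuli are the ones standing above the noise.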

Once we have found a set of annuli with residual excesses, we merge them into one large circular annulus and discard the rest, which didn't contain enough of a flux excess. This is shown below, where the remaining annulus contains all the residual flux (a code sketch of the merge follows the image):

[Image: diffimlens21segs.png]
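The merge step might then look like this sketch, taking the per-annulus surface brightnesses from `annulus_fluxes` above; the excess threshold is a placeholder to be tuned:

```python
import numpy as np

def merge_excess_annuli(fluxes, radii, threshold):
    """Merge all annuli whose surface brightness exceeds `threshold`
    into one large annulus spanning their radial range, discarding
    the rest. Returns (r_in, r_out), or None if nothing stands out.
    """
    excess = np.flatnonzero(np.asarray(fluxes) > threshold)
    if excess.size == 0:
        return None
    # Inner edge of the first excess annulus, outer edge of the last.
    return radii[excess[0]], radii[excess[-1] + 1]
```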

We can also now see that this large annulus has been segmented into 32 segments. We now calculate the residual flux in each segment, again using each segment's area of overlap with the square pixels on which the image is observed (the segments all have the same area, so no normalization is needed). Finally, we've decomposed our residuals into a 'straight line' and can look for patterns indicative of lensing. For example, in the image below, we've overlaid the segments where a residual flux excess is again found:

[Image: diffimlens21segflux.png]
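A matching sketch of the segmentation step, using the same subpixel approximation as before (n_seg = 32 as in the image; all names illustrative). The returned length-32 vector is the 'straight line' decomposition of the residuals:

```python
import numpy as np

def segment_fluxes(residual, center, r_in, r_out, n_seg=32, oversample=4):
    """Residual flux in each of n_seg equal azimuthal segments of the
    annulus (r_in, r_out). The segments all have equal area, so no
    normalization is applied.
    """
    cy, cx = center
    ny, nx = residual.shape
    off = (np.arange(oversample) + 0.5) / oversample - 0.5
    ys = (np.arange(ny)[:, None] + off).ravel()
    xs = (np.arange(nx)[:, None] + off).ravel()
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    r = np.hypot(yy - cy, xx - cx)
    theta = np.mod(np.arctan2(yy - cy, xx - cx), 2.0 * np.pi)
    subflux = np.repeat(np.repeat(residual, oversample, axis=0),
                        oversample, axis=1) / oversample**2
    ring = (r >= r_in) & (r < r_out)
    # Assign every subpixel in the annulus to an azimuthal segment.
    seg = np.minimum((theta / (2.0 * np.pi) * n_seg).astype(int), n_seg - 1)
    return np.bincount(seg[ring], weights=subflux[ring], minlength=n_seg)
```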

The idea is that, from here, it's fairly simple to impose sensible lens-selection criteria. For example, you could require that the two segments shown above be offset by ~180 degrees, or, in the case of extended arcs, that they span > 180 degrees. In more detail, you can look at the residual flux distribution and search for magnification patterns suggestive of lensing, like below (a sketch of these cuts follows the image):

[Image: diffimlens21fluxprof.png]
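Those two cuts could be coded up roughly as follows, operating on the segment fluxes from `segment_fluxes`; the excess threshold and the tolerance on the 180-degree offset are placeholders:

```python
import numpy as np

def lens_like(seg_flux, threshold, tol=2):
    """Flag lens-like segment patterns: two excess segments offset by
    ~180 degrees (a double), or a contiguous excess run spanning more
    than 180 degrees (an extended arc).
    """
    n = len(seg_flux)
    hot = np.flatnonzero(np.asarray(seg_flux) > threshold)
    if hot.size == 0:
        return False
    # Double: any pair of excess segments ~n/2 apart (circularly).
    sep = (hot[:, None] - hot[None, :]) % n
    if np.any(np.abs(sep - n // 2) <= tol):
        return True
    # Arc: longest run of consecutive excess segments, with wrap-around.
    mask = np.zeros(2 * n, dtype=bool)
    mask[hot] = True
    mask[hot + n] = True
    run = best = 0
    for m in mask:
        run = run + 1 if m else 0
        best = max(best, run)
    return min(best, n) > n / 2
```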

And that is about where I got to. I think the good thing is it's an improvement on island finding in square pixels, as it makes it fairly easy to look for the patterns / symmetries indicative of lensing. It's also very fast, as it keeps everything linear, and therefore has plenty of room for improvement, computationally speaking.

The downside is it'll probably still be hurt badly by contaminants, and it may require a dangerous amount of tuning to remove them. I think this was always intended as a 'first step' residual scanner, to get us a highly complete but heavily contaminated sample, from which we'd move on to something more rigorous.

I've attached another example below, showing an extended arc lens:

[Image: diffimlens1cirgrid.png]

[Image: diffimlens1segs.png]

[Image: diffimlens1segflux.png]

[Image: diffimlens1fluxprof.png]

I'll get the Matlab code uploaded now, and am happy to hear questions / queries / ideas from here!

JamesN2211 commented 8 years ago

The method is uploaded. I couldn't see an obvious way to stick it all in one folder, so it's just in the main directory. If anyone can clean that up so it's all together, that'd be super!