gnthibault opened this issue 5 years ago
Hello, thanks for taking the time to look at the project!
Yes, I did this project as part of my PhD thesis and worked on some topics related to PSF estimation that, unfortunately, I did not find the time to put here. I would much appreciate it if you could test the code; on my side I can expand the documentation to explain how to estimate and reconstruct the PSF at every location.
The original intention for this topic is to clarify the PSF as a mathematical function and, later on, what ProperImage actually does to calculate the PSF.
I can put some effort into expanding this later this week if you can wait! I would like to have some feedback, so feel free to open more issues or add comments on this thread.
That is great! Of course I can wait. I finished reading the 2002 paper; the concept looks pretty neat and effective :)
I will try to use your code to see if I understood the paper correctly, and see if I can build the estimation of the PSF on my own to start with. At least now I know what to look for in the code:
- extract n star centroid coordinates
- estimate the bounding box size
- extract the bounding boxes (patches)
- consider every patch as a vector, then perform PCA on these
- select the k first orthogonal patches
- reproject all the original patches onto the k basis vectors
- extrapolate from the n star position coordinates to the whole pixel grid (I think that is the most complicated part, no? the paper suggests strongest proximity in the radial direction)
Now I can get, for each pixel, a reconstruction of the PSF given the k first orthogonal vectors. From there I can maybe estimate a 2x2 Gaussian covariance with a simple GMM model, then extract some metrics from there (ellipticity for instance...).
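The patch-PCA steps above can be sketched with plain NumPy (all names here are hypothetical; ProperImage's actual implementation is a Karhunen-Loève expansion and differs in details):

```python
import numpy as np

def psf_basis_from_patches(patches, k):
    """Given n star patches of shape (h, w), return the k first
    orthogonal eigen-patches and the projection coefficients.

    patches : ndarray (n, h, w), background-subtracted, flux-normalized
    """
    n, h, w = patches.shape
    X = patches.reshape(n, h * w)       # each patch as a row vector
    # SVD of the data matrix gives the orthonormal components directly
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k].reshape(k, h, w)     # k first orthogonal patches
    coeffs = X @ Vt[:k].T               # projection of every patch on the basis
    return basis, coeffs

# toy usage: 20 noisy copies of a Gaussian blob
yy, xx = np.mgrid[:15, :15]
blob = np.exp(-((xx - 7)**2 + (yy - 7)**2) / 8.0)
patches = blob[None] + 0.01 * np.random.default_rng(0).normal(size=(20, 15, 15))
basis, coeffs = psf_basis_from_patches(patches, k=3)
```

The remaining step, extrapolating the per-star coefficients to the whole pixel grid, is the part the library handles with its coefficient fields.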
And of course, at the end, perform k instances of the richardson lucy deconvolution algorithm. I will look for how you implemented these steps in your code. Pointing me to the right file/direction would really help.
In the meantime I will play with the code.
Thanks again for your help!
Edit: OK, after a bit of testing I have more precise questions for you. Let's take these few statements:
```python
with single_image.SingleImage(image, smooth_psf=False) as sim:
    a_fields, psf_basis = sim.get_variable_psf(inf_loss=0.15)
    x, y = sim.get_afield_domain()
print("Projected psf model of size {} from the {} first eigenvectors".format(
    psf_basis[0].shape, len(psf_basis)))
print("Reconstructing the psf map of size {} from {} coefficients".format(
    a_fields[0], len(a_fields)))
```
outputs:
```
Projected psf model of size (25, 25) from the 4 first eigenvectors
Reconstructing the psf map of size Model: Polynomial2D
Inputs: ('x', 'y')
Outputs: ('z',)
Model set size: 1
Degree: 3
Parameters:
          c0_0                c1_0          ...          c2_1
    ------------------ ---------------------- ... ----------------------
    0.0849443412808245 -6.358415362390864e-05 ... -4.703254953232475e-11
 from 4 coefficients
```
As you can probably tell, I made the assumption that a_fields[0] was a scalar field (one coefficient per image pixel), although it turns out to be a polynomial model that needs to be discretized.
That is actually a very good design, because one can then choose a more or less coarse grid on which to evaluate the PSF. I guess I'll find the discretization method inside the plot module.
Very nice!
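A minimal sketch of that discretization, assuming each coefficient field behaves like a callable model `a(x, y)` (the astropy `Polynomial2D` instances returned by `get_variable_psf` can be called on coordinates this way); the toy stand-in fields and basis below are mine, not the library's:

```python
import numpy as np

def reconstruct_psf_at(x0, y0, a_fields, psf_basis):
    """PSF at pixel (x0, y0): weighted sum of the eigen-PSFs,
    with weights given by evaluating each coefficient field there."""
    weights = np.array([a(x0, y0) for a in a_fields])
    psf = np.tensordot(weights, np.asarray(psf_basis), axes=1)
    return psf / psf.sum()              # renormalize to unit flux

# toy stand-ins: two coefficient fields and two 5x5 basis images
a_fields = [lambda x, y: 1.0 + 0 * x, lambda x, y: 0.001 * x]
psf_basis = [np.ones((5, 5)) / 25.0, np.eye(5) / 5.0]
psf = reconstruct_psf_at(100.0, 200.0, a_fields, psf_basis)
```

Evaluating the fields on a coarse grid instead of a single point is the same loop, which is what makes the coarse/fine trade-off cheap.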
OK, cool, I think I managed to do what I was looking for, i.e. fit a bivariate Gaussian on the reconstructed PSF. However, I now need to understand this statement:
```python
a_fields, psf_basis = sim.get_variable_psf(inf_loss=0.05)
```
What is the inf_loss parameter? Can it be found automatically?
Thank you in advance for your help!
Now I understand the limitations of my current approach:
- It is not clear how the bounding box for star patch selection is chosen. In my test the patch size was far too big, sometimes including multiple stars.
- It is also not clear how many stars from the image are chosen. For my use case I would like to be able to select, for instance, 80% of the brightest stars.
Hi! I can help you understand this. I have a few notebooks on reconstructing the PSF; let me upload those. I changed machines, so I hope I can find them. If not, bear with me until the weekend and I will dedicate more time to this!
Basically, the information loss parameter is a cut on the normalized sum of coefficients of the expansion. A short article on this: http://articles.adsabs.harvard.edu/pdf/2017BAAA...59..209S
It is in Spanish, but I can put the notebooks in the docs this weekend.
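If I read that right, `inf_loss` is the fraction of normalized expansion power one allows to be discarded. A hedged numpy sketch of such a cut, with singular values standing in for the expansion coefficients (the exact normalization in the article may differ):

```python
import numpy as np

def n_components(singular_values, inf_loss):
    """Smallest k whose normalized cumulative power reaches 1 - inf_loss."""
    power = singular_values**2
    cum = np.cumsum(power) / power.sum()
    return int(np.searchsorted(cum, 1.0 - inf_loss) + 1)

s = np.array([10.0, 5.0, 2.0, 0.5, 0.1])
k = n_components(s, inf_loss=0.05)   # keep enough modes to retain 95% of the power
```

So a smaller `inf_loss` keeps more eigen-PSFs; choosing it automatically would amount to picking a conventional retained-power threshold.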
I am so thankful, thank you very much! I have also started a notebook on this very topic (basic level though): https://github.com/gnthibault/Optimisation-Python/blob/master/SpatiallyVariantDeconvolution.ipynb
For the moment, I am using a simple method of moments to estimate the center (the $\mu$ parameter) and the $\Sigma$ covariance matrix.
My main problem is that the data used to compute the covariance matrix suffer from "outliers", i.e. pixels that are not actually part of the PSF. The patch I use is simply the reconstruction obtained from the basis vectors times the coefficients provided by your library.
I tried a simple rule to threshold out pixel values lower than 5% of the maximum, but it doesn't really solve the issue.
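For reference, the method-of-moments estimate with that 5% clip looks roughly like this (the thresholding rule and the function are my own sketch, not part of ProperImage):

```python
import numpy as np

def moments_fit(patch, clip=0.05):
    """Estimate mu and Sigma of a PSF patch from weighted first and second
    moments, zeroing pixels below clip * max to tame outliers in the wings."""
    w = np.where(patch >= clip * patch.max(), patch, 0.0)
    yy, xx = np.mgrid[:patch.shape[0], :patch.shape[1]]
    total = w.sum()
    mx, my = (w * xx).sum() / total, (w * yy).sum() / total
    cxx = (w * (xx - mx)**2).sum() / total
    cyy = (w * (yy - my)**2).sum() / total
    cxy = (w * (xx - mx) * (yy - my)).sum() / total
    return np.array([mx, my]), np.array([[cxx, cxy], [cxy, cyy]])

# sanity check on a symmetric Gaussian of sigma=2 centered at (10, 10)
yy, xx = np.mgrid[:21, :21]
g = np.exp(-((xx - 10)**2 + (yy - 10)**2) / (2 * 2.0**2))
mu, Sigma = moments_fit(g)
```

Note that the clip truncates the wings, so the recovered variances are biased low even on a clean Gaussian; that bias is part of why a hard threshold alone doesn't fix the outlier problem.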
On the data I have, I end up with good examples:
But also very bad ones:
Overall, my FWHM estimation is not as useful as it should be. I will check out the math in the Spanish paper (I cannot read Spanish ^^), but I am definitely OK with waiting for the weekend. Thank you again for your support.
OK, it looks like a good approach would be to compute a "goodness of fit", either based solely on the residual or using a statistical test (Bayes factor? something more advanced?), to reject either specific pixels or the entire patch. I guess you already had to face a similar problem; I am really looking forward to your view on this.
Thank you in advance for your help!
Well, I have put some work into the repo this weekend, although I didn't advance much on PSF estimation. I will continue pushing this week.
As a relevant piece of news, I dropped support for Python 2.7. I had several testing issues with tox and could not fix them, and since everything is actually working pretty well on Python 3+, I decided not to waste time on Py27, which is currently 3 months away from being completely dropped.
Regarding your question: you should select a group of stars that don't saturate the detector and that have a good S/N ratio. Also reject sources with blended neighbors; there are flags for this in the extraction stage. I don't know if you use SExtractor or similar software, but these should notify you somehow of blending issues. Also, try to smooth the wings of your star sample; by this I don't mean smoothing the image directly, but you can make an educated guess of the best kernel size and check whether your wings fade low enough.
I invite you to look at the part of the code, best_sources, where this selection is carried out.
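The selection criteria above can be sketched as a simple catalog filter; the field names (`flux`, `flux_err`, `peak`, `flags`) follow SExtractor-ish conventions and are assumptions, not ProperImage's actual record names:

```python
import numpy as np

def select_stars(catalog, saturation=60000.0, min_snr=20.0):
    """Keep sources that are unsaturated, have decent S/N,
    and carry no blending flag."""
    snr = catalog["flux"] / catalog["flux_err"]
    good = (catalog["peak"] < saturation) & (snr > min_snr) & (catalog["flags"] == 0)
    return catalog[good]

cat = np.array(
    [(1000.0, 10.0, 3000.0, 0),     # good
     (900.0, 100.0, 3000.0, 0),     # low S/N
     (5000.0, 10.0, 65000.0, 0),    # saturated
     (1200.0, 10.0, 3000.0, 2)],    # blended (flag set)
    dtype=[("flux", "f8"), ("flux_err", "f8"), ("peak", "f8"), ("flags", "i4")],
)
best = select_stars(cat)
```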
I also want to show you that I currently have a Lucy-Richardson deconvolution routine in the utils module (Lucy-Richardson deconv). I will document this further as well.
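For anyone following along, the Richardson-Lucy iteration itself is compact. A minimal numpy/scipy sketch (ProperImage's routine in utils may differ in details, and `skimage.restoration.richardson_lucy` is a ready-made alternative):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Classic multiplicative RL update:
    estimate <- estimate * [ (image / (estimate (*) psf)) (*) psf_mirror ]."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# toy check: blur a point source, then deconvolve it back
yy, xx = np.mgrid[:31, :31]
psf = np.exp(-((xx - 15)**2 + (yy - 15)**2) / 4.0)
psf /= psf.sum()
truth = np.zeros((31, 31))
truth[15, 15] = 100.0
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(np.clip(blurred, 0, None), psf, n_iter=50)
```

Running one RL instance per reconstructed PSF, as described earlier in the thread, is then just a loop over patches.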
Sorry for the incompleteness of my documentation, I will try to push this to make it better. Best!!
I also added a last block of code to the documentation for the advanced single image (tutorial 3, I think); it performs a positional determination of the PSF using the variable estimation, and plots isophotes over the actual stars of the field.
It is not perfect, but it kind of illustrates what is actually happening, and you also get the recipe for building the PSF anywhere in the image.
OK, from what I saw in tutorial 3, I think I had already managed to find the coefficient map and reconstruct the PSF at arbitrary locations, following your tutorial and looking at the code before.
I looked at SExtractor/PSFEx a bit in the past, but I found those tools way too heavy and cumbersome to use in my context. That is why I was actively looking for a code like yours for star detection / patch building...
All the examples I showed before are taken from a notebook I worked on based on your tutorials.
What I was missing, though, is the background subtraction; it is definitely a very valuable feature of this code. I will try to use it ASAP and report my results here! I am really looking forward to it.
OK, something I didn't mention from your previous comments: the implementation looks like PCA on the pixels of the stamps, but it actually performs a Karhunen-Loeve decomposition, which I think is a bit different (just a bit).
OK, here is what I got checking a few examples of source detection with the tutorial 3 example. It correlates with what I was seeing in terms of reconstructed PSF:
There is definitely a problem with the source detection: the patch is way too big. I will take a look at the best_sources method you suggested: https://github.com/toros-astro/ProperImage/blob/5653dd2c99fd2b96a4f7bd4c1b371eaf613e07d4/properimage/single_image.py#L382
I'll try to base the source detection / patch size estimation on the background-subtracted image (this might help; if not, I'll keep digging).
OK, this looks good, and bad. On the bright side, I see isophotes drawing your sources' profiles. On the bad side, it's fitting the noise, and this shouldn't be happening.
I was discussing with a collaborator whether this functionality would be worth moving to a standalone Python package, and wanted to know if you would find that a better approach.
Is this PSF reconstruction done using the properimage SingleImage class and methods? Or is it an implementation of your own making?
If it's properimage, I would ask you to send me the image you are using so I can reproduce this behaviour. Thanks.
OK, I will document my trial. In the meantime, can you check the _bkg versus __bkg attribute in the SingleImage class? In my understanding there might be a bug there.
Hi, I am not sure about the bug situation.
This _bkg and __bkg are both part of a property, and one of them has more to do with the setter and the masking section. I do feel this is poorly engineered, but that is because of my lack of engineering knowledge.
If there is any misbehaviour, though, I will be glad to reproduce it and see if it needs refactoring.
Also, if you could help me improve this section of the code, we can treat it as a separate issue; I would be glad to receive any kind of suggestions.
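For what it's worth, the `_bkg` / `__bkg` pair may just be Python name mangling at work: a double-underscore attribute inside a class body is rewritten to `_ClassName__bkg`, which is a common way to back a lazily-computed property. A tiny self-contained illustration (a hypothetical sketch, not ProperImage's actual code):

```python
class SingleImageSketch:
    """Hypothetical: `background` is a lazy property and `__bkg` its
    private cache (mangled by Python to _SingleImageSketch__bkg)."""

    def __init__(self, data):
        self.data = data
        self.__bkg = None          # private cache, filled on first access

    @property
    def background(self):
        if self.__bkg is None:
            # stand-in estimate; the real class computes a 2-D background map
            self.__bkg = sum(self.data) / len(self.data)
        return self.__bkg

img = SingleImageSketch([1.0, 2.0, 3.0])
img.background                     # first access computes and caches
```

So seeing both names in the class is not necessarily a bug, though a setter that writes one name while the getter reads the other would be.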
OK, I did not realize it was on purpose. I will check again then, because at some point I saw that source detection was maybe not using background subtraction beforehand.
Anyway, if you have time, can you take a look at this image? It really shows the main issues I am experiencing with PSF estimation with properImage:
Hi, I will take a look at it. From visual inspection, I would guess this image has spherical aberration, which may depend on the wavelength. This is why stars look oval shaped, with their major axes pointing towards the center of the field. I would say this is a great example for modifying the polynomials into radial basis functions; that modification could improve this.
Is this a stack of three images on three different bands? Could you send me the original files?
Hi @BrunoSanchez, and thank you for your feedback. I used this very simple RGB image (non-calibrated filter) to check whether the computed PSF ellipticity was following a radial pattern. However, what I ended up with was something quite different:
Using a radial basis function is definitely NOT what I would like to do, because I want to use this tool to diagnose whether the optical alignment of the system is good or not. But now that you mention it, I am starting to realize that the xy polynomial you use to model the PSF parameter distribution might be too coarse (high bias).
A knn regressor might be much more efficient for what I am trying to do. But this is only part of the problem: I still need to fix the star patch sampling issue, which used way too big patches.
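A hedged sketch of that idea: replace the global polynomial with a k-nearest-neighbors average of the per-star expansion coefficients (plain numpy here; scikit-learn's `KNeighborsRegressor` would do the same job with distance weighting for free):

```python
import numpy as np

def knn_coeff_field(star_xy, star_coeffs, query_xy, k=5):
    """Predict the PSF-expansion coefficient at each query position as the
    mean coefficient of the k nearest stars (locally adaptive, low bias)."""
    d2 = ((star_xy[None, :, :] - query_xy[:, None, :])**2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]
    return star_coeffs[nearest].mean(axis=1)

# toy data: 50 star positions with a coefficient that varies linearly in x
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(50, 2))
coeffs = 0.01 * xy[:, 0]
pred = knn_coeff_field(xy, coeffs, np.array([[50.0, 50.0]]), k=5)
```

Unlike a degree-3 polynomial, this makes no global shape assumption, which is what you want if the goal is diagnosing misalignment rather than imposing a radial model.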
I'm trying to load your image, but wanted to know exactly which Python package you used for loading it into an array? Pillow? Scikit-image?
Besides, you can actually impose your own stamp shape; it is an option in the SingleImage class, and you should pass a tuple giving the pixel size in this format: (dx, dy).
```python
from skimage import io

array = io.imread('path/to/data.jpg')
```
Even when creating the properImage object with stamp_size set, I get an "updating stamp shape to (15,15)" message.
https://gist.github.com/BrunoSanchez/f3f1b0a33890e98dabe5dcc92859e0c4
I got this far. Let me continue this tomorrow.
I am also investigating what information can be obtained from sep directly, as I did not really know this module before (I had just read a bit of the big SExtractor documentation). Apparently it directly provides an ellipse estimation for each source, which in turn I can use to find the axis-aligned bounding box.
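Indeed, `sep.extract` returns per-source ellipse parameters `a`, `b`, `theta` (semi-axes and position angle). The axis-aligned half-extent of such an ellipse follows from the standard formula, which gives a per-star bounding box directly; the inflation factor below is my own knob, not anything from sep:

```python
import numpy as np

def ellipse_bbox_halfwidths(a, b, theta, scale=3.0):
    """Axis-aligned half-extent of an ellipse with semi-axes (a, b) rotated
    by theta, inflated by `scale` to capture the PSF wings:
        dx = scale * sqrt(a^2 cos^2(theta) + b^2 sin^2(theta))
        dy = scale * sqrt(a^2 sin^2(theta) + b^2 cos^2(theta))
    """
    dx = scale * np.sqrt((a * np.cos(theta))**2 + (b * np.sin(theta))**2)
    dy = scale * np.sqrt((a * np.sin(theta))**2 + (b * np.cos(theta))**2)
    return dx, dy

# an unrotated 3.0 x 1.5 ellipse, inflated 3x
dx, dy = ellipse_bbox_halfwidths(a=3.0, b=1.5, theta=0.0, scale=3.0)
```

Sizing each patch this way (instead of one global stamp size) would address the "patch way too big" problem per source.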
Coming back here just to mention that future plans for the properimage project involve splitting the PSF estimation off into a separate package, and we would maintain both of them separately.
Amazing, I was about to get back to this problem, as a friend advised me to try out Moffat vs Gaussian. I'd be happy to check out the future repo. Good luck!
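On the Moffat vs Gaussian point: the Moffat profile (1 + r²/α²)^(−β) has power-law wings, so it decays far more slowly than a Gaussian matched to the same FWHM, which is why it often fits real seeing better. A small numpy comparison, matching the two via FWHM_moffat = 2α√(2^(1/β) − 1):

```python
import numpy as np

def moffat(r, alpha, beta):
    return (1.0 + (r / alpha)**2) ** (-beta)

def gaussian(r, sigma):
    return np.exp(-r**2 / (2.0 * sigma**2))

alpha, beta = 3.0, 2.5
fwhm = 2.0 * alpha * np.sqrt(2.0**(1.0 / beta) - 1.0)
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # Gaussian with the same FWHM

r = 3.0 * fwhm                 # well out in the wings
wing_ratio = moffat(r, alpha, beta) / gaussian(r, sigma)
```

At three FWHM from the center the Moffat profile is orders of magnitude above the matched Gaussian, which is exactly the wing flux a Gaussian-only fit writes off as "outliers".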
Dear developer, thank you very much for this awesome project! I started to read the mentioned papers (the 2002 one about spatially varying PSF, and the newer ones about statistical hypothesis testing on differential imaging).
It is really outstanding work! Thank you very much for making it available here!
I am not done reading the paper about the PSF decomposition yet, but I was wondering if it is possible to get a reconstruction of the PSF at various locations in the image, so that one can then compute quality metric maps across the image.
These would for instance include:
- anisotropic Gaussian matrix estimation, and from there:
- an FWHM map
- an ellipticity map
I was eager to see what will be at https://properimage.readthedocs.io/en/latest/topics/PSF.html but there is nothing there yet. What do you intend to put there?