Pinging @craiglagegit so that discussions and updates can happen here.
Oh right, I forgot you had already opened an issue. That's convenient. :)
@craiglagegit I made a branch over here called '#722' (which is our usual style of branch naming -- using the issue number) and cherry-picked your code over to here (50fa1b98840bcaa7a9eae149d9f85f8bc0ef38eb). I also added you as a collaborator. This will make it a lot easier for us to work with you to get this properly incorporated into GalSim, since we can all edit and push to this branch.
Some things that I think need to be addressed before we merge this into master:
- As you suggested, we want BF to be optional, so we need this to be a separate function, not a replacement for the normal PhotonArray::addTo function.
- We'll also want a way to do this when we are doing image generation using FFTs. Maybe draw the image, then make an InterpolatedImage out of it and shoot that down the rest of the way through the silicon.
- Probably implement this in drawImage to do the necessary steps (differently for method='phot' vs other methods).
- Probably only do the Silicon setup once, rather than on every call to drawImage, so it wouldn't have to read the file each time.

@arunkannawadi if you want to start tackling any of these bits, feel free. I won't have time for a while, but I'm happy to help with design questions.
@rmjarvis - Thanks Mike, you have already answered some of my questions. I agree it is not ready to be merged into the main branch - I was just trying to show feasibility. Here are some more questions/comments:
(1) Should I create a new method='phot-bf' in drawImage which calls a new PhotonArray::addToBF?
(2) I'll change to using the rng function. Is there also a random gaussian function which will draw from a normal distribution?
(3) Are there routines to read a text configuration file and pick out name-number combinations, regardless of the order in the file?
(4) I agree we only want to do the silicon constructor once at the beginning, but I'm unclear where I should put it. Please give me some advice there.
(5) I'm afraid I don't understand your comment about wrapping the bare pointers. Can you suggest an example I can copy?
(6) If I want to store the pixel shapes and only update them every N electrons, where should I store that information? Add it to the BaseImage class?
Thanks, Craig
(1) Should I create a new method='phot-bf' in drawImage which calls a new PhotonArray::addToBF?
I think rather than a method, let's make it a kwarg. Maybe silicon_model, which will take an object that is the python version of your Silicon class. Then we could imagine having different models of the silicon that do things differently.

Then if method='phot', it would be able to pass this directly to drawShoot and eventually get it to PhotonArray to handle properly.

Then addTo could take a (shared) pointer to a Silicon object. And if it is NULL, do the old functionality of rounding to the nearest pixel. But if not, it could call out to a method of this class that could do what you currently have in the addTo function. That would provide some nice encapsulation of the functionality and provide an easy way to add other Silicon models down the road if we deem it necessary.
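A rough sketch of that control flow, just to illustrate the idea (the accumulate method on Silicon and the exact image accessors here are placeholders, not settled GalSim API):

template <class T>
double PhotonArray::addTo(ImageView<T>& target, UniformDeviate ud,
                          boost::shared_ptr<Silicon> silicon) const
{
    double addedFlux = 0.;
    for (int i=0; i<int(size()); i++) {
        if (!silicon) {
            // Old behavior: round the photon position to the nearest pixel.
            int ix = int(std::floor(_x[i] + 0.5));
            int iy = int(std::floor(_y[i] + 0.5));
            if (target.getBounds().includes(ix, iy)) {
                target(ix, iy) += _flux[i];
                addedFlux += _flux[i];
            }
        } else {
            // Hand off to the silicon model, which decides which (distorted)
            // pixel the photon's electron ends up in.
            addedFlux += silicon->accumulate(_x[i], _y[i], _flux[i], target, ud);
        }
    }
    return addedFlux;
}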
(2) I'll change to using the rng function. Is there also a random gaussian function which will draw from a normal distribution?
The code that calls addTo is SBProfile::drawShoot. It already has one of our random deviates, called ud, since it is a UniformDeviate. The one that does normal distributions is called GaussianDeviate, but you can make one from the other and they will share the random number stream.

So you can add ud as an argument to addTo and convert. The code would look something like this:
template <class T>
double PhotonArray::addTo(ImageView<T>& target, UniformDeviate ud) const
{
    GaussianDeviate gd(ud, mean, sigma);
    ...
    x = _x[i] + DiffStep * gd() / 10.0;
}
(3) Are there routines to read a text configuration file and pick out name-number combination, regardless of the order in the file?
Sure. It looks like ConfigParser would work to read your config files. If you want to see an example of a class that is basically just configuration information that we set in python and then use in C++, you can look at either GSParams or HSMParams for examples.
(4) I agree we only want to do the silicon constructor once at the beginning, but I'm unclear where I should put it. Please give me some advice there.
Similar to GSParams or HSMParams. We'd want to read the file and set it up in the python layer. Then pass it to drawImage to send it down to the C++ layer.
(5) I'm afraid I don't understand your comment about wrapping the bare pointers. Can you suggest an example I can copy?
Wherever you have a new command, you should put it into a shared_ptr. Also, rather than lists of pointers, I'd use std::vector:
std::vector<boost::shared_ptr<Polygon> > polylist(NumPolys);
for (int p=0; p<NumPolys; p++) {
    polylist[p] = boost::shared_ptr<Polygon>(new Polygon(Nv));
    ...
}
One advantage is that you don't have to delete them. The shared_ptr does that for you. Also, you can safely pass them around, e.g. as return values from functions. When the last copy goes out of scope, that's when it will be deleted. And finally, all Python objects are basically shared_ptrs, so it turns out this makes it easy to pass things between Python and C++ without memory leaks.
(6) If I want to store the pixel shapes and only update them every N electrons, where should I store that information? Add it to the BaseImage class?
I'd make that a parameter of this SiliconModel object that stores parameters about how the BF effect should be used. So someone could set silicon.nphotons_per_update=1000 (or whatever) in the python layer. Then in the C++ layer, you can query the value to find out how many photons to shoot between updates.
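For concreteness, a minimal sketch of what such a parameter bundle could look like on the C++ side (the struct and field names are illustrative, mirroring the silicon.nphotons_per_update idea above, not an actual GalSim class):

struct SiliconModelParams
{
    // Number of photons (or total flux) to accumulate between updates of the
    // distorted pixel shapes.
    int nphotons_per_update;

    SiliconModelParams(int nphotons=1000) : nphotons_per_update(nphotons) {}
};

The python layer would construct and fill one of these (wrapped much like GSParams or HSMParams), and the C++ photon-shooting code would just read nphotons_per_update when deciding whether to recompute the pixel boundaries.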
Mike - Thanks for the detailed response. I think I have most of what I need. This is the area where I still have a question. I need to store not just the parameters of the model, but the actual shapes for each pixel in the image. So if we are shooting an Nx x Ny postage stamp, I need to store 2*Nvertices (say 72) doubles for each pixel. I thought it would be more logical to have these values associated with the image, rather than in the SiliconModel object, but I guess I can just create a data structure in the SiliconModel object that stores these values. I'll plan on doing this unless you tell me otherwise.
I think rather than a method, let's make it a kwarg. Maybe silicon_model which will take an object that is the python version of your Silicon class. Then we could imagine having different models of the silicon that do things differently.
I like this idea. One of the things that was bothering me about the idea of having it as a separate method is that eventually one might want to do something similar in spirit as a way of describing multiple other detector effects (like tree rings). We don't want to have to have separate methods for every single possible detector effect that is treated by moving around photon paths, and all of their combinations.
Dear All,
- As you suggested, we want BF to be optional, so we need this to be a separate function, not a replacement for the normal PhotonArray::addTo function.
- We'll also want a way to do this when we are doing image generation using FFTs. Maybe draw the image, then make an InterpolatedImage out of it and shoot that down the rest of the way through the silicon.
This is actually what I had in mind when Craig and I talked about it. I thought this was basically what was being done now...
I think we want to separate the functionality so that we can do the proper diffractive optics either by FFT or numerically by sampling, and then record the results along with the chromatic information. Then, we should sample those distributions to do raytracing (or fast lookup of the external raytracing package) into the silicon. For speed it is possible to do this in blocks of 10,000 like was done in PhoSim.
- Probably implement this in drawImage to do the necessary steps (differently for method='phot' vs other methods).
So, to the extent possible, we should separate these steps, build the true positions of the arriving photons in memory and then sample them for raytracing into the CCD so as to build up the dynamic effects correctly.
Does that fit into the current paradigm?
I think rather than a method, let's make it a kwarg. Maybe silicon_model which will take an object that is the python version of your Silicon class. Then we could imagine having different models of the silicon that do things differently.
I like this idea. One of the things that was bothering me about the idea of having it as a separate method is that eventually one might want to do something similar in spirit as a way of describing multiple other detector effects (like tree rings). We don't want to have to have separate methods for every single possible detector effect that is treated by moving around photon paths, and all of their combinations.
Not that we shouldn't have the ability to swap in and out different models (we should of course), but just to fill out the conversation a bit: for what we are doing here we were hoping to represent all of these effects in a single physical model in GalSim (like we do in PhoSim, for example). We need to control them separately (so we need a way to configure the class or method that Craig is writing to individually turn things on and off), but this model needs to include tree rings, diffusion, AR coatings, BF, saturation, etc. Some of them can be linked together through (for example) diffusion, so we want to treat them together in a physically motivated and consistent way. We will validate this with lab data. Then we want to follow this with a separate module which will simulate the read-out chain.
For those who don't know: Craig has a physics based electrostatic solver based on actual layout information. For speed purposes we are doing some interpolations on that model now rather than just using the model itself. Currently only brighter-fatter is implemented but eventually we want to reintroduce all of the other physical effects we are modeling in our other tools.
I think it's fine to implement this as a photon shooting extension as Craig has done so far.
Then when using photon shooting for the main image rendering, the photons have their exact positions, which are then propagated into the silicon. When we want to do FFT rendering, we can do that onto an image the normal way (possibly with smaller pixel scale than what we eventually want), build an interpolated image out of it and photon shoot that through the same module. I'm pretty sure I know how to write those steps in the python layer. So it's fine for Craig to focus on getting this working via drawShoot.
Doing this all chromatically is a different issue. We don't have any chromatic stuff for photon shooting currently. But even when we do that, I still think we'll want this to be a photon shooting module that we will run from FFT-drawn images if that's how we want to do the image rendering.
I'd make that a parameter of this SiliconModel object that stores parameters about how the BF effect should be used. So someone could set silicon.nphotons_per_update=1000 (or whatever) in the python layer. Then in the C++ layer, you can query the value to find out how many photons to shoot between updates.
Mike - Sorry for the repeat of above - still learning how to use GitHub. Thanks for the detailed response. I think I have most of what I need. This is the area where I still have a question. I need to store not just the parameters of the model, but the actual shapes for each pixel in the image. So if we are shooting an Nx x Ny postage stamp, I need to store 2*Nvertices (say 72) doubles for each pixel. I thought it would be more logical to have these values associated with the image, rather than in the SiliconModel object, but I guess I can just create a data structure in the SiliconModel object that stores these values. I'll plan on doing this unless you tell me otherwise.
Doing this all chromatically is a different issue. We don't have any chromatic stuff for photon shooting currently. But even when we do that, I still think we'll want this to be a photon shooting module that we will run from FFT-drawn images if that's how we want to do the image rendering.
Yes. Thanks. After your comments I tried to look a little more carefully at what was happening with the photon shooting. The paper I started to look at says the photon shooting is sampling from surface brightness. But, is that after you do an analytic convolution with one of the built-in galaxy shapes? In other words, does the photon shooting of a galaxy include the diffractive effects of the pupil? If so, can that be calculated numerically and stored for an arbitrary aperture with a galaxy shape to sample the brightness from?
Also, I assume this approach can't work when we include Josh's atmosphere and Aaron and Michael's aberrated optics since this relies on working in Fourier space. Is that right?
Doing this all chromatically is a different issue. We don't have any chromatic stuff for photon shooting currently. But even when we do that, I still think we'll want this to be a photon shooting module that we will run from FFT-drawn images if that's how we want to do the image rendering.
One more comment/question on this part:
With #540 it looks like we would have the chromatic info we need. But, in the meantime, could we store a memory object which is the band-pass multiplied by the object SED so we can sample from it to get the proper chromatic response in the CCD on average? Especially in the y band there is a big difference in the conversion depth as the wavelength varies across the band.
@craiglagegit Another structural request: To the extent possible I think it is good to make the highest level of interface into your C++ code as general as possible. Ideally, if needed, we could just swap in the full simulation code without changing the interface. I know it is too slow now, but one could imagine doing this for specialized tests, and we could also work on optimizing it and maybe even get some expert help on making it faster if it was worth it.
That would be ideal. But even if we can't do that and we need to rely on interpolation based on the dynamic pixel boundaries, I think we want to preserve the flexibility of position dependence due to edge effects etc., as we discussed this morning in the SAWG meeting.
Thanks!
Hi all -
Sorry to throw yet more things into the mix, but I have a comment and a question:
I need to store not just the parameters of the model, but the actual shapes for each pixel in the image. So if we are shooting an Nx x Ny postage stamp, I need to store 2*Nvertices (say 72) doubles for each pixel. I thought it would be more logical to have these values associated with the image, rather than in the SiliconModel object, but I guess I can just create a data structure in the SiliconModel object that stores these values.
Do these need to be stored long term? I thought these would be a local variable during the addTo process of taking a PhotonArray and adding them to an image. I didn't think these pixel shapes would be needed beyond that.
The thing that I was suggesting be in the SiliconModel structure was just the number of photons to use between updates. I thought that was what you were asking about.
In this fast lookup scheme, the shapes tell you, for a given current pixel height, where the boundary lies between staying in the pixel you are in and moving to the next pixel. So they need to be stored or accessible.
Do these need to be stored long term? I thought these would be a local variable during the addTo process of taking a PhotonArray and adding them to an image. I didn't think these pixel shapes would be needed beyond that.
No, they only need to be stored during the photon shooting process. Once the shooting is done and we have an image, I don't think we need them any more. But if we call addTo multiple times to build up an image, we would want to keep the distorted pixel coordinates from one addTo call to the next, assuming these calls are on the same image.
The thing that I was suggesting be in the SiliconModel structure was just the number of photons to use between updates. I thought that was what you were asking about.
Right. I agree the SiliconModel structure is the right place to store those parameters.
Arun had made a suggestion (I think in the e-mail thread, before we moved to GitHub) that I think is worth considering. His suggestion was that instead of waiting N photons before updating the calculation of the effective pixel boundaries, the updates happen based on when photons land in proximity to the edges.
I don't think this is quite right. The pixel shape depends on the total charge stored in the pixel and is independent of where in the pixel the photons landed. Arun's suggestion is a good one for deciding whether a photon stays in the undistorted pixel or moves to a neighboring pixel. If it lands near the center, then it is unlikely to move to a neighboring pixel, and there might be a way to use this information to speed up the decision of whether to move the photon or not. But I don't think it impacts the decision of when to update the pixel shapes. Hope this makes sense.
My question for @rmjarvis is what other more recent GalSim functionality might be tricky with photon shooting? For example, can all the WCS stuff work with photon shooting?
Arbitrary WCS is trivial with photon shooting, so that's not a problem. I'm pretty sure it's already implemented properly, although it looks like I didn't make any unit tests of that. I should probably add something to check that.
All the chromatic stuff needs some thinking about how best to do that with photon shooting. That's issue #540, and I haven't thought very much about it yet. I suspect this effort here will probably springboard into that issue. There are some significant API design questions to consider in that regard about how best to do the wavelength sampling and such. It's not a trivial extension of the existing functionality. But for now, we should probably keep this issue achromatic (but with an eye toward the fact that we will eventually have chromatic photons).
Josh's new Fourier optics stuff can't be done with photon shooting of course (that's kind of the point after all), but we can draw the image with FFTs and then photon shoot an interpolated image through the silicon.
The paper I started to look at says the photon shooting is sampling from surface brightness. But, is that after you do an analytic convolution with one of the built-in galaxy shapes?
Every component that is convolved together gets sampled separately. You start with a photon sampled from the galaxy (say). Then the photon shooting routine does the convolution by applying the right offset sampled from the PSF. If the PSF is modeled as multiple components convolved together (e.g. Kolmogorov + Airy) then they each add their own kick.
Verifying that we get equivalent results with both algorithms (fft and phot, and even real-space convolution in some cases) was one of the major ways that we validated the GalSim code. Shears and other transformations are also implemented differently in the two cases, so agreement there was also part of the validation.
This only breaks down if you need to go to Fourier calculations to figure out the surface brightness profile of one of the components. For a simple Airy, you don't have to -- it's all analytic, even with an obscuration. But for an aberrated optical PSF, you need an FFT. We currently implement OpticalPSF as an InterpolatedImage under the hood, so it can be photon shot. But computing the right profile requires an FFT to make that interpolated image object.
could we store a memory object which is the band-pass multiplied by the object SED so we can sample from it to get the proper chromatic response in the CCD on average?
I think for now, we can just supply a single wavelength value, maybe in the SiliconModel structure. This would at least let you get the main difference between bands by selecting something near the middle of the band in question.
Then later once we think more about the idea of chromatic photons, we can work up to the functionality of having the PhotonArray keep track of a separate wavelength for each photon.
BTW, would it be worth having a targeted telecon about this to do some brainstorming about the API design and such. Might be more efficient than GitHub comments...
BTW, would it be worth having a targeted telecon about this to do some brainstorming about the API design and such. Might be more efficient than GitHub comments...
Yes, definitely.
I agree with the telecon idea too!
+1
Count me in on the telecon. Some time early next week?
Perhaps you could make a poll (doodle or whenisgood) to find a day+time combination?
I'm pretty available, so I just picked a range of times early next week. If none of these work for you, let me know and I can add more options.
These are Eastern time? (doesn't seem to have time zone support)
Yes, EDT. Sorry, I forgot to add time zone support when making it, and the option doesn't seem to exist when I go back to edit it. Weird.
I don't think this is quite right. The pixel shape depends on the total charge stored in the pixel and is independent of where in the pixel the photons landed. Arun's suggestion is a good one for deciding whether a photon stays in the undistorted pixel or moves to a neighboring pixel. If it lands near the center, then it is unlikely to move to a neighboring pixel, and there might be a way to use this information to speed up the decision of whether to move the photon or not. But I don't think it impacts the decision of when to update the pixel shapes. Hope this makes sense
I was not suggesting that the update itself depends on where the photons land - I was thinking that the decision of when we update the pixel boundaries can depend on the location of the photon. For each incoming photon, if it falls in the central region of a pixel, then irrespective of the boundary distortions, we can be confident that the photon is not going to contribute to the electron count in any other pixel, so we do not have to update the pixel shape after we receive this photon. We can keep track of the number of photons we have received since our previous update. Once a photon falls dangerously close to the pixel boundary, say within 5% of the pixel size from the edges of the undistorted square pixels (which happens with a probability of ~20%, hence roughly a 5x speed-up), then we update the pixel shape boundary using the photon counts we have and then decide which pixel the latest photon falls into. I think this is a valid way to reduce the number of calls to the new subroutines.
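Just to make the proposal concrete, here is a self-contained sketch of the near-edge test being described (the function name and the 5% margin are illustrative; positions are in pixel units):

#include <cmath>

// Returns true if the photon lands within `margin` of an undistorted pixel
// edge.  Positions are in pixel units, so after subtracting the integer part
// the pixel interior spans [0,1) in each coordinate.
bool nearPixelEdge(double x, double y, double margin=0.05)
{
    double fx = x - std::floor(x);
    double fy = y - std::floor(y);
    return fx < margin || fx > 1.0 - margin ||
           fy < margin || fy > 1.0 - margin;
}

With a 5% margin the near-edge region covers 1 - 0.9*0.9, i.e. about 19% of the pixel area, which is where the ~20% probability and the rough 5x reduction in boundary updates come from.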
I was wrong about the numbers in my previous comment; I updated it on GitHub. It is approximately a 5x speed-up if we only call the new subroutines when the photons fall within 5% of the pixel width from the edges.
I don't think this is quite right. The pixel shape depends on the total charge stored in the pixel and is independent of where in the pixel the photons landed. Arun's suggestion is a good one for deciding whether a photon stays in the undistorted pixel or moves to a neighboring pixel. If it lands near the center, then it is unlikely to move to a neighboring pixel, and there might be a way to use this information to speed up the decision of whether to move the photon or not. But I don't think it impacts the decision of when to update the pixel shapes. Hope this makes sense.

I was not suggesting that the update itself depends on where the photons land - I was thinking that when we update the pixel boundaries can depend on the location of the photon. For each incoming photon, if it falls in the central region of a pixel, then irrespective of the boundary distortions, we can be confident that the photon is not going to contribute to the electron count in any other pixel. We do not have to update the pixel shape after we receive this photon. We can keep track of the number of photons we have received since our previous update. Once a photon falls dangerously close to the pixel boundary, say 10% of pixel size away from the edges of the undistorted square pixels (which happens with probability of ~20% and hence 5x speed up?), then we update the pixel shape boundary using the photon counts we have and then decide which pixel the latest photon falls into. I think this is valid way to reduce the number of calls to new subroutines.
Arun - I think you are right that this would reduce the number of times we re-draw the pixel boundaries, but I think it is still re-drawing them way too frequently. With this scheme, on average, we would need to re-draw them every 5 electrons or so, but I think re-drawing them every 1000 electrons or so will be sufficient.
Sounds good. However, if we are going to let the user pick this number 1000, and say the user wants to be very accurate in simulating the BF effect and hence chooses to update after every electron, then that would be overkill. But I guess for all the applications, not updating for 100-1000 electrons is sufficient.
Hi All,
Since everyone said it is OK can we choose Monday at 1:00pm EST?
I have some other people asking about other times now...
Sounds like a plan!
Sounds good.
Monday 1:00 sounds good, but I'm sure you mean EDT. I'll be there.
Thanks, Craig
Sorry yes EDT. OK see you then!
Hi All,
I sent an email to the people I knew who filled out the poll to attend. Let's use the DESC SSim bluejeans number. I don't want to type that in here so if you didn't fill out the poll and want to attend either find it on the DESC SSim page: https://confluence.slac.stanford.edu/display/LSSTDESC/Survey+Simulations or send me an email for the link.
At SLAC today, we (@cwwalter, @aaronroodman, @arasmssn) had a bit of brainstorming about the API for this feature with the aim of making it easy to add multiple different models for the sensor. @craiglagegit already has two models I guess (the full fidelity slow version and a faster one that makes some approximations), and we'd like to get Andy's version in here as well, so users can swap among the different options to compare. Presumably these would each be at a different point in speed/accuracy space.
I think the sensor code can be reduced down to taking a list of photons with (lambda, image position, photon direction) and an image on which to place the electrons. We already have the idea of a Silicon class (new on this branch) and PhotonArray (existing, but only x,y,flux currently).
I think the API for handling the sensor effects can be reduced to a function that takes a PhotonArray (to which we would want to add lambda, thetax, thetay) and the image and the rng. This function would then add the resulting electrons to the image using whatever sensor model we want to use.
Right now, on this branch, the function that does this is:
double PhotonArray::addTo(ImageView<T>& target, UniformDeviate ud, Silicon* silicon=NULL) const;
I propose changing this to be a function in the Silicon class:
double Silicon::addPhotons(const PhotonArray& phot_array, ImageView<T> target, UniformDeviate ud) const
Then different sensor models will be different classes (that derive from a base Sensor class perhaps) that all implement this function.
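To illustrate what the base class might look like under this proposal (the class and method names follow the discussion above, but the details are only a sketch; note that a virtual function cannot itself be a template, so in practice one would probably provide overloads for the supported image types):

class Sensor
{
public:
    virtual ~Sensor() {}

    // Add the photons to the image, returning the total flux actually added.
    virtual double addPhotons(const PhotonArray& phot_array, ImageView<float> target,
                              UniformDeviate ud) const = 0;
    virtual double addPhotons(const PhotonArray& phot_array, ImageView<double> target,
                              UniformDeviate ud) const = 0;
};

// The physics-based model on this branch would then derive from it:
class Silicon : public Sensor
{
public:
    double addPhotons(const PhotonArray& phot_array, ImageView<float> target,
                      UniformDeviate ud) const;
    double addPhotons(const PhotonArray& phot_array, ImageView<double> target,
                      UniformDeviate ud) const;
};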
I volunteer to do the following work on this branch:

- Move the existing PhotonArray::addTo functionality to the Silicon class as an addPhotons function.
- Make a base Sensor class that just defines the API for this function.
- Implement a default addPhotons step in the current way, where things are just added directly with no sensor physics.

Then Craig and Andy can focus on just the step where the photons get converted into electrons and drift down through the sensor. Ideally implementing both models (slow and precise; faster with approximations).

Sound good?
Hi Mike,
I think that makes more sense since other effects can then be included within the Sensor/Silicon class.
Also, what do you mean by thetas here? Are they the angles that the direction of incidence makes with the x and y axes of the CCDs?
Yes, Andy pointed out that you need to know the angle of incidence of the photons on the sensor to determine where in the silicon it converts to an electron. So there are two angles to define that.
Let's say the two angles with respect to the vertical axis I guess. So thetax = thetay = 0 would be incidence perpendicular to the CCD. This is what I would start with. A slightly more sophisticated model would be to have some opening angle, with the photons uniformly drawn from a disc with that opening angle (i.e. an incident cone shape). We can get even more sophisticated by having that cone know about the pupil plane obscuration and only populate ray paths that actually happen.
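A self-contained sketch of the "uniform cone" option, in case it helps pin down the geometry (the function name is illustrative; u1 and u2 would be uniform random numbers in [0,1), e.g. from a UniformDeviate):

#include <cmath>

// Draw (thetax, thetay) uniformly from a disc of half-opening angle theta_max
// (all angles in radians).  thetax = thetay = 0 is incidence perpendicular to
// the CCD, matching the convention described above.
void drawIncidenceAngles(double theta_max, double u1, double u2,
                         double& thetax, double& thetay)
{
    const double pi = 3.141592653589793;
    double r = theta_max * std::sqrt(u1);   // sqrt(u) gives uniform density over the disc
    double phi = 2. * pi * u2;
    thetax = r * std::cos(phi);
    thetay = r * std::sin(phi);
}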
For now, I suspect we can just ignore this detail. But might as well set up the API properly so if/when we care about this detail, the information will be available to use.
I've sped up the test_sensor.py unittests to ~1 second if __name__ == '__main__'. It's about 70 seconds otherwise, mostly in the routine to calculate B-F slopes. There are still two outstanding items that I'm aware of:

(1) I just put in a rough approximation of the silicon absorption length vs wavelength in Silicon.cpp. Eventually this needs a more accurate look-up table (see the sketch below).
(2) The whole photon shooting could be sped up dramatically if we only calculated the pixel shapes every N photons, with N~1000. But this would require the PhotonArray class to store all of the distorted pixel shapes - on the order of 40-100 doubles for each pixel. Mike - is this feasible? If so, I could work on this.
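Regarding item (1), a generic sketch of the kind of look-up table plus linear interpolation that could replace the rough approximation (the function and argument names are illustrative; the caller would supply the tabulated wavelengths and absorption lengths):

#include <algorithm>
#include <vector>

// Linearly interpolate the absorption length at wavelength `lam` from tables
// of (wavelength, absorption length) pairs, clamping outside the table range.
double absorptionLength(double lam,
                        const std::vector<double>& lamTab,
                        const std::vector<double>& absTab)
{
    if (lam <= lamTab.front()) return absTab.front();
    if (lam >= lamTab.back()) return absTab.back();
    size_t i = std::upper_bound(lamTab.begin(), lamTab.end(), lam) - lamTab.begin();
    double f = (lam - lamTab[i-1]) / (lamTab[i] - lamTab[i-1]);
    return absTab[i-1] + f * (absTab[i] - absTab[i-1]);
}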
The whole photon shooting could be sped up dramatically if we only calculated the pixel shapes every N photons, with N~1000. But this would require the PhotonArray class to store all of the distorted pixel shapes - on the order of 40-100 doubles for each pixel.
The PhotonArray should only store information about the photons. Not the sensor model.
But I think it should be straightforward to do this within the Silicon class. You could give the InsidePixel function a boolean argument updateShapes, which could be set to true only every 1000 photons (or technically 1000 total flux added, since each photon can have flux != 1). Then when updateShapes == False, it can skip the bulk of the code and just jump to the end where it calls PointInside.
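A sketch of the bookkeeping on the calling side, under that scheme (abbreviated pseudo-code; the InsidePixel arguments are elided, and nphotons_per_update is whatever threshold the SiliconModel carries):

// Recompute the pixel shapes only after nphotons_per_update worth of flux has
// accumulated since the last update; otherwise reuse the stored shapes.
double fluxSinceUpdate = nphotons_per_update;  // force an update on the first photon
for (int i=0; i<int(photons.size()); i++) {
    bool updateShapes = (fluxSinceUpdate >= nphotons_per_update);
    if (updateShapes) fluxSinceUpdate = 0.;
    // ... call InsidePixel(x, y, ..., updateShapes) and add the flux to the image ...
    fluxSinceUpdate += photons.getFlux(i);
}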
But I think it should be straightforward to do this within the Silicon class. You could give the InsidePixel function a boolean argument updateShapes, which could be set to true only every 1000 photons (or technically 1000 total flux added, since each photon can have flux != 1). Then when updateShapes == False, it can skip the bulk of the code and just jump to the end where it calls PointInside.
No problem doing it this way. But with this plan, if I instantiate a new SiliconSensor instance from a PhotonArray instance with 1000x1000 pixels, and 32 vertices per edge, I will need to set aside 2GBytes of memory. Is there some memory usage limit I should test for before I create the new instance?
You could put the memory demands in the doc string of the SiliconSensor class in Python so the user is aware of them. But no, you don't need to explicitly check the memory. If it runs out of memory, it will raise an exception and let the user know why it failed. Then they can decide what they want to do about that.
If it becomes onerous to deal with this, we can reevaluate the memory considerations as a separate issue. For now, I think just focus on getting everything working with a robust set of tests. That makes it a lot easier to try out various optimizations later and be confident that you haven't broken anything.
If it becomes onerous to deal with this, we can reevaluate the memory considerations as a separate issue. For now, I think just focus on getting everything working with a robust set of tests. That makes it a lot easier to try out various optimizations later and be confident that you haven't broken anything.
OK, I'll work on doing it this way and we'll see how much it speeds up. Thanks.
Here is a figure Craig sent me by email showing some progress he made during the hack week. The slopes don't match the data perfectly, but qualitatively, everything seems to be working properly. Craig says the exact slopes are things that he knows how to tweak by fiddling with some of the numbers in the input Poisson solver.
Craig Lage at UC Davis has developed a Poisson equation solver that computes the distortion of the pixel edges as the electrons get collected in the CCD pixels. In the DESC Hack day today, we made some progress towards integrating his code (in C++) with the C++ layer of GalSim. We identified that the natural way to include this functionality would be by tweaking the photon-shooting method inside the drawImage routine. I will push the code after some cleanup, but this is not completely done yet.