LSSTDESC / Twinkles

10 years. 6 filters. 1 tiny patch of sky. Thousands of time-variable cosmological distance probes.
MIT License

Understand the flux dropouts in some of the Run 1.1 light curves #297

Closed jchiang87 closed 7 years ago

jchiang87 commented 7 years ago

For a number of objects in our Run 1.1 Level 2 results, we are seeing light curves with anomalous flux dropouts, e.g., objectId=40514 in the u band (light curve: objectid_40514_cropped). The PhoSim centroid files don't show this type of variability for these sources on the input side, so these are artifacts coming out of the Level 2 pipeline. In order to understand this, I extracted some fluxes for the visit indicated by the red dot and for the visit immediately preceding it in our data. Here are some results:

  visit   PsfFlux (DN)  PsfFlux (calib)  Ap_3_0 (DN)  Ap_3_0 (calib)  Ap_4_5 (DN)  Ap_4_5 (calib)  Ap_9_0 (DN)  Ap_9_0 (calib)
 703312    3651.2683    1.30e-05          3048.4722   1.09e-05         3392.5642   1.21e-05         3315.2795   1.18e-05
 704236     227.4048    7.63e-07          2629.8508   8.83e-06         3296.7612   1.11e-05         3802.4734   1.28e-05

The pairs of columns after the visit column are the DN and calibrated flux values for the base_PsfFlux_flux, which we are using in the Level 2 tables, and the fluxes for circular apertures of size 3.0, 4.5, and 9.0 pixels (base_CircularAperture_n_m_flux in the forced_src schema). The cut-out images for these two visits don't look significantly different, though the brightest pixel is different for each, perhaps indicating a displacement of the image on the focal plane.

At our Aug 25 weekly meeting, I showed these results. @drphilmarshall and @rbiswas4 suggested looking at the model PSF size, shape, and centroid position, as well as filter and airmass, suspecting that this could be from differential chromatic refraction (DCR) since the effect is most prominent in the u band.

Some questions:

sethdigel commented 7 years ago

In case it matters, here are some OpSim parameters for the two visits. vskybright was essentially the same for both (~21.4 magnitudes/sq arcsec).

 visit   altitude (deg)   airmass   rawseeing (arcsec)   moonalt (deg)
 703312  48.6             1.33      0.33                 -18.9
 704236  65.5             1.10      0.56                 -0.6
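
As a quick sanity check (my own sketch, not anything from the pipeline), these altitude and airmass values are mutually consistent under the simple plane-parallel approximation X ≈ 1/sin(altitude):

```python
import math

def plane_parallel_airmass(altitude_deg):
    # plane-parallel atmosphere: X = sec(zenith) = 1/sin(altitude)
    return 1.0 / math.sin(math.radians(altitude_deg))

# OpSim values from the table above round-trip to ~0.01 in airmass
approx_703312 = plane_parallel_airmass(48.6)  # ~1.333
approx_704236 = plane_parallel_airmass(65.5)  # ~1.099
```
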
drphilmarshall commented 7 years ago

Hmm: so the problem visit had lower airmass. That would argue against this being a DCR effect... Do these OpSim values agree with the values in the image headers? (They had better!)

jchiang87 commented 7 years ago

Unless I am looking at different files vis-a-vis Run1.1, the airmass has the same values for both (possibly all) visits:

[Run1.1] fitshdr 000318/output/lsst_e_703312_f0_R22_S11_E000.fits.gz | grep AIRMASS
AIRMASS =     1.00015190967402 / Airmass                                        
[Run1.1] fitshdr 000319/output/lsst_e_704236_f0_R22_S11_E000.fits.gz | grep AIRMASS
AIRMASS =     1.00015190967402 / Airmass
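
For what it's worth, the repeated header value is exactly what a zenith angle of 1 degree gives under a plane-parallel sec(z) airmass law (a sketch; phosim's actual formula may differ in detail):

```python
import math

def sec_z_airmass(zenith_deg):
    # plane-parallel approximation: X = sec(z)
    return 1.0 / math.cos(math.radians(zenith_deg))

# zenith = 1 deg reproduces AIRMASS = 1.00015190967402 to within ~1e-6
x = sec_z_airmass(1.0)
```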
sethdigel commented 7 years ago

Interesting. Yes, it looks like all of the eimages from the task Twinkles-phoSimII have the same AIRMASS and ZENITH keyword values in their headers, with ZENITH = 1.

In the more-recent Twinkles-phoSim-352 run, for the two visits in question, the AIRMASS and ZENITH keywords correspond to the table above (with ZENITH = 90 - ALTITUDE).

drphilmarshall commented 7 years ago

A bug! Are we not pointing our simulated telescope correctly? Any idea what is going on at the OpSim PhoSim interface, @danielsf @TomGlanzman?

sethdigel commented 7 years ago

Good question. I looked at the instance catalog for the first visit of Twinkles-phosimII. It has all of the correct operational parameters specified, including 'Opsim_altitude 70.8431374'. I do not see any way to override this specification, and in the log file I do not see any override attempted. Still, the log file reports 'Zenith Angle (degrees): 1.000000'. I think that Tom used exactly the same configuration and instance catalogs for his Twinkles-phoSim-352 runs.

drphilmarshall commented 7 years ago

Hey @johnrpeterson, come and look at this: our instance catalogs contain Opsim_altitude values that look sensible, but our PhoSim logfiles look strange. What does 'Zenith Angle (degrees): 1.000000' mean to you? And how about, in the FITS headers, AIRMASS = 1.00015190967402?

You can see the back story in the thread above: we thought we might be seeing some DCR effects, but are now confused about which airmass we actually received photons from...

rbiswas4 commented 7 years ago

Looking for airmass seems to have inadvertently uncovered what looks like a bigger problem!

@drphilmarshall Not quite the point anymore, but useful for my understanding: I did not follow why having a lower airmass should argue against DCR. My expectation was that the position at which the forced photometry is performed is at something like the mean position of the observations over all times (since they were detected on a coadd). If the airmass is different from some kind of central airmass, then the source would move due to DCR, and therefore the forced photometry might pick up less flux.

About the FITS headers: the instance catalogs already had the stars relevant for the particular telescope pointing, and those would then be simulated correctly. But if the FITS headers were written incorrectly, wouldn't the DM calibration step look at the erroneous header, pick a bunch of incorrect bright stars in a different part of the sky, and do astrometry and photometric calibration based on that? If so, is it surprising that we get decent fluxes anywhere? Maybe this is a question to ask @SimonKrughoff?

drphilmarshall commented 7 years ago

Yes, I think you're right: it's the airmass being different from some average value over all the visits that matters. Perhaps we should plot the residual (relative to the mean) PSF flux against the residual (relative to the mean) airmass, to see if the outliers in flux are also outliers in airmass?
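
That residual-vs-residual diagnostic boils down to a simple correlation; a toy sketch (names and data are illustrative, the real inputs would come from the Level 2 tables):

```python
def residual_correlation(fluxes, airmasses):
    """Pearson correlation of (flux - mean flux) vs (airmass - mean airmass)."""
    n = len(fluxes)
    fbar = sum(fluxes) / n
    abar = sum(airmasses) / n
    df = [f - fbar for f in fluxes]
    da = [a - abar for a in airmasses]
    num = sum(x * y for x, y in zip(df, da))
    den = (sum(x * x for x in df) * sum(y * y for y in da)) ** 0.5
    return num / den

# toy data: flux falling linearly with airmass gives r close to -1
r = residual_correlation([100, 95, 90, 80], [1.0, 1.1, 1.2, 1.4])
```
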

jchiang87 commented 7 years ago

Regarding the value of zenith=1 in the phosim 3.4.2 runs, that can be traced back to the value in the default instance catalog here and the fact that the 3.4.2 phosim.py script will only look for Unrefracted_Altitude in the instance catalog to override the default value. In 3.5.2, phosim.py looks for any string with [Aa]ltitude in it (see the code here). Our instance catalogs give the altitude using the keyword Opsim_altitude, hence the error in the 3.4.2 runs and the difference with respect to the 3.5.2 runs.
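
The parsing difference can be illustrated with a toy version of the keyword lookup (this is not the actual phosim.py code; the 89-degree default is the value phosim falls back to):

```python
import re

PHOSIM_DEFAULT_ALTITUDE = 89.0  # phosim's built-in default altitude

def parse_altitude(catalog_lines, v35=False):
    """v3.4 honors only 'Unrefracted_Altitude'; v3.5 matches any [Aa]ltitude keyword."""
    for line in catalog_lines:
        fields = line.split()
        if len(fields) != 2:
            continue
        key, value = fields
        if v35 and re.search(r'[Aa]ltitude', key):
            return float(value)
        if not v35 and key == 'Unrefracted_Altitude':
            return float(value)
    return PHOSIM_DEFAULT_ALTITUDE

catalog = ['Opsim_altitude 70.8431374']
alt_v34 = parse_altitude(catalog)            # silently falls back to 89.0
alt_v35 = parse_altitude(catalog, v35=True)  # 70.8431374
```
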

Of course, this still leaves the origin of the anomalously low fluxes for some visits unresolved.

jchiang87 commented 7 years ago

btw, the airmass value that's written to the FITS header is computed here.

rbiswas4 commented 7 years ago

Ok @jchiang87. So, is it correct to say that this bug is now understood? Does this mean we should write a test comparing phosim fits headers to obsmetadata parameters to ensure we don't get stung by a similar bug in the future? Or does phosim output a better ascii / log file that should be used for this purpose?

jchiang87 commented 7 years ago

@rbiswas4 I think this bug is understood, and I agree with your suggestion to check that the parameters we think we are feeding to phosim are actually the ones it uses. I've been bitten in the past by phosim's fuzzy parameter handling silently substituting default values, so I am inclined to perform checks that are as rigorous as we can make them, and to do them for every single execution. However, I don't have a good suggestion for implementing such checks. I think we only have the FITS headers and what phosim writes to stdout to work with. Given that, I think we should cross-check against all three: our inputs, the headers, and stdout.
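
A minimal version of such a cross-check might look like the following (keyword names, values, and tolerance are illustrative, not the actual header layout):

```python
def check_params(expected, header, tol=1e-6):
    """Compare parameters we fed to phosim against what its outputs report.

    Returns a list of (name, expected_value, actual_value) mismatches.
    """
    mismatches = []
    for name, value in expected.items():
        actual = header.get(name)
        if actual is None or abs(actual - value) > tol:
            mismatches.append((name, value, actual))
    return mismatches

# illustrative: what we intended vs what a Run1.1-style header reported
expected = {'ALTITUDE': 48.6, 'AIRMASS': 1.33}
header = {'ALTITUDE': 89.0, 'AIRMASS': 1.00015190967402}
bad = check_params(expected, header)  # both parameters flagged
```
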

rbiswas4 commented 7 years ago

@jchiang87 Thanks for the clarification.

In terms of tests, I know phosim does not take all of the Opsim values, and find it hard to keep straight in my head which values are supposed to be used. I think @danielsf and @simonkrughoff know this a lot better and can hopefully suggest what tests we should have.

sethdigel commented 7 years ago

In the PhoSim Reference Document Table 8.1 lists the 'operational data inputs' that can be specified in the instance catalog. I see that the altitude should have been specified with the Unrefracted_Altitude parameter. The Opsim_altitude parameter is (or was) not actually recognized. This is consistent with what Jim pointed out above regarding the phosim.py script for v3.4.2. In the log files (e.g., this one) phosim does not seem to have complained, but instead just ignored the parameter.

Would it be interesting to check whether the u band dropouts that Jim shows above are also present in the Twinkles-phoSim-352 visits? I'd guess that they would be. Anyway, for the original Twinkles visits, with the zenith angle = 1 deg, DCR should have been negligible.

johnrpeterson commented 7 years ago

yes, it looks like you were always simulating at 89 degrees altitude for 3.4.X runs, which is the default.

as you’ve found, 3.5 is a little more forgiving about the naming than 3.4, so it did what you intended there but not earlier. i don’t know why you have “opsim” in the altitude keyword, though, because that was never in the phosim interface. so that is a bug in the catalog creation.

BTW, in the newer versions, we are putting in checks to abort phosim for any unrecognized non-comment command. philosophically we often were not that harsh about parsing commands in the catalogs in the past because the catalog generation took a very long time and there was pressure on phosim just to accept improper inputs to avoid redoing entire runs of catalogs, but now i think we can be more strict.

john

johnrpeterson commented 7 years ago

also, i’ve updated the phosim interface documentation here:

https://bitbucket.org/phosim/phosim_release/wiki/Instance%20Catalog

i noticed the documentation was old and basically had the more wordy commands of v3.4 rather than the streamlined command names of v3.5. note, though, that the old commands still work (if you used them properly), as it is a fully backward-compatible interface.

john

danielsf commented 7 years ago

I can update the CatSim catalog-creation code to accommodate the new parameter interface for PhoSim. Are we comfortable cloning one of the lsst simulations packages and running off of a branch, or will I need to issue a new version once the changes get merged?

jchiang87 commented 7 years ago

Since we aren't yet running the instance catalog generation code in the workflow engine, I think running off a branch from a cloned repo would be fine. I guess for Run3, we were planning on UW (i.e., @rbiswas4 @jbkalmbach et al) to generate the instance catalogs anyways, so in the near term, it's up to you guys.

danielsf commented 7 years ago

@johnrpeterson Just to make sure I am clear: parameters like "Unrefracted altitude" and "Unrefracted right ascension" are the altitude/right ascension you would observe if the Earth had no atmosphere, but all other effects due to the precession, nutation, etc. of the Earth were still included, right?

johnrpeterson commented 7 years ago

yes.

john

jchiang87 commented 7 years ago

The dropouts for objectId=40514 occur for visits where the seeing is at its lowest values, around 0.2 arcsec: 40514_psflux_vs_seeing There appears to be a threshold effect for basically constant sources with moderate fluxes like this one. Looking at the brightest sources in a given band, there is a very strong correlation of the psFlux with seeing. Here are a couple of objects which have the brightest mean u band fluxes in Run1.1: 9525_u_psflux_vs_seeing

11533_u_psflux_vs_seeing

A similar correlation in r and y:

7091_r_psflux_vs_seeing

8086_y_psflux_vs_seeing

Clearly, flux variability from seeing variation is imprinted on all of the Run1.1 light curves, which seems to indicate that base_PsfFlux_flux is not a very good estimator to use.

rbiswas4 commented 7 years ago

Some of this is consistent with our initial hypothesis of the object moving around, but is more likely due to a bug either in the pipeline or in our interpretation of it.

If the apparent position of the object was changing (due to refraction, for example, or something else) relative to the mean position obtained from the coadd, then for measurements with FWHM large compared to the size of this positional change, the effect should be small, with the measured flux only a little lower; but for measurements with FWHM ≲ the change in position, the flux will be much lower. Since 0.2 arcsec is the pixel size, that is the point where the smallest recorded deviations start producing dropouts.
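
This argument can be made quantitative with a Gaussian-PSF toy model (my own sketch, not the pipeline's estimator): for matched-filter PSF photometry forced at a position offset a distance d from the true centroid, the recovered flux fraction is exp(-d²/(4σ²)) with σ = FWHM/2.355.

```python
import math

def recovered_fraction(offset_arcsec, fwhm_arcsec):
    """Fraction of flux recovered by forced Gaussian-PSF photometry
    when the forced position is offset from the true source position."""
    sigma = fwhm_arcsec / 2.355
    return math.exp(-offset_arcsec**2 / (4.0 * sigma**2))

# a fixed 0.2" positional offset barely matters in 0.8" seeing,
# but loses most of the flux in 0.2" seeing
good = recovered_fraction(0.2, 0.8)  # ~0.92
bad = recovered_fraction(0.2, 0.2)   # ~0.25
```
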

However, if it is a bug, I am guessing it is unlikely to be present in other flux measurements like an aperture flux (though the aperture flux will also be affected if the underlying cause is a change in position). Do we have those on pserv?

jchiang87 commented 7 years ago

We don't have the aperture fluxes on pserv since they are not explicitly in the baseline db schemas. However, we can certainly add them. There are 10 circular aperture sizes (in pixels) in the forced source catalogs: 3, 4.5, 6, 9, 12, 17, 25, 35, 50, 70. I'm assuming we don't need all of them (since that would blow up the table volume by roughly a factor of the number of additional flux estimators we choose). Maybe just 4 more: 3, 6, 12, 25? There are also entries that look like they are somehow aperture corrected and which look interesting: base_PsfFlux_apCorr, base_GaussianFlux_apCorr.

SimonKrughoff commented 7 years ago

Sorry to take so long to weigh in on this.

It could very well be an aperture correction that is causing the trend with flux. I believe we correct the PSF flux to the aperture flux. I need to check on that.

Another thing to consider here is that if the seeing is actually 0.2 arcsec, that is undersampled on the LSST ccds (0.2 arcsec/pix). We don't claim to do well in the undersampled regime. I don't know what failures in flux calculation would look like, but I'm not surprised that we are seeing an effect. That being said, I don't think we expect the seeing to be that low very often (a few percent of the time). Where is the seeing value coming from? Is it from OpSim or from the measurement by the DM pipelines?
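
One way to see the undersampling problem: a naive second-moment size estimate on pixelized data picks up the top-hat pixel variance, σ_meas² ≈ σ_true² + p²/12 (a standard textbook sketch; the stack's actual PSF measurement is more sophisticated than this).

```python
import math

PIXEL_SCALE = 0.2  # arcsec/pix, as quoted above for the LSST ccds

def pixelized_fwhm(true_fwhm):
    """Second-moment FWHM including the pixel top-hat variance p^2/12."""
    sigma_true = true_fwhm / 2.355
    sigma_meas = math.sqrt(sigma_true**2 + PIXEL_SCALE**2 / 12.0)
    return 2.355 * sigma_meas

# at 0.2" seeing the pixel inflates the apparent width by ~20%;
# at 0.8" seeing it is only a ~1% effect
w_small = pixelized_fwhm(0.2)  # ~0.242
w_large = pixelized_fwhm(0.8)  # ~0.811
```
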

jchiang87 commented 7 years ago

It's coming from calexp.

SimonKrughoff commented 7 years ago

In that case, I'm pretty surprised we are getting seeing that small as often as we are. Maybe this is another issue with distributions in phosim being too wide.

sethdigel commented 7 years ago

Here is the distribution of rawseeing for the Twinkles Run 1 visits. This is an input parameter in the instance catalogs for the phosim runs - I am not sure how the values were generated. The parameter is said to represent the seeing at 500 nm (i.e., somewhere in the g band) at the zenith.

rawseeing_dist

The spikes look at least marginally significant, but overall the distribution looks similar to the measured values, although without the outlier group near 0.2". The seeing in any particular band is going to be different, of course. It is supposed to improve slowly with increasing wavelength. If I get up to speed with pserv, I think it would be interesting to compare the measured seeing (is it just one value per visit?) with the inputs.
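
For reference, the OpSim-style scaling from rawSeeing (500 nm, zenith) to another wavelength and airmass is commonly written FWHM = rawSeeing × (500/λ)^0.3 × X^0.6; the exponents here are assumptions and should be checked against the OpSim documentation.

```python
def scaled_seeing(raw_seeing, wavelength_nm, airmass):
    """Scale zenith seeing at 500 nm to another wavelength and airmass.

    Uses the common Kolmogorov-like scaling FWHM ~ lambda^-0.3 * X^0.6
    (exponents assumed; check against the OpSim documentation).
    """
    return raw_seeing * (500.0 / wavelength_nm) ** 0.3 * airmass ** 0.6

# u band (~365 nm) at airmass 1.10 comes out ~16% worse than rawSeeing
u = scaled_seeing(0.56, 365.0, 1.10)  # ~0.65
```
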

wmwv commented 7 years ago

I would be suspicious of the reported 0.2" seeing values in the calexp.

I would suggest identifying 3 images with such reported values. Then look at the images and the profiles of stars to see if they are really 0.2".

wmwv commented 7 years ago

@jchiang87 What calibration has been applied to the base_PsfFlux_flux values?

By default those are un-calibrated measurements. To calibrate you would use something like

import lsst.afw.image as afwImage
import lsst.daf.persistence as dafPersist

repo = 'path_of_my_repo'
butler = dafPersist.Butler(repo)

dataId = {'visit': 12345}
cat = butler.get('src', dataId)
calexpMetadata = butler.get("calexp_md", dataId, immediate=True)
calib = afwImage.Calib(calexpMetadata)
mags = calib.getMagnitude(cat['base_PsfFlux_flux'], cat['base_PsfFlux_fluxSigma'])
jchiang87 commented 7 years ago

@wmwv I had thought I was doing this (originally from Simon, and which gives the same results as your code), but looking at the code to load the db tables from the forced source catalogs, I see that this calibration isn't being applied. I was planning to reload these tables anyways to accommodate the new schemas, so I'll fix that and report back once the tables have been refilled.

jchiang87 commented 7 years ago

btw, in the initial comment in this issue, I did apply the calexp calibration as described to the tabulated values for the two visits I compare, so the dropouts are still real, despite the un-calibrated db table values.

rbiswas4 commented 7 years ago

@sethdigel I don't know if it is the rawSeeing or the FWHMeff that is more relevant. What I am getting from the selected visits is the following, and it does not seem like we should get a number of visits at 0.2. Slightly larger, perhaps, if it really is the rawSeeing, but none if it is really more like FWHMeff:

kraken_dists

sethdigel commented 7 years ago

Thanks, @rbiswas4. The Opsim_rawseeing parameter (now just called 'seeing' in PhoSim 3.5) is the seeing-related input parameter for phosim. I see here that the rawSeeing parameter value comes from OpSim, and that OpSim also evaluates FWHMeff and FWHMgeom. The provided descriptions do not say, but I'd guess that they are weighted somehow over the band of the filter used for the visit. The descriptions say that FWHMgeom is 'the actual width at half the maximum brightness', so yes, probably FWHMgeom should be most directly comparable to the measured seeing in the phosim results. And yes, the outliers near 0.2" for u band (corresponding to the flux dropouts) look suspicious.

jchiang87 commented 7 years ago

The psFlux values in the ForcedSource tables are the base_PsfFlux_flux values divided by the zero point from calexp, i.e., fluxMag0, so fluxes in these plots had been calibrated already.
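
In other words, the calibration is just a division by the visit zero point. A sketch of the conversion, using a hypothetical fluxMag0 chosen to roughly reproduce the 703312 PsfFlux row in the table at the top of this issue (real values come from each visit's calexp):

```python
import math

def calibrate(dn, flux_mag0):
    """Convert instrumental counts to calibrated flux (DN / fluxMag0)
    and the corresponding magnitude."""
    flux = dn / flux_mag0
    mag = -2.5 * math.log10(flux)
    return flux, mag

# hypothetical zero point for illustration only
flux, mag = calibrate(3651.2683, 2.81e8)  # flux ~1.30e-05
```
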

jchiang87 commented 7 years ago

flux_vs_seeing_40514_u flux_vs_seeing_11533_u

SimonKrughoff commented 7 years ago

So strange. That's huge. I didn't appreciate it was a factor of six. Does the measured seeing correlate with the input seeing at all? Also, is there anything funny about these particular objects? I.e. can I see a postage stamp?

jchiang87 commented 7 years ago

The brighter one has a non-zero extendedness value in the merged coadd catalog (the fainter one has extendedness zero). Looking at its postage stamp movie, the residuals of its saturation wings look like they are moving around with field rotation angle. I've been busy with getting qserv going today, but I'll post some cutouts for this stuff when I get a chance. I haven't had time to compare the input seeing with the measured value yet. Is there a way to get a list of the stars the DM code has used for determining the PSF/seeing?

SimonKrughoff commented 7 years ago

There are ways of getting at the characterization sources, but I'd have to look into how to do that.

I'm wondering if those sources are saturated. That may play into this.

jchiang87 commented 7 years ago

fwiw, here is the cutout for the u band coadd for object 11533: 11533_coadd and the first ten frames of the light curve movie showing the rotating saturation wings: objectid_11533

jchiang87 commented 7 years ago

seeing_vs_rawseeing_u

jchiang87 commented 7 years ago

Here are some plots of the psf profile for different visits depicted in the preceding comment. In each frame, I'm plotting the normalized counts of the pixels in a 3x3 arcmin cutout centered on the object position versus angular separation of each pixel center from that object position. The normalization factor is simply the sum of counts in that cutout. The 30 highest flux point sources (extendedness=0 in the Object catalog) in that frame are plotted and identified in the legend inset by objectId. (Some are omitted because of nan pixel values in the cutout.)

The visits that have measured seeing values correlating well with the input rawSeeing tend to look like this: psf_profile_v505374-fu (The vertical dashed line is the measured seeing.) Almost all of the ones with anomalously small seeing have a number of point sources with lots of negative pixels around a bright central pixel or two: psf_profile_v704236-fu However, there is at least one exception: psf_profile_v488756-fu

Note that these profiles are extracted from the warped tempExp images, and all of the visits I looked at are in the u band.
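
The profile extraction described above amounts to something like the following simplified sketch, here run on a synthetic Gaussian "star" instead of a warped tempExp cutout:

```python
import math

def radial_profile(cutout, x0, y0, pixel_scale=0.2):
    """Normalized counts vs angular separation (arcsec) from (x0, y0).

    Normalization is the sum of counts in the cutout, as in the plots above.
    """
    total = sum(sum(row) for row in cutout)
    points = []
    for j, row in enumerate(cutout):
        for i, value in enumerate(row):
            r = pixel_scale * math.hypot(i - x0, j - y0)
            points.append((r, value / total))
    return sorted(points)

# synthetic Gaussian star centered in an 11x11 cutout
sigma_pix = 1.5
cutout = [[math.exp(-((i - 5)**2 + (j - 5)**2) / (2 * sigma_pix**2))
           for i in range(11)] for j in range(11)]
profile = radial_profile(cutout, 5, 5)
```
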

jchiang87 commented 7 years ago

Using v12_0 of the Stack, I was able to reproduce the anomalously small seeing estimate of 0.20 arcsec (from calexp) for visit 704236. I noticed that cosmic ray repair is turned off by default in processEimage.py for some reason. I enabled it with

--config charImage.repair.doCosmicRay=True

and reran processEimage.py on that visit. The seeing is now reported to be 0.78 arcsec, compared with the value from kraken_1042 of rawSeeing=0.56; this is largely consistent with the main trend line in https://github.com/DarkEnergyScienceCollaboration/Twinkles/issues/297#issuecomment-243481053. So, we should either disable cosmic ray generation in the phosim runs or enable the cosmic ray repair in processEimage.py. I note that there is a relevant, though seemingly incorrect, remark in the phosim physics override file that we used regarding cosmic rays.

IMO, this issue exposes two egregious pilot errors that should have been anticipated.
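
To see why unrepaired cosmic rays drag the seeing estimate down: a CR hit is essentially a delta function, far sharper than any star, so if CR hits are mistaken for PSF stars the fitted width plummets. A 1-D toy repair step (nothing like the stack's actual CR algorithm, which compares pixels against the local PSF profile in 2-D) just flags pixels far above their neighbors and interpolates:

```python
def repair_cosmic_rays(pixels, threshold=5.0):
    """Replace isolated spikes (pixel >> both neighbors) with the neighbor mean."""
    repaired = list(pixels)
    for i in range(1, len(pixels) - 1):
        neighbors = 0.5 * (pixels[i - 1] + pixels[i + 1])
        if pixels[i] > threshold * max(neighbors, 1.0):
            repaired[i] = neighbors
    return repaired

row = [10, 12, 11, 900, 10, 13, 11]  # a single-pixel cosmic-ray hit
clean = repair_cosmic_rays(row)      # the 900 is replaced by 10.5
```
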

jchiang87 commented 7 years ago

Here's a plot of seeing computed with the cosmic-ray repair turned on vs the seeing with it turned off for the Run1.1 data. run1 1_seeing_comparison Unfortunately, with the cosmic ray repair turned on, processEimage.py ended in error for 33 visits out of the 1125 with the message:

lsst::pex::exceptions::LengthError: 'Too many CR pixels (max 10000)'
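
For the 33 failing visits, one pragmatic interim approach is to wrap the per-visit processing in a guard so a single failure doesn't kill the batch, collecting the failed visit IDs for follow-up. A generic sketch (process_visit here stands in for the actual processEimage.py invocation):

```python
def run_visits(visits, process_visit):
    """Run process_visit for each visit, collecting failures instead of aborting."""
    failed = {}
    for visit in visits:
        try:
            process_visit(visit)
        except Exception as exc:  # e.g. LengthError: 'Too many CR pixels'
            failed[visit] = str(exc)
    return failed

def fake_process(visit):
    # stand-in for the real per-visit processing
    if visit == 704236:
        raise RuntimeError("Too many CR pixels (max 10000)")

failures = run_visits([703312, 704236], fake_process)  # {704236: ...}
```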