roman-corgi / corgidrp

Data Reduction Pipeline for the Roman Coronagraph Instrument
BSD 3-Clause "New" or "Revised" License

Develop Charge Transfer Inefficiency measurement and correction function(s) #20

Closed: maxwellmb closed this issue 3 months ago

maxwellmb commented 8 months ago

Charge traps can capture electrons as they are transferred to the CCD read-out register, resulting in vertical smearing. An algorithm to calibrate this effect is described in this paper, which Rob Zellem's calibration paper references.

maxwellmb commented 8 months ago

@nasavbailey is this the new desmearing feature that is being worked on for II&T data that you had previously mentioned?

nasavbailey commented 8 months ago

CTI is different from desmear (which will be in the next release). Desmear corrects for the fact that we have no shutter on EXCAM, so light continues to fall on the array while the image is clocked out (0.264 sec for SCI-sized images).
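
For intuition, here is a toy illustration of the smear effect described above. This is not the pipeline's desmear implementation; it assumes a simplified model in which charge is clocked row by row toward row 0 during readout, so each packet picks up an extra `t_row` of exposure from every scene row it crosses on the way out. Inverting that model row by row gives a simple desmear:

```python
import numpy as np

def desmear_toy(observed, exptime, readout_time=0.264):
    """Invert the toy smear model row by row (row 0 is assumed to be read out first)."""
    nrows, ncols = observed.shape
    t_row = readout_time / nrows               # time a packet spends on each row during clock-out
    scene = np.zeros_like(observed, dtype=float)
    running_sum = np.zeros(ncols)              # sum of scene rows already solved (rows below row i)
    for i in range(nrows):
        # Toy model: observed[i] = exptime * scene[i] + t_row * sum_{k < i} scene[k]
        scene[i] = (observed[i] - t_row * running_sum) / exptime
        running_sum += scene[i]
    return scene
```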

Note: Some people talk about Charge Transfer Efficiency (CTE) instead of CTI. CTI = 1 - CTE.

Kevin Ludwick authored a module in the II&T data pipeline for "trap pumping" data analysis; it produces trap properties, following the Bush+2021 paper mentioned above. So the measurement part is done & will just need to be ported over.

As far as correction: I would say CTI correction should be attacked in two stages: first, a basic module that meets the systematic error allocation we've set (2% flux loss due to CTI) by applying a global correction. Later, we may do R&D for a more sophisticated approach that fully utilizes the trap parameter information and produces a trap-by-trap correction in both analog and photon-counted images. That would certainly be an interesting research project, but preliminary work (see below) suggests we don't need a sophisticated approach to meet our error allocation.

Bijan Nemati & Guillermo Gonzales did a preliminary analysis a couple years ago (posted on IPAC) suggesting that a simple global model can achieve adequate correction during CGI's nominal lifetime (launch + 21mo), because our detectors are fairly rad-hard. This should be revisited for both bright and photon-counted sources, though, to decide on the approach for the 1st delivery.

HST CTI methods are probably also a good resource (link).

maxwellmb commented 8 months ago

Other people have thought about this too. Here's an existing python package: https://pyautocti.readthedocs.io/en/latest/index.html

maxwellmb commented 6 months ago

v1 of this code can be a no-op, but we should have the function in place.

maxwellmb commented 6 months ago

The function sets the CTI_CORR header keyword to True only if CTI is actually corrected.

maxwellmb commented 3 months ago

Removing the R3.0.1 milestone for this, as (per the software delivery schedule) this can just be no-op code.

nasavbailey commented 3 months ago

No-op yes, but I think it needs to have the correct input and output types, and it needs to be called by the walker, to demonstrate that the walker performs each step required?

I think the inputs are the data frame and the trap parameters and the output is the "corrected" data frame with a history entry. Since we don't know how to do the correction, the output data frame is the same as the input data frame. Anything I'm missing?
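
For concreteness, a minimal sketch of what that no-op step could look like. The names here (`correct_cti`, `Dataset.copy()`, `frame.ext_hdr`, the calibration argument) are assumptions about the corgidrp interface for illustration, not the actual API:

```python
def correct_cti(input_dataset, pump_trap_cal=None):
    """Placeholder CTI correction: returns the data unchanged but records the step.

    input_dataset  -- assumed corgidrp Dataset of image frames
    pump_trap_cal  -- assumed trap-pump calibration product (unused for now)
    """
    output_dataset = input_dataset.copy()      # assumed Dataset.copy() method
    for frame in output_dataset:
        # Per the earlier comment, CTI_CORR is only set True once a real
        # correction is applied; the no-op leaves it False.
        frame.ext_hdr['CTI_CORR'] = False
        frame.ext_hdr['HISTORY'] = "CTI correction step ran as a no-op; no correction applied"
    return output_dataset
```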

maxwellmb commented 3 months ago

Alright, so it sounds like what you want is for us to develop some kind of PumpTrapCalibration calibration filetype that we would pass into this function, so that we can test that interface, but at the moment it doesn't need to do anything other than return the input dataset. That shouldn't really be a problem.

Reading the FDD, it looks like each pixel that is identified by the pump trapping algorithm will have an x,y position, capture cross sections, release time constants and species. So we can implement it as an [n, 5] array. By its nature I think we'll have to allow its file size to vary, right?
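
As a sketch of that idea (the column order, extension name, filename, and the numbers themselves are illustrative assumptions), the [n, 5] trap table could be written as a FITS image extension, which naturally lets n vary from file to file:

```python
import numpy as np
from astropy.io import fits

# Dummy values for two traps: x, y, capture cross section, release time
# constant, and a numeric species code (one row per detected trap).
trap_params = np.array([
    [512.0, 300.0, 1.2e-15, 3.4e-3, 1.0],
    [128.0, 775.0, 0.8e-15, 9.1e-4, 2.0],
])

hdul = fits.HDUList([fits.PrimaryHDU(),
                     fits.ImageHDU(data=trap_params, name='TRAP_PARAMS')])
hdul.writeto('pump_trap_cal.fits', overwrite=True)

# Reading it back gives an (n, 5) array whose length is set by the file itself,
# so the calibration can hold however many traps were found.
loaded = fits.getdata('pump_trap_cal.fits', extname='TRAP_PARAMS')
```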

Who is developing the algorithm to actually make this calibration file? I can't seem to track that down in the FDD or software delivery schedule.

nasavbailey commented 3 months ago

@kjl0025 wrote the trap analysis code, so he could speak to the format of the outputs. As far as I can tell it only lives in https://github.com/roman-corgi/cgi_iit_drp/tree/main/Calibration_NTR/cal/tpumpanalysis. You're right, I don't see it listed on the delivery schedule as something that needs to be ported; looks like that was an omission.

kjl0025 commented 3 months ago

My preference for an input would be a dictionary because that is the current form of the main output of the trap-pump analysis code. I'm referring to the output called trap_dict from the tpump_analysis() function in tpump_analysis.py.

maxwellmb commented 3 months ago

In order to work with the current Calibration Database architecture I think we'd have to reformat into a fits file, which is why I was trying to look at it like an array. But if the data is just as I described it above (5 numerical parameters per trap), it shouldn't be hard for us to reformat the existing dictionary, right?

kjl0025 commented 3 months ago

Sure, we could recode the string parts of that dictionary specifying the sub-electrode location for a given pixel to a number code. An example of such a string is 'RHSel2', which denotes the right-hand side of electrode 2. The side can be 'RHS', 'LHS', or 'CEN' for electrodes 1 through 4, so it can be converted to a number code (1: 'LHS', 2: 'CEN', 3: 'RHS', so that 'RHSel2' would be 32; a short sketch of this recoding is below).

There are a few extra things returned in trap_dict which may be useful for doing CTI correction with arCTIc, so I would say the input could be an [n, 10] array for the thorough version of the correction, with columns:

- row
- col
- sub-electrode location (number code)
- index of the trap within that sub-electrode location (there can be more than one)
- capture time constant
- max amplitude of the dipole
- energy level of the hole
- cross section for holes
- R^2 value of the dipole fit (to judge whether correcting for that trap is worthwhile)
- release time constant

That could be adjusted as needed if the thorough version of the CTI correction function gets written.
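
A small sketch of that recoding scheme (the function names are made up for illustration):

```python
SIDE_CODES = {'LHS': 1, 'CEN': 2, 'RHS': 3}

def encode_subelectrode(loc):
    """'RHSel2' -> 32: tens digit is the side code, ones digit the electrode number (1-4)."""
    side, electrode = loc[:3], int(loc[-1])
    return SIDE_CODES[side] * 10 + electrode

def decode_subelectrode(code):
    """Invert the encoding: 32 -> 'RHSel2'."""
    sides = {v: k for k, v in SIDE_CODES.items()}
    return f"{sides[code // 10]}el{code % 10}"

assert encode_subelectrode('RHSel2') == 32
assert decode_subelectrode(32) == 'RHSel2'
```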

maxwellmb commented 3 months ago

FYI, got a good start on this!

maxwellmb commented 3 months ago

If anyone is curious, you can see the progress on the cti_correction branch. There are some architecture choices that we'll have to make, but maybe that's a discussion for the next call. At least a discussion for after SPIE.

maxwellmb commented 3 months ago

Picking up on @nasavbailey's comments from Issue #68. Since you're working on making this a "should", does that mean we should de-prioritize it for now/September?

nasavbailey commented 3 months ago

Yes, assuming our shall->should change request is approved (might take a couple weeks), CTI correction could be delayed if there's not enough time to get it into R3.0.1. Sorry, I know that's a change from what we said before. Marie & I got better advice from people who know more about JPL software delivery processes and learned that no-ops are a no-no for "shalls" and that the way to properly track not-required features is to call them "shoulds." No-ops are OK for "shoulds," so if there's time to write the interface for CTI correction that just reads in trap params for R3.0.1, that's fine and dandy.

But porting over the code to do the trap pumping data analysis to produce the parameters DOES need to stay a "shall" and does need to be in R3.0.1. That's because that analysis code was already written and delivered in the JPL repo and the requirement was already checked off. So we have to preserve that functionality here by porting that code into DRP. Not sure if that ought to be tracked as a separate issue?

maxwellmb commented 3 months ago

Closing out this issue, as it's been split into Issue #111 and Issue #112.