1. Key analysis goals - extraction of reduced data:
A. How much have we reduced the background? Background level(s) vs. motor position --> a "heatmap" of background level vs. motor position.
EPICS parameters for motors.
CSPAD photon scores using psana's dark-subtraction + median-subtraction routines. For each 2x1, measure six values: photon average, median, std. dev., NumPix > Thresh0, NumPix > Thresh1, NumPix > Thresh2.
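A minimal sketch of the six per-2x1 scores, assuming the frame is already dark- and median-subtracted and shaped (n_2x1, 185, 388); the default thresholds are placeholders for the experiment's Thresh0..Thresh2:

```python
import numpy as np

def asic_scores(frame, thresholds=(10, 50, 100)):
    """Per-2x1 photon scores on a dark-subtracted CSPAD frame.

    frame: array of shape (n_2x1, 185, 388). thresholds (ADU) are
    placeholder values for Thresh0..Thresh2. Returns one dict of six
    numbers per 2x1.
    """
    scores = []
    for panel in frame:
        scores.append({
            "mean": float(panel.mean()),
            "median": float(np.median(panel)),
            "std": float(panel.std()),
            "npix_gt_t0": int((panel > thresholds[0]).sum()),
            "npix_gt_t1": int((panel > thresholds[1]).sum()),
            "npix_gt_t2": int((panel > thresholds[2]).sum()),
        })
    return scores
```

These six numbers per 2x1 are small enough to log every shot against the EPICS motor readback for the heatmap in A.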
B1. Noise level and whether it suffices for SPI hit-finding (perhaps studied in offline mode):
Convert patterns into average photon pattern so we can relate to SPI scattering strength.
Loss of resolution of an equivalent powder pattern given the measured background. Measuring the fluctuation of the background pattern (e.g. its standard deviation) tells us how well we can detect signal photons that appear over the background.
Which "regions" on the detector can we use for hit-finding?
Simulated hit-finding -- what is the smallest particle that we can detect (with xxx confidence) given the current measured background?
Detection of spurious outlier patterns.
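One way to frame the "smallest detectable particle" question above, under simplifying assumptions (independent pixel fluctuations, signal spread over a known detector region, a 5-sigma placeholder for the unstated confidence level):

```python
import numpy as np

def min_detectable_photons(bg_std_per_pixel, n_pixels, n_sigma=5.0):
    """Smallest total photon signal, spread over n_pixels, that clears
    the background fluctuation at n_sigma confidence.

    Assumes independent per-pixel fluctuations, so the std of the
    summed background grows as sqrt(n_pixels). n_sigma is a placeholder
    for whatever confidence level we settle on.
    """
    summed_std = bg_std_per_pixel * np.sqrt(n_pixels)
    return n_sigma * summed_std
```

Mapping this photon count back to particle size then needs the simulated scattering strength from the average-photon-pattern conversion mentioned above.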
B2. What are the key photon features in the pattern? Keep a persistent average pattern (periodic writes to a single file?):
ADUs that are below Thresh0.
ADUs between Thresh0 and Thresh1.
ADUs between Thresh1 and Thresh2.
ADUs above Thresh2.
Strong background/detector artifacts.
Identify dead pixels on the fly. py-psana's dark calibration already does this to some extent.
For the future: adaptive thresholding?
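The B2 accumulator could look roughly like this sketch: per-pixel counts in the four ADU bands plus a persistent average, flushed periodically to a single file. The threshold values, write period, and filename are all placeholders:

```python
import numpy as np

class PhotonFeatureAccumulator:
    """Per-pixel ADU-band counts plus a persistent average pattern,
    periodically written to one .npz file. Thresh0..Thresh2, the write
    period, and the path are illustrative placeholders.
    """
    def __init__(self, shape, thresholds=(10, 50, 100),
                 write_every=1000, path="persistent_avg.npz"):
        self.t0, self.t1, self.t2 = thresholds
        self.write_every = write_every
        self.path = path
        self.n = 0
        self.sum = np.zeros(shape)
        # four bands: <T0, [T0,T1), [T1,T2), >=T2
        self.band_counts = np.zeros((4,) + shape, dtype=np.int64)

    def update(self, frame):
        self.n += 1
        self.sum += frame
        self.band_counts[0] += frame < self.t0
        self.band_counts[1] += (frame >= self.t0) & (frame < self.t1)
        self.band_counts[2] += (frame >= self.t1) & (frame < self.t2)
        self.band_counts[3] += frame >= self.t2
        if self.n % self.write_every == 0:
            np.savez(self.path, n=self.n, avg=self.sum / self.n,
                     band_counts=self.band_counts)

    @property
    def average(self):
        return self.sum / max(self.n, 1)
```

Persistent strong-pixel structure in the upper bands would flag the background/detector artifacts and dead pixels mentioned above.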
C. Setting up a framework that will become useful for hit-finding in June.
How to integrate structure-aware hit-finding?
How to integrate simulation modules into data stream (e.g. sizing, sphericity etc).
We can discuss this in detail over time.
2. Py-psana framework and backbone-code for accessing DAQ datastream.
A. Should the backbone code be:
like Cheetah, where a main thread passes events to worker threads, or
have independent worker threads that separately access the datastream, each writing to its own time-tagged log file?
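The second option might be sketched like this: each worker owns its view of the event stream (in production, a psana shared-memory datasource; here, any iterable standing in for it) and appends to its own time-tagged log file. All names and the log format are illustrative:

```python
import time

def worker(worker_id, events, log_dir="."):
    """Self-contained worker: iterates its own view of the event
    stream and appends to its own time-tagged log file. A None event
    is a sentinel marking end of stream.
    """
    tag = time.strftime("%Y%m%d-%H%M%S")
    path = f"{log_dir}/worker{worker_id}-{tag}.log"
    with open(path, "a") as log:
        for evt in events:
            if evt is None:   # end-of-stream sentinel
                break
            # ... per-event data reduction would happen here ...
            log.write(f"{time.time():.6f} worker {worker_id} "
                      f"event {evt}\n")
    return path
```

Unlike the Cheetah model there is no central dispatcher to contend for, at the cost of each worker touching the stream independently.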
B. How similar should the online and offline analysis codes be?
Is the psana library (and its dependencies) portable across different machines?
C. Paired programming model:
"one person" works on the data-reduction code, "another person" works on the data analysis. This way we don't have to re-access XTCs just to rerun/debug analyses on the reduced data.
3. Cheetah’s role:
A. Output pixel-wise histogramming.
Pixel-wise average and standard deviation.
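For the running pixel-wise average and standard deviation, a single-pass sketch using Welford's algorithm, so a long run never needs all frames in memory (not Cheetah's actual implementation, just the standard technique):

```python
import numpy as np

class RunningPixelStats:
    """Pixel-wise running mean and standard deviation via Welford's
    single-pass algorithm."""
    def __init__(self, shape):
        self.n = 0
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)   # sum of squared deviations

    def update(self, frame):
        self.n += 1
        delta = frame - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (frame - self.mean)

    @property
    def std(self):
        # population standard deviation per pixel
        return np.sqrt(self.m2 / self.n) if self.n else self.m2
```
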
B. Convert statistical outlier patterns to CXIDB format.
For the March-2015 beamtime this would be useful for identifying streaks in the background (correlated with motor positions?) that might be missed if we only stared at histograms.
C. Photon-based hitfinding using running background subtraction?
Help find surprises in the background (e.g. streaking, or that we hit a fixed target!)
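A toy version of C, using an exponential moving average as the running background (a stand-in, not Cheetah's actual scheme); alpha, the photon ADU scale, and the hit cutoff are all illustrative:

```python
import numpy as np

def make_hitfinder(alpha=0.01, photon_adu=50.0, min_photons=20):
    """Photon-based hit flag against a slowly updated running
    background. Only non-hit frames update the background estimate,
    so hits don't contaminate it.
    """
    state = {"bg": None}

    def is_hit(frame):
        bg = state["bg"]
        if bg is None:                 # first frame seeds the background
            state["bg"] = frame.astype(float).copy()
            return False
        # count pixels more than half a photon above background
        photons = int(((frame - bg) > 0.5 * photon_adu).sum())
        hit = bool(photons >= min_photons)
        if not hit:
            state["bg"] = (1 - alpha) * bg + alpha * frame
        return hit

    return is_hit
```

A sudden burst of flagged frames at fixed motor positions would be exactly the kind of surprise (streaking, hitting a fixed target) noted above.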
4. Feature request for AMI:
A. Histograms of ADU counts for certain pixels vs EPICS parameter. The cxiopr could use this for fast feedback for optimal motor positioning.
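The requested display reduces to a 2-D histogram: one column per motor position, one row per ADU bin. A sketch of the accumulation (not AMI code; bin edges are whatever the operator chooses):

```python
import numpy as np

def adu_vs_epics_hist(adu_values, epics_values, adu_bins, epics_bins):
    """2-D histogram of monitored-pixel ADU values against an EPICS
    motor readback: rows index motor-position bins, columns index ADU
    bins. adu_values[i] and epics_values[i] come from the same event.
    """
    hist, _, _ = np.histogram2d(epics_values, adu_values,
                                bins=[epics_bins, adu_bins])
    return hist
```
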
5. Offline (or quick-offline) analyses:
A. Pixel-by-pixel pedestal + gain calibration off Cheetah output?
B. Probability mass in one-photon peak vs that in zero-photon peak.
C. False positive hit rates given pixel-by-pixel histogram statistics and simulated scattering pattern.
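B and C above can be read off one pixel's ADU histogram. A sketch under simplifying assumptions: the one-photon peak sits near a known photon_adu, the peaks are separated at half that value, and a false positive is zero-photon mass above the counting threshold. photon_adu and threshold are placeholder calibration values:

```python
import numpy as np

def peak_masses_and_fp_rate(counts, bin_centers, photon_adu=50.0,
                            threshold=20.0):
    """From one pixel's ADU histogram, return (zero-photon mass,
    one-photon mass, false-positive rate). The zero/one-photon peaks
    are split at photon_adu/2; the false-positive rate is the
    zero-photon mass that still exceeds `threshold`.
    """
    counts = np.asarray(counts, dtype=float)
    bin_centers = np.asarray(bin_centers)
    total = counts.sum()
    half = 0.5 * photon_adu
    zero = counts[bin_centers < half].sum() / total
    one = counts[(bin_centers >= half)
                 & (bin_centers < 3 * half)].sum() / total
    fp = counts[(bin_centers >= threshold)
                & (bin_centers < half)].sum() / total
    return zero, one, fp
```

Combining the per-pixel fp rate with a simulated scattering pattern's illuminated-pixel mask would then give the expected spurious-hit rate in C.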
6. Additional questions:
A. How many processing streams should we have during online analysis?
B. How are dark calibrations computed in py-psana?
C. Matching simulated diffraction patterns? (e.g. the diffraction pattern of a fabricated shape)
D. Test data streams for us to play with before March beamtime.
E. Who will be going to the beamtime?
F. How should this document be shared with the Initiative?
Meeting Agenda for Single Particle Imaging