slactjohnson opened 1 year ago
I think this is pretty low-hanging fruit, so I would say this is the first issue we would want to tackle.
dwell time is the amount of time to pause at each stage position to collect data ... It would also be nice to ignore the first spectrum at each stage position, in case it was partially collected while the stage was moving or at the previous stage position.
This is actually handled as dwell time in the existing codebase, where it serves two purposes.
It would be great to nail down the relationship between dwell time, scan rate, and exposure time, and to set them all with one or two parameters.
We only have a single relevant parameter in the GUI at the moment - dwell time:
If we need to read the spectrometer exposure time and scan rate from the IOC, we will need to expand our ophyd device API further.
@slactjohnson - my knowledge of the above parameters is minimal. Can you confirm the relationship of the parameters as in the original request? And, in your opinion, should this be a thing that the scan GUI is in control of, or should we be adding this into the individual spectrometer device definitions in pcdsdevices?
The Qmini can either be soft triggered or HW triggered. Either way, the spectrometer waits for the trigger, and then once it receives one it will start the exposure, which runs for the exposure time. The IOC polls the spectrometer and waits for the spectrometer to report that it has an available spectrum. Once the spectrometer reports that it has completed the acquisition, the IOC then retrieves the spectrum from the device.
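The trigger-then-poll sequence described above can be sketched in plain Python. This is a toy simulation to illustrate the flow, not the actual IOC implementation; all names and timings here are hypothetical:

```python
import time

class FakeQmini:
    """Toy stand-in for the spectrometer: a trigger starts an exposure,
    and a spectrum becomes available once the exposure time has elapsed."""

    def __init__(self, exposure_s=0.01):
        self.exposure_s = exposure_s
        self._done_at = None

    def trigger(self):
        # Soft or HW trigger: start the exposure clock.
        self._done_at = time.monotonic() + self.exposure_s

    def spectrum_available(self):
        return self._done_at is not None and time.monotonic() >= self._done_at

    def read_spectrum(self):
        # In the real IOC this would retrieve the data from the device.
        return [0.0] * 2048

def acquire_one(spec, poll_s=0.001, timeout_s=1.0):
    """Mimic the IOC: trigger, poll until the device reports an
    available spectrum, then retrieve it."""
    spec.trigger()
    deadline = time.monotonic() + timeout_s
    while not spec.spectrum_available():
        if time.monotonic() > deadline:
            raise TimeoutError("spectrometer never reported a spectrum")
        time.sleep(poll_s)
    return spec.read_spectrum()
```

The key point is that the acquisition time seen by the caller is the exposure time plus polling overhead, which is why the scan rate has to be at least as long as the exposure.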
Since the Qmini could be using a TPR, EVR, or soft triggering, I think doing this programmatically in the dispersion scan code could be a bit tricky. What do you think @klauer? @brittonm?
I would be happy to add extensive support for detector control/initial configuration, but I would need a lot more in the way of guidance and information to implement this to your satisfaction. (Again, as an engineer making a simple GUI, I clearly don't have sufficient background here...)
I'd also like to be sure we aren't painting ourselves into a corner by not looking beyond the devices and configurations we're currently designing this for.
I guess what I was getting at is: what is the cost-benefit tradeoff? "Tricky" means potentially complicated to implement, and on top of that, the support would likely be quite specific to the Qmini. The current implementation is device-agnostic, although a bit empirical.
What do you think, @brittonm? It's your charge code. :)
I may not fully understand and am not sure what the best approach is here, but it may help to more clearly state our goals:
As it stands, I think the spectrometer collects spectra continuously every `scan rate` amount of time, and each spectrum is collected over a period of `exposure` amount of time. To collect data quickly, we were asking that `scan rate` be adjusted to slightly longer than `exposure` (e.g. 1.1x). This minimizes the downtime of the spectrometer but may consume a lot of resources if it always runs that way (e.g. a 10 ms `exposure` with an 11 ms `scan rate` seems demanding), so it may make sense to switch `scan rate` back to something less frequent afterwards (e.g. 100 ms `scan rate`).

For example, right now we set `scan rate` manually and the minimum value is 100 ms. Sometimes we want to take ~10000 spectra with ~1 ms `exposure`, which would take 10 s if only limited by the spectrometer exposure. Since we are limited by the 100 ms `scan rate`, it instead takes >15 minutes.

To ensure that the spectrum is not collected while the stage is in motion, it would be nice to ignore the first spectrum and take the second one after the stage finishes moving. Based on the above, it sounds like `dwell time` is trying to handle this? If I understand correctly, we just need `dwell time` = `scan rate`.
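The timing arithmetic in that example works out as follows (a minimal sketch; the function name and parameters are illustrative, not part of the existing codebase):

```python
def total_scan_time(n_spectra, exposure_s, scan_rate_s):
    """Total collection time when the spectrometer starts one new
    spectrum every scan_rate_s, which cannot be shorter than the
    exposure itself."""
    return n_spectra * max(exposure_s, scan_rate_s)

# 10000 spectra at 1 ms exposure, limited only by the exposure: 10 s
exposure_limited = total_scan_time(10_000, 0.001, 0.001)

# The same spectra throttled by the 100 ms minimum scan rate: 1000 s,
# i.e. the ">15 minutes" quoted above
rate_limited = total_scan_time(10_000, 0.001, 0.100)
```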
Hmm. I see what you mean now. Perhaps we can achieve what you want in a different way. The Qmini does provide a way to average multiple spectra into a single spectrum. This is done with the `<BASEPV>:SET_AVG_CNT` PV. I've never used this, but perhaps it will help? This PV has yet to be added to the Python class, sadly.

EDIT - Clarifying a bit more: with an exposure of 1 ms, I think you could set `SET_AVG_CNT` to e.g. ~90 (I'm assuming there's a little overhead to averaging), with a scan rate of 100 ms, and effectively get ~900 spectra/second.
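The throughput estimate above works out like this (a sketch only; the ~90 figure is the commenter's guess at averaging overhead, and the function name is illustrative):

```python
def effective_spectra_per_second(avg_count, scan_rate_s):
    """Each IOC read at scan_rate_s returns one averaged spectrum
    built from avg_count raw spectra, so the raw-spectrum rate is
    avg_count divided by the scan period."""
    return avg_count / scan_rate_s

# SET_AVG_CNT ~ 90 with a 100 ms scan rate -> ~900 raw spectra/second
rate = effective_spectra_per_second(90, 0.100)
```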
Here's the Qmini protocol manual if you want to see their (lack of) documentation on various features, including averaging. NioLink Protocol Manual.pdf
We actually prefer averaging scans rather than spectra, because the fluctuations are correlated on fast and slow timescales (i.e. we get much better signal-to-noise by scanning twice, taking one spectrum at every stage position each time and averaging them, than by taking two spectra in a row at every stage position). The lack of documentation is another reason to avoid the Qmini averaging, I think. It would be better to minimize the delay after moving the stage if possible.
Based on our conversation today, a first iteration of this would be to use a spectrum-on-demand collection scheme. This would also potentially resolve issue #23.
Expected Behavior
From Mat: "It would be great to nail down the relationship between dwell time, scan rate, and exposure time, and to set them all with one or two parameters. I think: exposure time is the amount of time (or laser shots, equivalently) over which a single spectrum is collected, scan rate is the time between the start of collection for each spectrum, and dwell time is the amount of time to pause at each stage position to collect data. If that is correct, then naively, scan rate = 1.1 × exposure time and dwell time = (number of averages) × scan rate."
"It would also be nice to ignore the first spectrum at each stage position, in case it was partially collected while the stage was moving or at the previous stage position."
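Mat's proposed relationships can be written down directly. This is a sketch under his stated naive assumption; the 1.1 margin and the parameter names come from the quote above and are not yet anything in the codebase:

```python
def derived_timing(exposure_s, n_averages, margin=1.1):
    """Naive relationships from the request:
    scan_rate = margin * exposure  (spectrometer barely idle)
    dwell     = n_averages * scan_rate  (time to sit at each stage position)
    """
    scan_rate_s = margin * exposure_s
    dwell_s = n_averages * scan_rate_s
    return scan_rate_s, dwell_s

# e.g. a 10 ms exposure, averaging 100 spectra per stage position
scan_rate, dwell = derived_timing(0.010, 100)
```

With these two inputs (exposure time and number of averages), both the scan rate and the dwell time fall out, which is the "set them all with one or two parameters" goal.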
Context
Mat's been using this code for a while, and sent me his thoughts on things to improve.