I can provide some more elaboration on this, as I have discussed it a bit with @aortega255. SNIRF was originally designed as a file format for storing acquired data and associated supporting data like the probe information, and I haven't yet seen a case where SNIRF would need to store what is nominally hardware-specific information about how the measurements were multiplexed (in time and frequency) in order to acquire all of the measurements. If one wanted to preserve the exact acquisition times, for instance in the case of time multiplexing, one can use different data blocks to record the data acquired in the different temporally multiplexed states, with each data block having its own time base.
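To make that concrete, here is a minimal h5py sketch of the data-block approach, assuming two temporally multiplexed states sampled at the same frame rate with a half-frame offset; the rates and values are invented, and the rest of the required SNIRF structure (formatVersion, measurementList, probe, etc.) is omitted for brevity:

```python
import h5py
import numpy as np

fs = 10.0   # frame rate in Hz (invented for illustration)
n = 100     # number of frames

with h5py.File("multiplexed.snirf", "w") as f:
    nirs = f.create_group("nirs1")
    # State A: measurements acquired at the start of each frame.
    dA = nirs.create_group("data1")
    dA.create_dataset("time", data=np.arange(n) / fs)
    dA.create_dataset("dataTimeSeries", data=np.random.rand(n, 4))
    # State B: acquired half a frame later; its exact acquisition
    # times are preserved in its own time base.
    dB = nirs.create_group("data2")
    dB.create_dataset("time", data=np.arange(n) / fs + 0.5 / fs)
    dB.create_dataset("dataTimeSeries", data=np.random.rand(n, 4))
```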
But we at BU are now beginning to use SNIRF to store information about a designed probe that our acquisition software can then load and propagate to the acquired-data SNIRF file. In the past we used our old Homer-format .sd file for this. I can elaborate on the benefits of and motivation for providing this information to acquisition software via a standard rather than a proprietary file format if someone wants me to. One aspect is that we convinced NIRx to take probe information designed in AtlasViewer as input to their acquisition system so that we would not have to redesign our probes in their proprietary software. Rightly, they said they would only do this if the input was in a standard format and not in our .sd format.
SNIRF already has

`/nirs(i)/data(j)/measurementList(k)/moduleIndex`

It also has

`/nirs(i)/data(j)/measurementList(k)/sourceModuleIndex`

While I don't recall the exact reason these were added, I see that they could be used to specify a temporally multiplexed state. That is, for each measurement they could give an index indicating the state in which that source is on, and an index for the state in which the given measurement is recorded.
While ideally the user wouldn't have to figure out this temporal multiplexing arrangement themselves, at the moment there does seem to be benefit in adding this capability to AtlasViewer when designing a probe, and then passing it on to any acquisition software that supports smart temporal multiplexing, by which I mean that multiple sources are on during a given temporal state.
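As a sketch of what that could look like on disk, here is a hypothetical h5py snippet in which sourceModuleIndex and moduleIndex are repurposed as the source-on state and readout state of each measurement; this repurposing is only the idea floated above, not something the spec defines:

```python
import h5py

# (sourceIndex, detectorIndex, source-on state, readout state) per measurement;
# the probe layout and state assignments are invented for illustration.
measurements = [
    (1, 1, 1, 1),
    (1, 2, 1, 1),
    (2, 1, 2, 2),  # source 2 fires in the second temporal state
    (2, 2, 2, 2),
]

with h5py.File("probe_design.snirf", "a") as f:
    for k, (src, det, s_state, m_state) in enumerate(measurements, start=1):
        ml = f.create_group(f"nirs1/data1/measurementList{k}")
        ml.create_dataset("sourceIndex", data=src)
        ml.create_dataset("detectorIndex", data=det)
        ml.create_dataset("sourceModuleIndex", data=s_state)  # state in which the source is on
        ml.create_dataset("moduleIndex", data=m_state)        # state in which this measurement is read
```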
For now, I leave the discussion of whether or not we need to store frequency encoding states to another time.
With the passage of time, I really think this falls outside the scope of the SNIRF spec. First, the exact timing of data acquisition can be handled by storing data in different data blocks, each with its own time vector. Second, as for designing a probe that provides information to the acquisition system on how to multiplex the sources... well, that too is outside the scope of the spec.
@dboas I agree with the above, but I think there is a situation where the multiplex configuration is pertinent. Consider a system where two channels are acquired on a single detector at the same time. If these channels have vastly different intensities, it may be that the lower-intensity channel's noise floor is elevated by the shot noise of the higher-intensity channel. Knowing that this is the case permits one to properly weight each channel (e.g. through its variance or covariance when performing estimation or image reconstruction).
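For illustration, a minimal numpy sketch of the inverse-variance weighting this enables; all numbers are invented:

```python
import numpy as np

# Noise floors for two channels sharing one detector (invented values).
sigma_bright = 1.0e-3  # high-intensity channel
sigma_dim    = 5.0e-4  # would be far lower in isolation, but is elevated
                       # by the bright channel's shot noise

# Weight each channel by the inverse of its variance, e.g. for weighted
# least squares estimation or image reconstruction.
w = np.array([sigma_bright, sigma_dim]) ** -2.0
w /= w.sum()
print(w)  # the channel with the higher noise floor counts for less
```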
Whilst this information could be conveyed implicitly by storing a multiplex index of some sort, I can see that a more explicit method would be to permit the specification of a per-channel noise floor somewhere else in the specification.
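A purely hypothetical sketch of that alternative, using an invented per-channel field name ("noiseFloor") that does not exist in the current spec:

```python
import h5py

# Invented per-channel noise floors, one per measurementList entry;
# assumes the measurementList groups already exist in the file.
noise_floor = [1.0e-3, 5.0e-4, 1.2e-3, 6.0e-4]

with h5py.File("acquired.snirf", "a") as f:
    for k, nf in enumerate(noise_floor, start=1):
        # "noiseFloor" is a made-up field used only to illustrate the idea.
        f[f"nirs1/data1/measurementList{k}"].create_dataset("noiseFloor", data=nf)
```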
@samuelpowell, good point. And to add another situation that can arise with temporal multiplexing: we can have a single channel of data acquired twice within one frame of data. My group already does this (as I suspect you and others do) by multiplexing an LED at a high power for long separations and a low power for short separations. On the acquisition side, we choose to use only the low-power data for the short separations and the high-power data for the long separations, but data is acquired for every channel at each power level. We just throw the rest out.
Would a solution for the above also cover the situation I bring up? Personally, I would really not like SNIRF to solve it, because that would mean a person who just wants to analyze their SNIRF file would first have to decide how to handle a channel sampled at both low and high power before proceeding with analysis. It seems best to let vendors store that type of information in their own format and have users do some preprocessing that then saves the result into a SNIRF file.
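That preprocessing could be as simple as the following sketch, where the separations, cutoff, and data are all invented:

```python
import numpy as np

sep_mm = np.array([8.0, 30.0, 8.0, 30.0])  # source-detector separation per channel
d_low  = np.random.rand(100, 4)            # frames x channels, low-power acquisition
d_high = np.random.rand(100, 4)            # the same channels at high power

use_high = sep_mm > 15.0                   # arbitrary cutoff for illustration
d_out = np.where(use_high, d_high, d_low)  # low power for short separations,
                                           # high power for long separations
# d_out is a single series per channel, ready to save as dataTimeSeries.
```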
But the case you mention is different. SNIRF could potentially permit information about the multiplexing scheme to be stored that could either be completely ignored, or used to help understand potential excess shot noise (as could happen with frequency encoding) or cross-talk (as can easily happen with spatio-temporal multiplexing).
It seems that we could figure out a good way to incorporate this multiplexing information into the probe object. Do you want to suggest something?
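To seed that discussion, here is one entirely hypothetical shape for such probe fields; the names sourceStateIndex and stateFrequency are invented and are not part of the spec:

```python
import h5py
import numpy as np

with h5py.File("probe_design.snirf", "a") as f:
    probe = f.require_group("nirs1/probe")
    # sourceStateIndex(i): temporal state in which source i is on
    # (made-up field; two sources per state in this invented example).
    probe.create_dataset("sourceStateIndex", data=np.array([1, 2, 1, 2]))
    # stateFrequency(s): modulation frequency in Hz used during state s,
    # if frequency encoding is also employed (made-up field).
    probe.create_dataset("stateFrequency", data=np.array([7.8e3, 1.2e4]))
```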
I agree on the above. I've opened a new issue for future consideration and discussion.
This is what I think could be an issue in the near future with the snirf standard.
I don't see fields in the probe class to store information related to time or frequency multiplexing. This might be important, or even necessary, for fNIRS hardware in which the positions of the optodes are configurable, since the state and the frequency associated with each optode need to be specified somewhere.
As we are moving from SD files to SNIRF to store and specify the probe, we will need a way to do this, at least for the ninjaNIRS system (though other systems might benefit too).