I guess you mean significant, not finite ;) But is it really significant? I seem to recall the trigger pulse duration is 10µs - two orders of magnitude below our required precision for the stimulus structure, and four orders of magnitude below our required precision for the onset. If this really is the case we can just treat it as an instantaneous event at the onset.
Having said that, I am also curious exactly what operations you would want to execute between the trigger and the stimulation onset. Ideally we would need to make sure that these operations take no longer than 100ms, though it might be better to aim for no more than 1ms. Can that be done?
The operations include the loop over the events and some time calculations. Also, I think it would be a good idea to disable interrupts while a sequence is delivered. It takes about 380µs from the time the trigger is received until the first laser pulse is delivered. As you said, it is probably not relevant for the experiment, but I would like to fix a time anyway. As it is, an onset time of 0ms for the first event causes an error because of the ~380µs delay. My idea would be to always start the sequence 1ms after the trigger was received. This should be plenty of time to complete the operations.
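To make the 1ms idea concrete, here is a minimal MicroPython-style sketch. The event structure, `pulse_laser`, and the timestamp handling are assumptions rather than the actual COSplay code, and the interrupt handling is left out:

```python
import utime

SEQUENCE_START_DELAY_US = 1000  # fixed 1ms between trigger and the earliest event onset

def deliver_sequence(events, trigger_time_us):
    """Deliver `events` relative to the trigger timestamp.

    `events` is assumed to be a list of (onset_us, duration_us) tuples,
    with onsets measured from the trigger; `trigger_time_us` is the
    utime.ticks_us() value captured when the trigger arrived.  These
    names are illustrative, not the actual COSplay API.
    """
    for onset_us, duration_us in events:
        # every onset is shifted by the fixed 1ms start delay, so the
        # ~380us of setup work always fits inside the budget
        target = onset_us + SEQUENCE_START_DELAY_US
        while utime.ticks_diff(utime.ticks_us(), trigger_time_us) < target:
            pass  # busy-wait until the scheduled onset
        pulse_laser(duration_us)  # hypothetical output routine
```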
Sounds like a good idea. But if we have a 1ms delay from trigger to onset, we would need to make note of that and add it to the event timing of the stimulation train report which the COSplay system pipes back to the scanner computer.
Since our TR is commonly in the second range, this would make the timing values look very clunky. I see a few options here:
1) Set the onset delay to 1ms and just ignore it (since our TR is commonly at least two orders of magnitude larger) - this may, however, restrict the suitability of our system for more highly resolved data (e.g. optical measurements @felixsc1 ? )
2) Set the onset delay to 1ms and just ignore it, unless it is within one order of magnitude of the TR - only in that case add it to the output (see the sketch further down).
3) Set the onset delay to 1 TR and add that value to the report.
4) Configure COSgen to start with at least one "baseline" TR, set the onset delay to 1 TR, and subtract 1 TR from all the event times. This seems more roundabout, but it would keep the "report" sequence piped back to the scanner identical to the sequence presented to the COSplay system.
I think I am leaning towards (2) but I'm still unsure which approach is best. What do you think?
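To make (2) concrete, a rough sketch of how the report step could decide whether to include the delay (the function name and argument layout are illustrative, not the actual COSplay code):

```python
ONSET_DELAY_S = 0.001  # fixed 1ms trigger-to-onset delay

def report_onsets(event_onsets_s, tr_s):
    """Option (2): fold the onset delay into the reported event times
    only when it is within one order of magnitude of the TR."""
    if ONSET_DELAY_S * 10 > tr_s:
        return [onset + ONSET_DELAY_S for onset in event_onsets_s]
    return list(event_onsets_s)
```

With a TR in the second range the delay would be dropped, while for something like a 5ms TR it would be added to the reported onsets.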
Where do we stand on this?
The timing is with respect to the falling edge of the TTL pulse. As it is right now, the first pulse can start approximately 76µs after receiving the trigger. Therefore, an error message is issued if the accuracy is set to microseconds and the onset of the first event is less than ~76µs. For millisecond accuracy the delay is simply ignored, as it is too small.
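As an illustration of that check (the constant, function name, and error handling are hypothetical, not the actual COSplay code):

```python
SETUP_DELAY_US = 76  # approximate time from trigger to the earliest possible first pulse

def check_first_onset(first_onset_us, accuracy):
    """Reject sequences whose first onset cannot be honoured.

    With microsecond accuracy the ~76us setup delay matters; with
    millisecond accuracy it is below the resolution and is ignored.
    """
    if accuracy == 'us' and first_onset_us < SETUP_DELAY_US:
        raise ValueError(
            'First event onset of {}us is shorter than the ~{}us '
            'setup delay after the trigger.'.format(first_onset_us, SETUP_DELAY_US)
        )
```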
The trigger signal has a finite duration, so I was wondering whether the image acquisition starts at the beginning or the end of the trigger signal. There are a number of operations that need to be executed after receiving the trigger but before the sequence can start. I think we should fix a time limit for these. If the image acquisition starts at the end of the trigger signal, the trigger duration could be a good choice.
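For reference, a pyboard-style sketch of latching the trigger on its falling edge (the timing reference mentioned above) and timestamping it; the pin name and pull configuration are assumptions:

```python
import pyb
import utime

trigger_time_us = None  # timestamp of the most recent trigger, set in the callback

def _on_trigger(line):
    # latch the moment the falling edge of the TTL pulse arrives;
    # all subsequent event onsets are measured from this timestamp
    global trigger_time_us
    trigger_time_us = utime.ticks_us()

# 'X1' is only an example pin; the actual trigger input may differ
ext = pyb.ExtInt(pyb.Pin('X1'), pyb.ExtInt.IRQ_FALLING, pyb.Pin.PULL_UP, _on_trigger)
```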