Thanks! This has been on my list as well – displaying the wrong `source_id` is a high-priority bug, as it is a data traceability issue!
Looking at the rest of `data_changed`, wouldn't we also need to completely re-initialise the series/online analyses? Also, in theory one run could be a scan, the other a Repeat non-scan (necessitating a `SubscriberSinglePointModel` vs `SubscriberScanModel` switch).
I believe my original intention here was not to deal with all that complexity in the UI code, but to have the model layer just completely replace the `Model` if the same prefix was reused. Clearly, I failed to implement that in the `SubscriberRoot` code.
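
To make the "replace the whole model" idea concrete, here is a minimal sketch of what the model-layer side could look like. The `SubscriberScanModel`/`SubscriberSinglePointModel` names come from the discussion above, but the stubs, attribute names, and the `_handle_new_scan` hook are hypothetical, not ndscan's actual implementation:

```python
# Hypothetical sketch only: rebuilding the model from scratch when a new
# scan is detected under the same prefix, so the UI never has to patch
# stale series/online analyses/annotations in place.

class SubscriberScanModel:         # stand-in for the real ndscan class
    def __init__(self, schema, prefix): ...

class SubscriberSinglePointModel:  # stand-in for the real ndscan class
    def __init__(self, schema, prefix): ...

class SubscriberRoot:
    """Owns the current model and replaces it wholesale on a new scan."""

    def __init__(self, prefix, on_model_changed):
        self._prefix = prefix
        self._model = None
        self._on_model_changed = on_model_changed  # UI registers a callback

    def _handle_new_scan(self, schema):
        # Discard the old model entirely; choosing the type from the new
        # schema also covers a scan followed by a Repeat non-scan.
        if schema.get("axes"):
            self._model = SubscriberScanModel(schema, self._prefix)
        else:
            self._model = SubscriberSinglePointModel(schema, self._prefix)
        self._on_model_changed(self._model)
```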
> I believe my original intention here was not to deal with all that complexity in the UI code, but have the model layer just completely replace the Model if the same prefix was reused.
Is this still your preferred approach? What would be the indication that a new scan has been issued? Should the `TopLevelRunner` also broadcast the date/time when the experiment was run? Or just use `seed`?
> Is this still your preferred approach?
It seems cleaner, as otherwise you might e.g. get a window where the old data, including annotations, is still displayed but the `source_id` has already been updated (this might be quite long for subscans, probably until the first complete point comes in?).
Not sure what the best marker of a new scan would be; perhaps something like `ndscan_schema_revision`. Separating things like that cleanly was a large part of my initial motivation for the server-side dataset namespacing. Without it, we might need to manually handle this in the subscriber code (such that the old data isn't used to get the schemata/points immediately after the refresh).
Another option would be to give up and accept that there might sometimes be mixed states. If the user explicitly specifies the same top-level runner dataset root from multiple experiments (can be multiple instances of the same experiment), there is nothing we can do to avoid the mess on the applet side anyway.
Does sipyco emit mods when datasets are written via `set_dataset()` from a new experiment, even if the value has not changed?
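
One way to answer that question empirically is to subscribe to the master's `datasets` notifier with sipyco and log every mod while resubmitting an experiment. A small sketch (assuming the usual ARTIQ notify port 3250; adjust host/port for your setup, and note the exact mod dict layout shown in the comment is from memory):

```python
import asyncio
from sipyco.sync_struct import Subscriber

def init_datasets(struct):
    # Called with the full dataset dict on (re)connect.
    print("init:", len(struct), "datasets")
    return struct

def on_mod(mod):
    # Each mod is a dict along the lines of {"action": "setitem",
    # "path": [...], "key": ..., "value": ...}; watching these while
    # resubmitting shows whether an unchanged set_dataset() still emits one.
    print("mod:", mod)

async def main():
    sub = Subscriber("datasets", init_datasets, notify_cb=on_mod)
    await sub.connect("::1", 3250)
    try:
        await asyncio.sleep(60)  # observe for a minute while resubmitting
    finally:
        await sub.close()

asyncio.run(main())
```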
Then a clean solution would be to designate one dataset that takes the role of beginning a new scan (e.g. `ndscan_schema_revision`), and when this is observed in the subscriber code, chuck away all the current datasets and only add the new datasets back in as they come in.
(The problem to avoid here is refreshing the scan with stale data after a new one has begun, while still immediately loading in all the new data that might already be present.)
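
A rough sketch of that subscriber-side handling, assuming a designated `ndscan_schema_revision` dataset as suggested above (no such dataset exists yet; the class and method names are made up for illustration):

```python
# Hypothetical sketch: drop everything cached under the prefix when the
# designated "revision" dataset changes, then repopulate from incoming mods.

class PrefixCache:
    def __init__(self, prefix):
        self._prefix = prefix
        self._revision = None
        self._datasets = {}

    def on_dataset_set(self, key, value):
        if key == self._prefix + "ndscan_schema_revision":
            if value != self._revision:
                # A new scan has begun under the same prefix: discard all
                # cached data so stale schemata/points/annotations are
                # never consulted again.
                self._datasets.clear()
                self._revision = value
            return
        # Datasets written by the new scan are picked up as their mods
        # arrive. (Re-seeding from a snapshot of data the new scan has
        # already written before the revision bump was observed is the
        # tricky part discussed above and is left out here.)
        self._datasets[key] = value
```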
Merged this anyway to avoid more incorrect `source_id`s making it into lab book entries, but I'd still like to address this more properly/generally: https://github.com/OxfordIonTrapGroup/ndscan/issues/387
Without this, the applet of a `TopLevelRunner` with a constant dataset prefix doesn't update the `source_id`, the title, or the cached data (e.g. averaged points) upon resubmitting the next scan. Internal example: SPAM measurement.