As part of outlining the specs, I think it would be good to put together an actual set of instruction scripts that demonstrate the specs (almost like unit tests). In a magical world where it wouldn't be hard, it would be nice if we saved data from each run (like a scope picture, or better yet the timetraces themselves) that we can compare against when things are changed down the line. Having actual data/traces that demonstrate the relevant specs is powerful to include in the docs (and, more importantly, a paper).
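Something like the sketch below is roughly what I have in mind: run an instruction script, save the resulting timetrace, and compare later runs against the stored reference. This is just a hypothetical outline — `acquire_timetrace` is a stand-in for whatever the real instruction-script run would return, and the directory/file names are placeholders, not anything that exists in the repo yet.

```python
# Sketch of a spec-demo script that saves a reference timetrace and
# compares later runs against it (regression-test style).
from pathlib import Path

import numpy as np

REFERENCE_DIR = Path("examples/reference_traces")  # placeholder location


def acquire_timetrace() -> np.ndarray:
    """Stand-in for the actual instruction-script run.

    The real version would execute the instruction script and return the
    measured timetrace; here we just synthesize a ramp for illustration.
    """
    return np.linspace(0.0, 1.0, 1000)


def save_reference(name: str, trace: np.ndarray) -> None:
    """Store a trace so future runs can be diffed against it."""
    REFERENCE_DIR.mkdir(parents=True, exist_ok=True)
    np.save(REFERENCE_DIR / f"{name}.npy", trace)


def matches_reference(name: str, trace: np.ndarray, rtol: float = 1e-3) -> bool:
    """Compare a fresh trace with the stored reference, unit-test style."""
    reference = np.load(REFERENCE_DIR / f"{name}.npy")
    return bool(np.allclose(trace, reference, rtol=rtol))


if __name__ == "__main__":
    name = "spec_demo_ramp"  # placeholder name for one spec demo
    trace = acquire_timetrace()
    if not (REFERENCE_DIR / f"{name}.npy").exists():
        save_reference(name, trace)
        print(f"saved new reference trace '{name}'")
    else:
        print(f"'{name}' matches reference:", matches_reference(name, trace))
```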
Would also be nice to include scripts that highlight quirks/limitations/bugs of the implementation. Then we have firm reference points, whether we're explaining why someone's code isn't behaving as expected or verifying that a bug got fixed.
I guess I'm asking for some python scripts (at least) to start getting included in the repo as examples. If nothing else, it'll help me wrap my head around what it's like to actually work with this thing.