alexpiet opened 5 months ago
Good point! We really should make sure events are synced and have the expected timing distributions. Depending on what we want to start with, I have some draft code for:
However, these are not enough yet for testing the sync + relevant timing.
For the sync part, we need to come up with a test first, for example:
How about discussing this in our next meeting? (software meeting maybe?) I can come up with a draft proposal by then. I think this will probably need a team effort across hardware + behavior control + analysis capsules. (Maybe also RAs' effort if we need some simple regular checks.)
And for the event timing distributions, what do you think about plugging some basic quality checks into the Streamlit app?
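To make the Streamlit idea concrete, here is a minimal sketch of the kind of distribution check such a panel could run. Everything here is hypothetical: the timestamp format, the expected mean interval, and the tolerance are placeholders, not part of any existing pipeline.

```python
import numpy as np

def check_interval_distribution(timestamps, expected_mean, tolerance=0.05):
    """Flag an event stream whose inter-event intervals drift from expectation.

    timestamps: sorted event times in seconds (assumed format).
    expected_mean: expected mean inter-event interval in seconds.
    tolerance: allowed fractional deviation of the observed mean.
    Returns (passed, observed_mean).
    """
    intervals = np.diff(np.asarray(timestamps, dtype=float))
    if intervals.size == 0:
        return False, float("nan")
    observed = intervals.mean()
    passed = abs(observed - expected_mean) <= tolerance * expected_mean
    return bool(passed), float(observed)

# Example: a clean 10 Hz event stream should pass a 0.1 s expectation.
ts = np.cumsum(np.full(100, 0.1))
ok, mean_iv = check_interval_distribution(ts, expected_mean=0.1)
```

A Streamlit page could then just display `passed`/`observed_mean` per event type, with a histogram of the intervals for anything that fails.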
Thanks for taking charge of this @ZhixiaoSu. I added a discussion point to the agenda for the next software meeting (05/13). A proposal by then would be great. It would be good to think about (1) what we want to test, (2) how often, (3) who does the testing (automated, RAs, scientists, etc.), (4) where the relevant testing code runs/lives (on the rigs, CO, Streamlit).
@bruno-f-cruz mentioned that some testing could be common to all tasks if we (1) have standard logging practices for Harp messages, and (2) have common task elements. For example, lick sensors or solenoids should have common testing code, but internal task timing elements should probably be task specific. You should touch base with him, and assume we agree on and implement a standard Harp logging practice in the near future.
I can come up with a proposal by next Monday! It will be more of a proposal for discussion, since we first need to define the task and figure out its scale, which I think should be the goal of the discussion. I'll consult Bruno first to make sure I don't miss important parts.
My preference is to solve this more holistically by coming up with a standard for video data collection. We can start by discussing the current implementations and what quality control checks should be in place, but we should make something that leverages a common standard so everyone can use it.
@ZhixiaoSu Did you create a proposal for testing things like task events? It seems like video data is being handled separately.
We should develop a CO capsule that tests the timing of relevant events in the task and looks for any discrepancies from expectation. @ZhixiaoSu want to take a stab at this?
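One possible shape for such a capsule check, sketched with placeholder inputs: compare recorded event times against their expected times and flag outliers. The 2 ms tolerance, the one-to-one matching of the two streams, and the example data are all assumptions for illustration, not a spec.

```python
import numpy as np

def find_timing_discrepancies(expected_times, recorded_times, max_offset=0.002):
    """Return indices where a recorded event deviates from its expected time
    by more than max_offset seconds (threshold is a placeholder).

    Assumes the two arrays are already matched one-to-one.
    Returns (bad_indices, offsets).
    """
    expected = np.asarray(expected_times, dtype=float)
    recorded = np.asarray(recorded_times, dtype=float)
    offsets = recorded - expected
    bad = np.flatnonzero(np.abs(offsets) > max_offset)
    return bad, offsets

# Toy example: events expected every second, with a uniform 1 ms latency,
# plus one injected 10 ms discrepancy at index 3.
expected = np.arange(5) * 1.0
recorded = expected + 0.001
recorded[3] += 0.01
bad_idx, offs = find_timing_discrepancies(expected, recorded)
```

A real capsule would pull the two streams from the session's logs and report per-event summaries rather than raw indices, but the core comparison could look like this.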