AllenInstitute / AllenSDK

code for reading and processing Allen Institute for Brain Science data
https://allensdk.readthedocs.io/en/latest/

apply monitor delay to stimulus start times in the stimulus_presentations table #2217

Closed · matchings closed 2 years ago

matchings commented 3 years ago

It appears that the display lag is not being applied to the stimulus_presentations table start_times. I believe that there should be an identical start_time in the stimulus_presentations table for every change_time in the trials table, but they are off by an amount roughly equal to the known display lag of the monitor.

Here is an example demonstrating that stimulus_presentations table start_times do not match trials table change_times: [screenshot]

The difference in the timing between the two tables is roughly equivalent to the known monitor delay: [screenshot]
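A minimal sketch of the comparison (the ophys_experiment_id is arbitrary; is_change, stimulus_change, and from_lims are the same accessors used elsewhere in this thread):

    import numpy as np
    from allensdk.brain_observatory.behavior.behavior_ophys_experiment import (
        BehaviorOphysExperiment)

    # load one experiment directly from LIMS
    experiment = BehaviorOphysExperiment.from_lims(951980486)
    stim = experiment.stimulus_presentations
    trials = experiment.trials

    # change times as reported by each table
    stim_times = stim[stim.is_change == True]['start_time'].values
    trial_times = trials[trials.stimulus_change == True]['change_time'].values

    # if monitor delay were applied consistently, this would be ~0
    diff = trial_times - stim_times
    print(f"mean offset: {diff.mean():.4f} s +/- {diff.std():.2e} s")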

I am surprised to see this because this issue has come up in the past and I thought it was resolved. Here is an old GitHub issue related to monitor delay: https://github.com/AllenInstitute/AllenSDK/issues/1318

also related: https://github.com/AllenInstitute/AllenSDK/issues/1876

AllenSDK 2.12.3

alexpiet commented 3 years ago

Some additional information on this issue.

The following is true on SDK 2.13.0

It looks like there is a difference between Scientifica and Mesoscope sessions. The Scientifica sessions seem to have an average difference between trials.change_time and stimulus_presentations.start_time of about 0.0045 s; Mesoscope sessions seem to have an average difference of about 0.019 s.

I made a function that randomly checks sessions and prints the ophys_experiment_id, equipment_name and the difference between the trials table and stimulus_presentations table.

In the file /allen/programs/braintv/workgroups/nc-ophys/alex.piet/SDK_QC/change_time_alignment.py

    experiment_table = get_experiment_table()
    check_many(experiment_table, n=5)
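A hypothetical reconstruction of that function's core loop (get_experiment_table, the experiment-table index, and the equipment_name column are assumed to match the project cache's ophys experiment table):

    import numpy as np
    from allensdk.brain_observatory.behavior.behavior_ophys_experiment import (
        BehaviorOphysExperiment)

    def check_many(experiment_table, n=5):
        # sample n experiments at random and report the offset between
        # trials.change_time and stimulus_presentations.start_time
        for oeid, row in experiment_table.sample(n).iterrows():
            experiment = BehaviorOphysExperiment.from_lims(oeid)
            stim = experiment.stimulus_presentations
            trials = experiment.trials
            diff = (trials[trials.stimulus_change == True]['change_time'].values
                    - stim[stim.is_change == True]['start_time'].values)
            print(f"{oeid}, {row['equipment_name']}, "
                  f"{np.mean(diff)} +/- {np.std(diff)}")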

[screenshot of the printed output]

danielsf commented 3 years ago

@matchings @alexpiet

I just want to verify that the goal is to get stimulus_presentations[is_change==True].start_time and trials[stimulus_change==True].change_time to be identical.

When I naively apply monitor delay to the stimulus timestamps and run Alex's script, I get

951980486, MESO.1, -0.016682473118282266 +/- 2.212651376120347e-05
1083539714, MESO.1, -0.01668042016806352 +/- 2.1990383359356793e-05
894727297, CAM2P.5, -0.01667818181817467 +/- 3.722900789509714e-05
948704862, MESO.1, -0.01666608823529463 +/- 0.0002653837252610767
792813858, CAM2P.4, -0.016680970149232578 +/- 3.707333200508564e-05
868870085, MESO.1, -0.016677918215600246 +/- 3.0020699206514374e-05

Which is both hopeful and disturbing.

alexpiet commented 3 years ago

Yes, @danielsf the goal is that stimulus_presentations[is_change==True].start_time and trials[stimulus_change==True].change_time should be identical.

Is that number still the time from trials minus the time from stimulus? So -16 ms means the stimulus presentation time is 16 ms later than the trials table time?

I agree it's both hopeful and disturbing!

danielsf commented 3 years ago

The only thing I changed about your script was that I added the +/- std(diff) to see how much the difference varies. So, yes: it is still trials minus stimulus.

I guess we have something to work on.

danielsf commented 3 years ago

@alexpiet @matchings

Should these two arrays (stim_change_frames and trial_change_frames) be identical?

stim = behavior_ophys_experiment.stimulus_presentations.copy()
trials = behavior_ophys_experiment.trials.copy()

stim_change_frames = stim[stim.is_change==True]['start_frame'].values
trial_change_frames = trials[trials.stimulus_change==True]['change_frame'].values

They are currently off by one (the trial change_frames are exactly one less than the stimulus start_frames). If I, for no good reason, increment the trial change frame by one, the timing discrepancy goes to zero identically. I just want to confirm that these two frame indexes are supposed to be the same thing before I diagnose exactly why the trials table has a different value from the stimulus_presentations table.
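For concreteness, the off-by-one relationship can be checked directly (continuing from the snippet above):

    import numpy as np

    # the two arrays agree exactly once the trial change frames are
    # shifted forward by one frame
    assert np.array_equal(stim_change_frames, trial_change_frames + 1)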

danielsf commented 3 years ago

Current best working theory is that there is no equivalent to this codeblock

https://github.com/AllenInstitute/AllenSDK/blob/master/allensdk/brain_observatory/behavior/stimulus_processing.py#L444-L449

            for idx, (epoch_start, epoch_end,) in enumerate(draw_epochs):
                # visual stimulus doesn't actually change until start of
                # following frame, so we need to bump the
                # epoch_start & epoch_end to get the timing right
                epoch_start += 1
                epoch_end += 1

in the trials processing code, which is why the trial change_frames are always exactly one frame behind the stimulus start_frames.

Apparently, that codeblock was introduced in this commit

commit 5670f087291feef0f3dd8c7f36bd482356019df3 (HEAD)
Author: nicain <nicain.seattle@gmail.com>
Date:   Mon Feb 25 16:57:24 2019 -0800

    completed lick_times in trials.  Key bug in monitor delay

alexpiet commented 3 years ago

> @alexpiet @matchings
>
> Should these two arrays (stim_change_frames and trial_change_frames) be identical?
>
> stim = behavior_ophys_experiment.stimulus_presentations.copy()
> trials = behavior_ophys_experiment.trials.copy()
>
> stim_change_frames = stim[stim.is_change==True]['start_frame'].values
> trial_change_frames = trials[trials.stimulus_change==True]['change_frame'].values
>
> They are currently off by one (the trial change_frames are exactly one less than the stimulus start_frames). If I, for no good reason, increment the trial change frame by one, the timing discrepancy goes to zero identically. I just want to confirm that these two frame indexes are supposed to be the same thing before I diagnose exactly why the trials table has a different value from the stimulus_presentations table.

I'm not sure what you mean by off by one. Do you mean that there are an unequal number of changes in the two tables? Or do you mean there are the same number of changes, but their timing is off by one?

I never deal with the "frame" columns; I always work with "start_time" and "change_time". I'm not totally sure what the frames are, but I suspect those columns should be exactly the same.

danielsf commented 3 years ago

I'm sure no one will be shocked to learn that the answer to this riddle involves inspecting the pickle file.

This is the pickle file I'm using

/allen/programs/braintv/production/visualbehavior/prod2/specimen_850862430/behavior_session_951520319/951410079.pkl

corresponding to the ophys experiment with ophys_experiment_id = 951980486

If you load the pickle file and look at the trial log, you can see the last few stimulus changes recorded like so (each entry appears to be ((initial_image, initial_image), (change_image, change_image), change_time, change_frame)):

>>> import pandas as pd
>>> data = pd.read_pickle(path_to_pickle_file)
>>> data['items']['behavior']['trial_log'][-1]['stimulus_changes']
[(('im085', 'im085'), ('im065', 'im065'), 3907.062684612612, 233796)]
>>> data['items']['behavior']['trial_log'][-2]['stimulus_changes']
[(('im077', 'im077'), ('im085', 'im085'), 3898.8058059642453, 233301)]
>>> data['items']['behavior']['trial_log'][-3]['stimulus_changes']
[(('im085', 'im085'), ('im077', 'im077'), 3888.29714789606, 232671)]
>>> data['items']['behavior']['trial_log'][-4]['stimulus_changes']
[(('im066', 'im066'), ('im085', 'im085'), 3878.538949165118, 232086)]

Naively loading this experiment with BehaviorOphysExperiment.from_lims, I see the discrepancy of 1 between what the trials table records as the change frames and what the stimulus table records as the change frames.

trial change
[231636. 232086. 232671. 233301. 233796.]
stim change
[231637 232087 232672 233302 233797]

As you can see, the change frames recorded in the trials table agree exactly with what is in the pickle file. The stimulus table's change frames have been advanced by +1.

I am 85% sure that this is due to that block of code that I referenced above

https://github.com/AllenInstitute/AllenSDK/blob/master/allensdk/brain_observatory/behavior/stimulus_processing.py#L444-L449

So the question becomes: was that the right thing to do, and should I add something similar to our trials processing code?

@alexpiet The change_time and start_time columns are found by taking the change_frame and start_frame columns and using them as indices into the array of stimulus timestamps. Basically, we have two ways of tracking time in these data frames.
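A hypothetical illustration of that lookup, using the names from the earlier snippet (stimulus_timestamps is the experiment's timestamp array, so an off-by-one frame becomes an off-by-one-timestep error in time):

    # frames index into the stimulus timestamp array; whichever table
    # records the wrong frame will report a time one timestep off
    stimulus_timestamps = behavior_ophys_experiment.stimulus_timestamps
    trial_change_times = stimulus_timestamps[trial_change_frames.astype(int)]
    stim_change_times = stimulus_timestamps[stim_change_frames]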

alexpiet commented 3 years ago

> Current best working theory is that there is no equivalent to this codeblock
>
> https://github.com/AllenInstitute/AllenSDK/blob/master/allensdk/brain_observatory/behavior/stimulus_processing.py#L444-L449
>
>     for idx, (epoch_start, epoch_end,) in enumerate(draw_epochs):
>         # visual stimulus doesn't actually change until start of
>         # following frame, so we need to bump the
>         # epoch_start & epoch_end to get the timing right
>         epoch_start += 1
>         epoch_end += 1
>
> in the trials processing code, which is why the trial change_frames are always exactly one frame behind the stimulus start_frames.
>
> Apparently, that codeblock was introduced in this commit
>
>     commit 5670f087291feef0f3dd8c7f36bd482356019df3 (HEAD)
>     Author: nicain <nicain.seattle@gmail.com>
>     Date:   Mon Feb 25 16:57:24 2019 -0800
>
>         completed lick_times in trials.  Key bug in monitor delay

So that code block says that the time of the stimulus change is on the new (next) frame, so it advances the frame by one. And that logic isn't happening in the trials table. I can't comment on whether that adjustment needs to happen, but it seems reasonable that if it happens for the stimulus table, then it should happen for the trials table too.

Does that make sense @matchings ?

It's unfortunate that a lot of the original team members have left this project; I'm sure Doug, Justin, or Nick could help us here.

danielsf commented 3 years ago

Is there an MPE engineer we should be asking about this?

danielsf commented 3 years ago

From Arielle Leon on MPE (over email)

“the visual stimulus doesn’t actually change until the start of the next frame” is true. If you are using the same indices for the trials data, then I would increment that frame as well.

I will add the +=1 logic to the trials processing module.
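A sketch of what that amounts to (hypothetical, not the exact patch; pickle_change_frame stands for the frame recorded in the pickle file's trial_log):

    # the stimulus doesn't appear on screen until the start of the next
    # frame, so bump the recorded frame by one before converting it to
    # a time, exactly as stimulus_processing.py already does
    change_frame = pickle_change_frame + 1
    change_time = stimulus_timestamps[change_frame]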

danielsf commented 3 years ago

@alexpiet @matchings

With the aim of minimizing the number of places where I have to update the trial processing code to reflect Arielle's message, I'm trying to clean out some of our obviously neglected and unused trials processing code. There are two dataframes that are currently generated by code in

allensdk.brain_observatory.behavior.trials_processing

that I suspect are unused, but I want to check with you before I delete them. They are

1) the extended trials dataframe defined here

https://github.com/AllenInstitute/AllenSDK/blob/master/allensdk/brain_observatory/behavior/trials_processing.py#L1022-L1166

2) the rolling performance dataframe defined here

https://github.com/AllenInstitute/AllenSDK/blob/master/allensdk/brain_observatory/behavior/trials_processing.py#L1240-L1348

Do we still need to provide/support those dataframes, or can I safely delete them?

alexpiet commented 3 years ago

@danielsf I do not use those functions, but I can't speak for everyone. Perhaps the best course of action is to add a deprecation warning, and then remove them in a few weeks?

danielsf commented 3 years ago

I see your point. I really just want to rip this all out, though. The code is poorly documented and poorly tested, so I can't really tell what it is supposed to be doing. I worry that it will be a lot of work to bring it in line with the proposed change in this ticket (correctly applying monitor_delay and syncing up trial and stimulus processing) for something we don't actually want to support going forward.

alexpiet commented 3 years ago

That makes sense. I'm ok with removing it if @matchings approves

matchings commented 3 years ago

Just catching up here, and I'm a little bit confused. Is the discrepancy between the trials table start_time and the stimulus_presentations table start_time due to this off-by-one-frame thing, or due to the monitor delay not being applied? If it is the off-by-one-frame issue, does that mean that the monitor delay is being properly applied to both tables?

Regarding the 'extra' functions in trials_processing.py: are we positive these are unused? I believe there is an attribute of the behavior_ophys_experiment object (and behavior_session object) called get_rolling_performance_df, and if it uses this function to create it, then we need to keep it.

I think create_extended_trials can be deleted. That code looks very old, and I don't think it adds much that isn't already in the regular trials table the SDK provides.

danielsf commented 3 years ago

@matchings You are correct that construct_rolling_performance_df is used in BehaviorSession. Sorry about that (in my defense, I would have realized this when I deleted the code and ran our tests). I will keep that, but blow away the create_extended_trials table. Thanks.

> Just catching up here, and I'm a little bit confused. Is the discrepancy between the trials table start_time and the stimulus_presentations table start_time due to this off-by-one-frame thing, or due to the monitor delay not being applied? If it is the off-by-one-frame issue, does that mean that the monitor delay is being properly applied to both tables?

When you opened this issue, there were actually two problems: 1) monitor delay was not being properly applied to the stimulus table, and 2) the trials table and stimulus table's time arrays were off by one.

Problem (1) hid problem (2) (sort of; the fact that the time discrepancy between the two tables was always close to, but not exactly equal to, the monitor delay was an indication that something fishy was going on). Once the monitor delay issue was fixed (the fix is in a PR, not yet merged), problem (2) became apparent. Whenever I ran Alex's code, the time arrays in stimulus and trials were always off by exactly 16 ms (which, I gather, is one timestep in the sync files).

Arielle has cleared up for me that the way we are handling timesteps in the stimulus table was correct and the trials table is what needs to be updated. Being lazy and not wanting to update 5 different bits of code, I did my best to clean up our trials processing code, since a lot of redundant functionality survived the recent refactor. That is what prompted my message this morning about clearing out the old functions.

I think I know what to do now (keep the rolling DF, remove the extended DF), so I will do it. I'll let you know when the PR is ready for a test drive.

matchings commented 3 years ago

@danielsf that was a super helpful explanation, thanks! I am on board with your plan.

danielsf commented 2 years ago

Wow... that should not have taken so long. Anyway: here is the branch that should fix this problem

https://github.com/AllenInstitute/AllenSDK/tree/ticket/2217/monitor/delay

If you have any quick tests that you want to run before we merge, have at it. Thanks. @alexpiet @matchings

danielsf commented 2 years ago

Note: the dataframes that had the discrepancy are all written directly to the NWB files, so if you try to instantiate a session from_nwb, it will still appear broken. You'll need to instantiate the sessions from_lims to see the fix.
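For example (the from_nwb_path entry point is assumed here):

    from allensdk.brain_observatory.behavior.behavior_ophys_experiment import (
        BehaviorOphysExperiment)

    # reflects the fix: tables are rebuilt from the pickle/sync files
    experiment = BehaviorOphysExperiment.from_lims(951980486)

    # will still show the old tables until new NWB files are released
    # experiment = BehaviorOphysExperiment.from_nwb_path(path_to_nwb)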

Pika is aware that we are going to have to release new NWB files sometime in the near future. We are trying to pool all of the issues that require re-release so we aren't pushing out three new releases over the next four months.