jcouto / wfield

Tools to process widefield data
GNU General Public License v3.0

`trial_onsets.npy` structure #3

Closed oraby closed 3 years ago

oraby commented 3 years ago

I'm trying to run the pipeline on our data. Several command line arguments as well as functions require the `trial_onsets.npy` file. As our trial structure is a bit different, I'm trying to generate the file from our data. I found that this function creates the file: https://github.com/jcouto/wfield/blob/c9e38eb70b776b4322923b043a3b3d2dca4f4cef/wfield/io.py#L363 I have 2 questions:

  1. I can gather from that that the first parameter is the trial number, but I'm not sure about the 2nd and 3rd ones.
  2. Some notebooks use `trial_onsets['iframe']` (e.g. `onsets = np.load(pjoin(localdisk,'trial_onsets.npy'))['iframe']` in the approximate_svd notebook), and I'm not sure how to create such a structure, or whether this file is the same as the one created by the function above.

I have a semi-related question about `frames_average.npy`. I found one description of the file here: https://github.com/jcouto/wfield/blob/c9e38eb70b776b4322923b043a3b3d2dca4f4cef/wfield/decomposition.py#L113 I assume that this is the average of all the frames within the trial, rather than of the whole video. In some functions, however, I interpreted the structure as the average of each trial rather than the average across all trials.

jcouto commented 3 years ago

Hi, thanks for the question and interest.

  1. The first column is the trial number, the second is the index of the first frame in each trial, and the third is the index of the last frame in each trial (not used at the moment). Please let me know if that makes sense and works for you. I am happy to help make this more general or to provide clearer examples.
  2. Sorry about the confusion; that was an `np.recarray` in an earlier version of the package. I updated the examples. Thanks for finding this! This is how you could make a recarray if you wanted, though: `np.array([(1, 0, 3), (2, 4, 5)], dtype=[('itrial', '<i'), ('onset_frame', '<i'), ('offset_frame', '<i')])`
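To make the column layout concrete, here is a minimal sketch (with made-up frame counts, not part of wfield) of building and reading a plain-array `trial_onsets.npy`:

```python
import numpy as np

# Toy example: two trials of 100 frames each.
# Column 0: trial number, column 1: index of the first frame in the
# trial, column 2: index of the last frame (not used at the moment).
trial_onsets = np.array([[0,   0,  99],
                         [1, 100, 199]])
np.save('trial_onsets.npy', trial_onsets)

# The onset frames are then just the second column:
onsets = np.load('trial_onsets.npy')[:, 1]
```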

About frames_average: when doing the decomposition, the raw video frames are subtracted and then divided by an image. I usually use the mean of the frames in the baseline period of all trials. One can also use the average of all frames in the trial (i.e. including all trial epochs), but the values in that case are not in relation to baseline (obviously).
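As a rough illustration of the subtract-then-divide step (a sketch with toy data and an assumed 3-frame baseline period, not the wfield implementation):

```python
import numpy as np

# Toy stack standing in for the raw video: 10 frames of 4x4 pixels.
rng = np.random.default_rng(0)
frames = rng.random((10, 4, 4)).astype('float32') + 1.0  # keep values > 0

# frames_average: mean image over a hypothetical baseline period,
# here simply the first 3 frames pooled together.
frames_average = frames[:3].mean(axis=0)

# Normalization used before the decomposition: subtract, then divide.
dff = (frames - frames_average) / frames_average
```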

In some cases, subtracting the mean of frames in the baseline of each trial is useful, especially if there are slow fluctuations in overall fluorescence during the experiment (there is an example of how that can be done at the end of the script).
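A per-trial variant of the same normalization could look like this (a sketch with made-up shapes and an assumed 2-frame baseline per trial):

```python
import numpy as np

# Toy data: 2 trials x 5 frames x 4x4 pixels, values kept positive.
rng = np.random.default_rng(1)
stack = rng.random((2, 5, 4, 4)) + 1.0
baseline_len = 2  # assume the first 2 frames of each trial are baseline

# Per-trial baseline image, then per-trial dF/F; this compensates for
# slow drifts in overall fluorescence across the session.
baseline = stack[:, :baseline_len].mean(axis=1, keepdims=True)
dff = (stack - baseline) / baseline
```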

I updated some of the comments in the approximate_svd notebook to try to address this, I hope it helps! Please let me know if you have other questions.

oraby commented 3 years ago

Hi @jcouto , thanks for the quick feedback.

  1. .....
  2. .....

Yup, makes sense. Thanks for the clarification 🙏 .

> About frames_average: when doing the decomposition, the raw video frames are subtracted and then divided by an image. I usually use the mean of the frames in the baseline period of all trials. One can also use the average of all frames in the trial (i.e. including all trial epochs), but the values in that case are not in relation to baseline (obviously).

Understood. Unfortunately, the frames_average_for_trials() function in its current form won't work for us, as the animal initiates the trial by moving. So I guess what I need to do instead is to find some 'rest frames' between the previous trial and the current trial and use those as our baseline for that trial, or just add them to a stack of all baselines. I assume there are a few pitfalls to using a global baseline (e.g. bleaching), but I would think it all depends on the question one is trying to answer, and at this stage I want to focus on getting the pipeline going.
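The rest-frames idea could be sketched like this (purely illustrative, with made-up frame indices; the column layout follows the trial_onsets description above):

```python
import numpy as np

# Hypothetical numbers: a 110-frame movie, two trials with gaps before them.
rng = np.random.default_rng(2)
frames = rng.random((110, 4, 4)) + 1.0
trial_onsets = np.array([[0, 10, 49],   # trial 0: frames 10-49
                         [1, 60, 99]])  # trial 1: frames 60-99

# Collect the 'rest' frames that precede each trial onset.
rest = []
prev_end = 0
for _, onset, offset in trial_onsets:
    rest.append(frames[prev_end:onset])
    prev_end = offset + 1
rest_frames = np.concatenate(rest, axis=0)

# Use their mean image as the baseline for normalization.
frames_average = rest_frames.mean(axis=0)
```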

> I updated some of the comments in the approximate_svd notebook to try to address this, I hope it helps! Please let me know if you have other questions.

Thanks a lot. I might indeed come back with more questions later :). And thanks in general for the work you've done here and in the protocol paper, it really has helped us a lot.