ContextLab / attention-memory-task

An experiment used to explore interactions between covert attention and recognition memory

Minimalist updates #95

Open KirstensGitHub opened 9 months ago

KirstensGitHub commented 9 months ago

Hi Jeremy,

I've been looking over the AM code and paper this weekend. I started by reviewing my high-level changes to the code and text, but since those changes build on the latest helper functions, I took a closer look at the helpers themselves. I now suspect some of them aren't working as intended.

For example, I can download the latest repository from the CDL GitHub, make no changes, and run the analyses.ipynb notebook. This reproduces the figures and results in the paper.

I can then make one change to the notebook: substituting the sustained / variable data with the sustained_unfiltered / variable_unfiltered data. Re-running the notebook gives identical figures and results for all of the memory analyses.

It looks like this happens because the memory data never get filtered: the helper functions remove presentation trials where the participant looked at the image, but they do not remove the corresponding memory trials involving those images.
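Concretely, the fix I have in mind (and implement in [1b] below) looks roughly like this. The dataframe and column names here (presentation, memory, image, gaze_violation) are hypothetical stand-ins, not the actual names in helpers.py:

```python
# Sketch only -- dataframe and column names are hypothetical.
# Images whose presentation trials get dropped for gaze violations:
dropped_images = set(presentation.loc[presentation['gaze_violation'], 'image'])

# Filter the presentation trials (this part the helpers already do)...
presentation_filtered = presentation[~presentation['gaze_violation']]

# ...and also drop the memory trials probing those same images
# (this is the step that currently seems to be missing):
memory_filtered = memory[~memory['image'].isin(dropped_images)]
```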

Here are some quick & easy checks:

[Screenshots (2023-12-31) illustrating the checks]
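If you want to reproduce the check inside analyses.ipynb once the data are loaded: the variable names below follow the notebook, but the trial_type column is an assumption about the data structure.

```python
# If memory trials were being filtered, these would differ:
mem = sustained[sustained['trial_type'] == 'memory']
mem_unf = sustained_unfiltered[sustained_unfiltered['trial_type'] == 'memory']
print(mem.shape, mem_unf.shape)
print(mem.equals(mem_unf))  # True here means the memory data were never filtered
```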

To make sure we're on the same page before proceeding, I've pushed a minimalist batch of updates. They achieve the following:

[1a] filter the presentation trials based on subjects' gaze
    note: I also save out a few extra details about the gaze

[1b] filter the corresponding memory trials

[2] remove duplicate rows from all gaze data

[3] update the conversions from degrees of visual angle to centimeters
   note: I think the conversion factor might currently be flipped? (see the geometry sketch after this list)
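For reference on [3], here's the standard visual-angle geometry I'm assuming (the viewing distance below is a placeholder, not the experiment's actual value):

```python
import numpy as np

def dva_to_cm(size_deg, distance_cm):
    """On-screen size (cm) subtending `size_deg` degrees at `distance_cm`."""
    return 2 * distance_cm * np.tan(np.radians(size_deg) / 2)

def cm_to_dva(size_cm, distance_cm):
    """Inverse: degrees of visual angle subtended by `size_cm` at `distance_cm`."""
    return np.degrees(2 * np.arctan(size_cm / (2 * distance_cm)))

# Placeholder viewing distance, not the experiment's actual value:
print(dva_to_cm(10, distance_cm=60))    # ~10.5 cm
print(cm_to_dva(10.5, distance_cm=60))  # ~10.0 degrees
```

If the code applies the conversion in the wrong direction, the sizes come out systematically wrong, which is what I mean by "flipped".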

Let me know if these make sense or if anything looks off. I think it will be important to get these fundamentals cleared up before I open any further pull requests. Thanks, and happy 2024!

jeremymanning commented 9 months ago

Hey @KirstensGitHub, the issue is that you're using pre-filtered data rather than re-filtering it. The dataset is too large to store on GitHub, so when you run the analysis notebooks (e.g., analysis.ipynb), the get_data function (defined in helpers.py) downloads the dataset to a newly created data folder.

Once you've run the main analyses (to generate the original figures in the paper), you'll see a file, gaze_data.pkl, that gets created in that same data folder. Because the filtering process can take a while to complete, the next time the data are loaded, load_data uses that pickled file rather than re-computing everything. If you don't delete the existing file or change the filename where those pre-computed filtered data are stored (inside load_data, defined in helpers.py), you'll just end up loading the old results, so nothing will change even if you change the filtering parameters.
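Schematically, the loading logic follows a cache-on-disk pattern like this (a sketch of the flow, not the literal helpers.py code; names are illustrative):

```python
import os
import pickle

def load_data(cache_path='data/gaze_data.pkl', **filter_params):
    # If a cached copy exists, load and return it. Note that the
    # filter_params are ignored in this branch -- changing them has
    # no effect until the cache file is deleted or renamed.
    if os.path.exists(cache_path):
        with open(cache_path, 'rb') as f:
            return pickle.load(f)

    data = ...  # the slow filtering step would run here
    with open(cache_path, 'wb') as f:
        pickle.dump(data, f)
    return data
```

Deleting data/gaze_data.pkl (or pointing the loader at a different filename) forces the filtering to re-run.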

If you only want the raw data, without any filtering, you'd need to modify the load_data function.

It's of course always possible that I've messed something up, so feel free to dig further and/or ask other questions! I do think the code is organized pretty nicely, so I'd advise against any substantial changes to the organization or how the code is written.

I also see you've updated the image sizes, which are used to compute the gaze intersections with the stimuli. If the new values are correct, you'll definitely want to re-compute the intersections with those updated sizes. That said, I remember having a few conversations about the image sizes, and we double-checked those numbers before running the analyses last time. I think they also matched our prior submissions. Did you uncover new info there?
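To see why the sizes matter: the intersection computation is essentially a point-in-rectangle test, so the width/height directly determine which gaze samples count as hits. A minimal sketch (the names and coordinate conventions are illustrative, not the actual helpers.py implementation):

```python
def gaze_on_image(gaze_x, gaze_y, img_cx, img_cy, img_w, img_h):
    """True if a gaze sample falls inside an image's bounding box.
    All arguments must share one coordinate system (cm or pixels)."""
    return (abs(gaze_x - img_cx) <= img_w / 2 and
            abs(gaze_y - img_cy) <= img_h / 2)
```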