The goal of today's meeting is to give an overview of two collaborative projects, determine rough timelines for each, and identify who is interested in being part of one or both projects.
Once we identify one or more working groups, we will proceed with regular meetings for these smaller groups.
Project 1: Post-error face-encoding computer task involving (f)MRI/EEG source modeling
Builds on prior work demonstrating changes in post-error visual processing and/or behavior as a function of the time allotted between trials (Buzzell et al., 2017; see also Beatty, Buzzell, et al., 2019)
Modify the task to include faces that vary in emotion; participants must discriminate the emotional expression. The key question is whether the encoding of emotion differs post-error vs. post-correct and, further, whether this interacts with the time allotted.
The long-term goal is to study a similar mechanism in the development of social anxiety, so the paradigm design needs to be suitable for 10–18-year-olds.
Project 2: Post-error face-encoding conversation project involving video and EEG (and potentially EMG and/or eye tracking)
This project can be viewed as a high-risk, high-reward, “real-world” extension of Project 1. The goal is to assess whether emotion encoding of a conversational partner differs after making an error in conversation versus at other, random time points.
Involves, at a minimum, dual-EEG, and high-def cameras. May also include dual eye-tracking and facial EMG
Requires a lot of setup.
Also intended to produce a dense dataset for data mining and novel hypothesis generation.
Agenda
Overview of Project 1 (post-error face-encoding computer task)
Overview of Project 2 (post-error face-encoding conversation project)
Identify one or more working groups for one or both projects
Identify a rough timeline for each project (pending working group membership)
Following the meeting, we will follow up to identify a meeting schedule for the working group(s).
More details and initial notes/plan (pasted from email)
(1) Investigation of how face encoding changes following errors (study leveraging (f)MRI-constrained source modeling of EEG)
This project builds on a prior study published by George, showing that after detecting an error, there is a brief reduction in stimulus-evoked activity on the following trial when minimal time is allotted, but not when more time is allotted (Buzzell et al., 2017; also: Beatty, Buzzell et al., 2019)
This study will use a paradigm similar to that employed in Buzzell et al. (2017), but modified to allow for studying the encoding of faces that vary in emotional valence. Note: the long-term goal of this project is ultimately to study how this mechanism may differ in adolescents with social anxiety, hence the use of faces. The initial study will not focus on individual differences (though we will collect anxiety measures) and will just use undergraduates.
Two-handed alternative forced-choice task with ample time for making the perceptual decision (i.e., not using a short RT deadline)
On each trial, two faces are presented, and the participant has to judge whether the FIRST STIMULUS was angrier/happier.
a. The two stimuli are presented relatively close in time (~500 ms apart?), but this inter-stimulus interval (ISI; the time between face 1 and face 2) is jittered (e.g., 400–600 ms; uniform distribution).
b. The judgement is focused on the first stimulus, as this is the one that will be impacted by error/correct responses on the prior trial
Fabian, I may have the exact details here wrong. Can you please confirm that we are indeed presenting two stimuli on each trial, and that it is OK to jitter the amount of time between the stimuli? Also, is it OK to always have the judgement be on the first stimulus? I remember discussing this in detail, and coming to a solution that would satisfy the needs of the encoding modeling framework while also satisfying the needs of the post-error analyses.
The response-stimulus interval (RSI; the amount of time between the response on the prior trial and the first stimulus on the next trial) will vary between 200 ms and 1200 ms; we will (post hoc) bin the trials into those with short or long RSIs and test whether an error/correct response on trial n interacts with the RSI preceding trial n+1 to predict differential encoding of the first stimulus on trial n+1 (a minimal timing sketch appears below, after the scan list).
a) We will further test whether the magnitude of the error signal on trial n (ERN or theta burst) moderates the relation between the RSI and stimulus encoding on trial n+1
b) We will further perform source modeling of the EEG data to extract source-localized activity from the fusiform (and related areas) on trial n+1 (a rough sketch of this source-extraction step also appears below, after the scan list). Note: to improve the fidelity of the source localization, we will collect high-density EEG (128 channels), digitize the electrode locations, and also have the same participants complete at least a short scanning session to obtain:
Whole-head T1 (must have cerebellum and neck; no chopping off top of head either)
Whole-head T2 (must have cerebellum and neck; no chopping off top of head either)
Functional localizer for faces
If possible (but definitely not crucial), it would be great to also obtain:
Diffusion data
Expanded set of localizers to look at lower-level visual areas as well
A short functional task that is similar to the task being performed during EEG
A short functional task that assesses a more “classic” cognitive control task (e.g. Flanker)
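To make the timing parameters above concrete, here is a minimal sketch (plain Python/NumPy, not the actual PsychoPy code) of how the jittered ISI and RSI could be drawn and how trials could be binned post hoc; all values are the placeholder ranges from the notes above and still need to be confirmed with Fabian.

```python
# Minimal timing sketch: placeholder values from the notes above, to be confirmed.
import numpy as np

rng = np.random.default_rng(0)

N_TRIALS = 400                 # hypothetical trial count
ISI_RANGE = (0.400, 0.600)     # face 1 -> face 2 interval (s), uniform jitter
RSI_RANGE = (0.200, 1.200)     # prior response -> face 1 interval (s), uniform jitter

# Draw jittered intervals for every trial
isi = rng.uniform(*ISI_RANGE, size=N_TRIALS)
rsi = rng.uniform(*RSI_RANGE, size=N_TRIALS)

# Post-hoc binning into short vs. long RSI (median split here; the actual cut
# points are an open design decision)
rsi_bin = np.where(rsi <= np.median(rsi), "short", "long")

for t in range(3):  # print the first few trials as a sanity check
    print(f"trial {t}: RSI = {rsi[t] * 1000:.0f} ms, ISI = {isi[t] * 1000:.0f} ms, bin = {rsi_bin[t]}")
```

In the actual PsychoPy task, these drawn intervals would simply be used as the wait times between the prior response and face 1 and between face 1 and face 2.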
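Likewise, here is a rough sketch of the planned (f)MRI-constrained source step (extracting fusiform time courses on trial n+1), assuming MNE-Python and a FreeSurfer reconstruction of each participant's whole-head T1. The file names, the inverse method (dSPM here), and the 'aparc' fusiform labels are placeholders rather than decisions, and the fMRI face-localizer constraint is not shown.

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs

# Placeholders: a cleaned 128-channel epochs file (time-locked to face 1 on
# trial n+1) and a FreeSurfer subjects directory built from the whole-head T1s.
epochs = mne.read_epochs("sub-01_task-faces_epo.fif")
subject, subjects_dir = "sub-01", "/path/to/freesurfer_subjects"

# Forward model from the individual anatomy (source space + BEM + digitized montage)
src = mne.setup_source_space(subject, spacing="oct6", subjects_dir=subjects_dir)
bem = mne.make_bem_solution(mne.make_bem_model(subject, subjects_dir=subjects_dir))
fwd = mne.make_forward_solution(epochs.info, trans="sub-01-trans.fif", src=src, bem=bem)

# Inverse solution (noise covariance from a pre-stimulus baseline, as one option)
cov = mne.compute_covariance(epochs, tmax=0.0)
inv = make_inverse_operator(epochs.info, fwd, cov)
stcs = apply_inverse_epochs(epochs, inv, lambda2=1.0 / 9.0, method="dSPM")

# Extract single-trial time courses from fusiform (and related) labels
labels = [l for l in mne.read_labels_from_annot(subject, parc="aparc", subjects_dir=subjects_dir)
          if "fusiform" in l.name]
fusiform_tcs = [stc.extract_label_time_course(labels, src, mode="mean_flip") for stc in stcs]
```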
(2) Investigation of changes in face encoding in response to top-down signals, within naturalistic environments
Conceptually, this project is a “long-term” high-risk/high-reward extension of the work that both Fabian and George are currently pursuing. The goal is to study dynamic changes in cognitive control and socially relevant sensory encoding (e.g., facial expressions) within dynamic, free-flowing, “real-world” conversations between two or more participants. The project is partially hypothesis-driven but also seeks to produce a dense dataset for data mining and novel hypothesis generation.
One of the central questions of interest is: within a natural conversation, how does an actor’s perception/encoding of the social behavior of their partner change as a function of error detection on the part of the actor? That is, if I feel like I just made a mistake in speaking (internal feedback; e.g., a self-detected error), or if I perceive information indicating that I just made a mistake in speaking (external feedback; e.g., a confused look on the part of the partner), how will this change the way I perceive the behavior of the person I am talking to? Will I be more likely to encode the ongoing neutral facial expression of my partner as negative? In sum, this study aims to address questions conceptually similar to those being investigated in the EEG/MRI source study, but within a naturalistic environment (conversation).
The basic setup for the study is 100% subject to change. However, we are currently basing the paradigm on two online studies that we are running.
The basic set up for the experiment is as follows:
Two participants seated at a table, facing each other. Phones are taken away beforehand.
Each participant is equipped with a Pupil Labs eye tracker, facial EMG, and 64-channel EEG
Two 4K 60 fps cameras are positioned such that one faces each participant
The table and backdrop for each participant are green screens; direct lighting is positioned to illuminate the faces
Participants engage in 3 periods of social interaction:
Completely unstructured initial encounter
Participants are fully set up with the equipment, and then the experimenter says that they need to finish setting something up in the other room and will be back in a few minutes; the two should “get to know each other” in the meantime. Left alone for 5 mins of unstructured conversation.
Ice-breaker task
Participants are each presented with a series of questions to ask the other person to help get to know each other. Participants alternate asking each other questions. Takes 10-15 mins
Note: we are currently collecting data for a similar task, but with a confederate
Oral test task
We should include a third task/structured interaction; the exact form is not settled, but it could be something where each participant takes turns being presented with questions/problems from the other participant and has to answer them. For example, participant A is given a list of word problems and their answers; they read one to participant B and then tell them whether they are correct or not; participant A then reads the next, and so on, until all questions have been read and attempted. Then the roles switch.
The reasoning behind this task/interaction is that it presents a social interaction with “discrete trials” and explicit correct/incorrect external feedback.
The key issue is how to identify points in time when either a self-detected error occurs, or error feedback (from the partner) is presented.
The oral test task provides a relatively easy set of “trials” to analyze. We could time-lock EEG responses to the onset of feedback from the partner to extract feedback-related error signals and then test whether these modulate subsequent encoding.
The unstructured and icebreaker tasks are more difficult to analyze in terms of identifying moments when errors occur. However, there are multiple approaches that we could pursue:
Both participants perform a standard cognitive control task (e.g., flanker) before the social interaction tasks. Then, after the experiment is over, we either extract an ICA component capturing errors in the flanker task, or build a classifier from these same data, and apply this to the EEG recorded during the icebreaker or unstructured tasks. Essentially, we are using a template of what errors look like in a standard cognitive control task to identify when error-like activity is present during the unstructured tasks. If successful (we are pursuing pilot studies to validate this fall/spring), we could then assess whether face encoding following these error events differs from encoding at other, random points in the interaction.
Note that this approach requires first validating that the method works; toward this end, we are collecting data this fall/spring that will allow us to validate it. A rough sketch of the classifier idea follows.
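As a rough illustration of the classifier variant of this idea (not a validated pipeline), the sketch below trains a simple error-vs-correct classifier on response-locked flanker epochs and then scores response-locked epochs cut from the conversation; the file names, channels, time windows, and probability threshold are all assumptions to be settled during the fall/spring pilot work.

```python
# Sketch only: "error template" from flanker applied to conversation EEG.
import numpy as np
import mne
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: response-locked flanker epochs labeled error/correct, and
# candidate response-locked epochs cut from the unstructured conversation.
flanker = mne.read_epochs("sub-01_flanker_epo.fif")
conversation = mne.read_epochs("sub-01_conversation_epo.fif")

def erp_features(epochs):
    """Mean fronto-central amplitude in an ERN-like (0-100 ms) and Pe-like
    (200-400 ms) post-response window; purely illustrative feature choices."""
    picks = mne.pick_channels(epochs.ch_names, ["Fz", "FCz", "Cz"])
    data = epochs.get_data(picks=picks)
    t = epochs.time_as_index
    ern = data[:, :, t(0.0)[0]:t(0.1)[0]].mean(axis=(1, 2))
    pe = data[:, :, t(0.2)[0]:t(0.4)[0]].mean(axis=(1, 2))
    return np.column_stack([ern, pe])

X = erp_features(flanker)
y = (flanker.events[:, 2] == flanker.event_id["error"]).astype(int)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
print("flanker cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Score conversation epochs with the flanker-trained classifier; moments scoring
# above threshold become candidate "error-like" events for post-error analyses.
clf.fit(X, y)
error_prob = clf.predict_proba(erp_features(conversation))[:, 1]
candidate_errors = np.where(error_prob > 0.8)[0]  # threshold is an assumption
```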
Manually “micro-code” the social interaction videos
We would develop a coding protocol to identify points in time during the video recording when we can reasonably assume that either the participant made an error, or, feedback from the social partner would be reasonably interpreted as negative feedback. For example, evidence of an error would be that the participant stuttered, mispronounced a word, grimaced/looked confused, paused for longer than appropriate for the conversation flow, etc. Evidence of negative feedback from the social partner would be things like a confused or disapproving facial expression or hand gesture or an explicit statement that they disagree, etc.
Two independent coders would go through the video for each task and identify the timestamps of error/negative feedback events. The two independent coding sheets would be reconciled to produce a final code. Then, these timestamps would be used to insert markers into the ongoing EEG to look for the presence of error-related data, as well as to look at the post-error period for encoding changes.
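As a small sketch of that marker-insertion step (file and column names are hypothetical), the reconciled coding sheet could be converted to MNE annotations attached to the continuous EEG, so the coded error/negative-feedback moments can be epoched like any other event:

```python
# Sketch: convert micro-coded timestamps into EEG event markers (hypothetical file names).
import pandas as pd
import mne

# Continuous EEG for one participant plus the reconciled coding sheet, with one
# row per coded event: onset in seconds from recording start and a label such as
# "self_error" or "negative_feedback".
raw = mne.io.read_raw_fif("dyad-01_participantA_raw.fif", preload=True)
codes = pd.read_csv("dyad-01_participantA_codes.csv")  # columns: onset_s, label

# Insert the coded timestamps as annotations on the ongoing EEG
raw.set_annotations(mne.Annotations(onset=codes["onset_s"].to_numpy(),
                                    duration=[0.0] * len(codes),
                                    description=codes["label"].tolist()))

# Epoch around the coded events to look for error-related activity and at the
# post-error period for encoding changes
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id, tmin=-0.5, tmax=1.5, baseline=None)
```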
Rough plan/timeline for the Error-related Face Encoding study
The goal is to target an NIMH R21 submission NO LATER THAN June 16.
In addition to planning for the grant submission, we also want to plan on being able to publish an initial paper in the absence of grant funding. This will be accomplished by leveraging participants for whom Fabian has already collected MRI data (and more that he plans to collect?), as well as by using the money George has set aside for collecting more fMRI data for this study.
Fabian already has 12 people with at least T1s (T2s as well? OK if not, but better if you do!). These were collected in the summer, so the immediate goal is to get the EEG study up and running so that we can call these people back and run them through it. These participants would not only serve as pilot participants for the study as a whole and for the initial grant submission, but, when combined with further data collection, could be part of the first paper.
Fabian, can you please confirm if you are indeed already planning to collect more T1 and localizer data? If so, what is the timeline and final n for this collection?
Depending on what your total n is for this ongoing data collection, it might be possible to have a full paper simply by re-running these participants through EEG and adding only a few additional participants. That would then allow us to use the MRI money I have set aside for a further follow-up and/or a related collaborative project. Totally open here on what to do; I just want to make sure we can maximize data collection.
Fabian, can you please also provide a detailed list of exactly what you are collecting on this sample? The primary thing to know is whether you are also getting T2s, and to ensure that the T1s (and T2s?) you are getting are full head (including the NECK, with no cropping of the top of the head). However, I would also be curious what else you are getting (neural, behavioral, or survey) on this sample, as we might be able to fold some of that data into the new study as well.
In general, the plan is to first collect EEG data from as many as possible of the participants that Fabian has already scanned or will scan. We will analyze these data to determine whether further changes are needed for the larger study and to serve as pilot data for the grant submission. We will then proceed with running the larger study. Below is a proposed timeline, though it is very much a first guess. Note that the timeline is based only on the 12 participants you already have data for; it does not include additional participants or new (f)MRI data collection.
Rough timeline (need to update)
Sep. 15: Finish programming PsychoPy experiment (Fabian/Emily, with input from George on parameters)
Sep. 30: Add EEG markers and internally pilot the PsychoPy-EEG experiment (Kia/Olivia/George)
Oct. 1: Start re-contacting participants Fabian already has MRI data on and run them through the PsychoPy-EEG experiment (Kia/Olivia, in coordination with Emily)
Oct. 1 – Nov. 1: Data collection for participants who already have MRI (Kia/Olivia)
Note: data analysis will begin after the first participant and continue throughout data collection
Oct. 15 – Jan. 30: Scalp- and source-level analyses of EEG data (Kia/Olivia/George)
Oct. 15 – Jan. 30: Modeling of changes in behavior post error/correct (while waiting for source data) (Emily/Fabian)
Note: we will work to get data from at least one source-localized participant to you so that you can begin work on the encoding model; the others may take longer to provide
Nov. 15 – Feb. 15: Encoding modeling of source-localized data from the fusiform (Emily/Fabian)
Feb. 15 – March 1: Statistical tests of the full model (effects of errors and/or error signals on next-trial phase encoding in source-localized data)
March – April: Write R21 submission
April – May: Finalize submission
Expected deadline: R21 submission by June 16
Rough plan/timeline for the naturalistic environment study (need to update)
In addition to the issues of analysis methods (which I think are at least partially solvable, and would be fun to solve), this project will require a fair bit of time for technical setup, syncing clocks, logistics of the social interactions, etc. In sum, I think this project will benefit from clarifying the plan a bit more and then simply setting things up and playing around before determining the final protocol, timeline, and analysis plans.
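On the clock-syncing point specifically, one option (purely a sketch, assuming Lab Streaming Layer works with our amplifiers and eye trackers) is to broadcast shared event markers from a single control PC so that the dual-EEG, eye-tracking, and video streams can be aligned offline; the stream name and markers below are placeholders.

```python
# Sketch of a shared marker stream via Lab Streaming Layer (pylsl); assumes each
# acquisition device either records through LSL or can log these markers.
import time
from pylsl import StreamInfo, StreamOutlet, local_clock

# One irregular-rate string "Markers" stream broadcast on the lab network
info = StreamInfo(name="SyncMarkers", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string", source_id="dyad-sync-01")
outlet = StreamOutlet(info)

# Example: mark the start of each interaction period so all recordings can be
# aligned to a common LSL clock offline
for period in ["unstructured_start", "icebreaker_start", "oral_test_start"]:
    outlet.push_sample([period], timestamp=local_clock())
    time.sleep(1.0)  # placeholder; in practice these are sent when each period begins
```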
I propose that we not worry about a specific timeline for this project just yet (but plan to nail one down in the coming month or two). For now, the key things to do are for our labs to start having joint meetings to plan the project out further and to start setting up the equipment. We also need to submit an IRB protocol.
September: Joint lab meeting between the Soto and Buzzell labs sometime in the coming month for further brainstorming; this would include all members of both labs. Identify exactly who will be on the initial team of students involved in the project and schedule a working group meeting every 2 weeks for the fall.
October – November: Write a generic IRB protocol. Start setting up equipment and troubleshooting/figuring out logistics. Start to refine the protocol(s) and proposed methods.
By Oct. 15: Submit IRB
By Dec. 15: IRB approved. Most/all equipment set up and working. Basic protocol delineated. Analysis plans more refined.
Spring: Light data collection and heavy focus on analysis methods/refinement. Revise protocol as necessary.
Summer: Grant writing. Further refinement of methods; continued light data collection.
Fall 2022: Grant submission. Protocols/methods refined. Begin larger-scale data collection and continue to update the grant with new data/results/methods.
☝️ This is the document shown on-screen at the 09/17 joint lab meeting.