cpp-lln-lab / localizer_auditory_motion


add template of "a method section" describing each localizer #31

Open Remi-Gau opened 3 years ago

Remi-Gau commented 3 years ago

From Moh's: https://sci-hub.st/downloads/2020-07-29/83/rezk2020.pdf#page=13&zoom=100,0,133

To provide an externalized and ecological sensation of auditory motion and accurate motion localization in the MRI, we recorded the
auditory stimuli using in-ear binaural microphones [65] in a semi-anechoic room for each subject individually prior to the scanning
session (Zoom H4n digital wave recorder – 200 m, with microphones Master Series - Sound Professionals-TFB-2). Participants
were positioned at the center of the sound setup with their head on a chin-rest, facing one vertical and one horizontal semi-circular
sound bar of 31 speakers each. The sound bars had a radius of 1.1 m and provided a fixed distance (1.1 m) between each speaker
and the participant’s head (Figure 4B). The horizontal bar was positioned at the subject’s ear level, and the vertical bar was aligned with the participant’s mid-sagittal plane. Pink noise (duration 1.2 s, fade in/out of 50 ms each) was divided into
31 equal segments that were replayed sequentially through the 31 corresponding speakers to create a smooth sensation of motion (no gap or
overlap between the segments). The sound level was kept constant at 65 dB at the subject’s head position. Four translational
motion directions were recorded [upward, downward, rightward, and leftward]. Motion speed was 2 m/s and covered 120° of the
subject’s peripheral space. For the target detection task, similar motion stimuli with faster speed (4 m/s) and shorter duration (0.6
s) were recorded. An additional static condition was recorded at the central speaker, located at the intersection of the horizontal
and vertical planes of the sound bars. Static events had a duration of 1.2 s for the normal event, and 0.6 s for the target event.
The recordings from each participant were replayed inside the MRI scanner for the auditory motion localizer and the directional motion decoding experiment. By using such a sound system and in-ear recordings in each subject, the auditory stimuli are convolved with
each individual’s own pinna and head-related transfer function, producing a vivid auditory perception of external space.
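For the template, the segmentation step above can be sketched in a few lines. This is a minimal illustration only: the sample rate, seed, and variable names are assumptions, not taken from the paper, and the actual stimuli were per-subject binaural recordings rather than synthesized noise.

```python
import numpy as np

SR = 44_100          # sample rate in Hz (assumed; not stated in the paper)
DUR = 1.2            # stimulus duration in s, from the method section
N_SPEAKERS = 31      # speakers per sound bar
FADE = 0.05          # 50 ms fade in/out

def pink_noise(n, rng):
    # Shape white noise to a 1/f power spectrum in the frequency domain.
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                   # avoid division by zero at DC
    spec /= np.sqrt(f)            # 1/f power -> 1/sqrt(f) amplitude
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))  # normalize to [-1, 1]

rng = np.random.default_rng(0)    # seed chosen arbitrarily
n = int(SR * DUR)
noise = pink_noise(n, rng)

# 50 ms linear fade in/out applied to the whole 1.2 s stimulus.
ramp = np.linspace(0.0, 1.0, int(SR * FADE))
noise[: ramp.size] *= ramp
noise[-ramp.size :] *= ramp[::-1]

# Split into 31 equal, contiguous segments: segment k is routed to
# speaker k, so sequential playback has no gap or overlap.
seg_len = n // N_SPEAKERS
segments = [noise[k * seg_len : (k + 1) * seg_len] for k in range(N_SPEAKERS)]
```

Because the segments are contiguous slices of one waveform, concatenating them reproduces the original stimulus (up to the few samples truncated by integer division), which is what guarantees the gapless motion percept.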
A localizer was implemented to define regions responding preferentially to auditory motion sounds. Previous studies demonstrated
that regions within the middle temporal cortex and PT are selectively recruited during auditory motion processing compared with
static sounds [10, 22, 36, 68–70]. We used an experimental design matching the one implemented for the visual motion localizer.
The participants were blindfolded during the auditory motion localizer. The run started with an initial 5 s of silence and ended with
11 s of silence. The localizer run had 13 blocks of auditory motion and 13 blocks of auditory static conditions. Auditory blocks
were separated by an interval of 6 s. Each block had 12 events of 1.2 s, and an ISI of 0.1 s. In the motion blocks, auditory motion
stimuli were presented in one of four directions [upward, downward, rightward, leftward] (Figure S2). Each motion block had 3 repetitions of each motion direction. The presentation order of the auditory motion directions within each block was randomized and balanced across blocks. The static blocks had 12 static events separated by the 0.1 s inter-trial interval. Participants were asked to detect target events that were faster in speed and shorter in duration (0.6 s) [8]. The number of targets ranged between 1 and 3 per block and was balanced across conditions. The duration of each block varied depending on the number of targets present (range 14.4–15.6 s). Auditory stimuli were delivered through SereneSound MR-compatible in-ear headphones inside the scanner. The participants performed the task while the fMRI data were acquired, with an accuracy (mean ± SD) of 86.96% ± 14.71%.
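The per-block event structure described above (12 events per motion block, 3 repetitions of each of the 4 directions, order randomized) can be sketched as follows. This is a hedged illustration for the template: the direction labels and seed are placeholders, and a plain per-block shuffle like this does not by itself enforce the across-block balancing the text describes, which would need an additional constraint.

```python
import random

DIRECTIONS = ["upward", "downward", "rightward", "leftward"]
N_MOTION_BLOCKS = 13  # the localizer run also has 13 static blocks
REPS = 3              # each direction appears 3 times per 12-event block

def motion_block(rng):
    """One motion block: 3 repetitions of each of the 4 directions,
    presentation order randomized within the block."""
    events = DIRECTIONS * REPS
    rng.shuffle(events)
    return events

rng = random.Random(0)  # seed chosen arbitrarily for reproducibility
blocks = [motion_block(rng) for _ in range(N_MOTION_BLOCKS)]
```

Each generated block then contains exactly 12 events with every direction occurring 3 times, matching the design in the quoted method section.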