Open larsoner opened 8 years ago
Chime in with ideas and I'll keep the issue up to date (and hopefully implement for aud_att_int)
Would prefer recordings of spoken "left," "right," and "center" for auditory side test.
Ross Maddox, Ph.D. Postdoctoral Fellow Institute for Learning & Brain Sciences University of Washington phone: 206-685-4662 http://faculty.washington.edu/rkmaddox/
Then poll Yes/No if they actually came from left/center/right?
Yep.
I am making those sounds now.
Great, thanks. Any other checks you can think of that you've done before or might want going forward?
Do we need an audio record check (at least get an acknowledgment somewhere that verbal response is being recorded) now that we have this for head/foot?
Thanks for implementing these things… it’s probably good to make it available for the MEG center as well...
Cheers KC
Adrian KC Lee, ScD Associate Professor Department of Speech and Hearing Sciences Institute for Learning & Brain Sciences (I-LABS) Portage Bay Bldg. Room 206 University of Washington, Box 357988 Seattle, WA 98195-7988
Phone: (206) 616-0102; Fax: (206) 221-6472 Email: akclee@uw.edu Web: www.akclee.com
Good call, I'll add something sensible for a verbal response check.
Don't forget "blink three times".
There are some things we should spend a couple of minutes testing in most experiments to save ourselves some pain. These can be made optional via kwarg, but here are the ones I can think of quickly:
`[{1: 1, 2: 2, 3: 3}] * 2`; for Ross's stuff it might be `[{1: 'Yes', 2: 'No'}] * 2`. Anything else? cc @drammock @rkmaddox @akclee
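As a minimal sketch of the idea above (plain Python, not the actual expyfun API; `label_presses` is a hypothetical helper): a list of per-trial response maps like `[{1: 'Yes', 2: 'No'}] * 2` could translate raw button presses into labeled answers, one map per check trial.

```python
# Hypothetical sketch: translate raw button presses into labels using
# one response map per trial, e.g. [{1: 'Yes', 2: 'No'}] * 2.

def label_presses(response_maps, presses):
    """Map one raw press per trial through that trial's response map.

    Raises KeyError if a press is not a valid key for its trial's map,
    which is itself a useful sanity check on the button box.
    """
    if len(response_maps) != len(presses):
        raise ValueError('need exactly one press per trial')
    return [rmap[press] for rmap, press in zip(response_maps, presses)]

# Two Yes/No check trials for the auditory side test
maps = [{1: 'Yes', 2: 'No'}] * 2
print(label_presses(maps, [1, 2]))  # → ['Yes', 'No']
```

The same structure generalizes to the three-button case (`{1: 1, 2: 2, 3: 3}`) since each trial just carries its own press-to-label mapping.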