isetbio / JWLOrientedGabor

Simulation of an experiment by Marisa Carrasco and Jon Winawer measuring orientation discrimination thresholds of an achromatic, peripheral Gabor

Stimulus parameters #1

Closed · jamesgolden1 closed this 8 years ago

jamesgolden1 commented 8 years ago

Jonathan, what do you have in mind for the stimulus parameters? My sense from our call is that the basic idea is a Gabor or grating patch at a range of orientations, and you're interested in the effects of stimulus size/spatial pooling over receptors, eccentricity and polar angle.

Do the stimuli change temporally, or is the only change over time due to eye movements? We have an implementation of both together, thanks to @xnieamo, so that is possible for us to do as well.

Also tagging @wandell and @hjiang36 just in case.

JWinawer commented 8 years ago

Hi James,

The stimuli are static (they come on as a temporal step and go off as a temporal step, within the limits of the CRT display). The stimuli are achromatic Gabor patches, tilted either 20 deg to the left of vertical or 20 deg to the right of vertical. We usually test them at 6 deg eccentricity, above, below, to the left, or to the right of the fovea. I think they are about 3 cpd, windowed by a Gaussian with a standard deviation of about 1 deg. I can look all this up, but for starters we probably don't have to get these quantitative values correct - just the general experimental pipeline. We have eye position at 1 ms sampling.

Jon
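
A minimal ISETBIO sketch of this stimulus, using the 'harmonic' scene type. The parameter values below are placeholders taken from the description above, not the final experimental values, and the field names follow ISET's imageHarmonic conventions:

```matlab
% Achromatic Gabor, tilted 20 deg from vertical. All values are illustrative.
params.freq      = 12;                  % cycles per image (12 cyc / 4 deg = 3 cpd)
params.contrast  = 1;                   % Michelson contrast
params.ang       = (90 - 20) * pi/180;  % orientation in radians, 20 deg from vertical
params.ph        = 0;                   % spatial phase
params.row       = 256;                 % image rows
params.col       = 256;                 % image columns
params.GaborFlag = 0.25;                % Gaussian window sigma, as fraction of image size

scene = sceneCreate('harmonic', params);
scene = sceneSet(scene, 'fov', 4);      % 4 deg field of view -> window sigma of ~1 deg
```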

jamesgolden1 commented 8 years ago

Jon, thanks - I added a new tutorial that is a rough start. Among the changes that have to be made:

  1. Something strange is happening with the phase of the Gabor between orientations - I'm not sure why.
  2. No eye movements added yet! I tried but hit an error, so I set it aside for now.
  3. Need to specify eccentricity - this is in some commented-out code that can be incorporated fairly easily.

hjiang36 commented 8 years ago

I added two lines of code to t_orientedGaborDiscrimination to incorporate some random eye movement.

The eye movements are generated from a random Gaussian in this tutorial, and they can be replaced with real measurement data. If a simulated eye-movement path is preferred, you can have a look at eyemoveInit.
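
A sketch of what those two lines might look like, assuming the sensor-based ISETBIO API of that era and that the sensor stores its eye-movement trace under a 'positions' field; the sigma and sample count are illustrative:

```matlab
% Independent Gaussian jitter, one (x,y) position per temporal sample,
% in units of cone positions. nSamples and sigmaPix are illustrative.
nSamples  = 100;
sigmaPix  = 1;
positions = round(sigmaPix * randn(nSamples, 2));
sensor    = sensorSet(sensor, 'positions', positions);
```

For measured data, the same 'positions' field could be filled with the 1 ms eye-tracking samples instead.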

jamesgolden1 commented 8 years ago

I made a couple more changes and I think we have the basic framework in place. I made the two orientations 90 degrees apart and the Gabor patches quite large just to check that the classifier can separate the two, and it looks alright for now.
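
As a sanity check on separability, a minimal two-class linear classifier over the cone responses might look like the sketch below. fitcsvm and crossval are standard MATLAB Statistics and Machine Learning Toolbox calls; the variable names and data layout (one vectorized trial per row) are assumptions, not the tutorial's actual code:

```matlab
% respA, respB: [nTrials x nCones] response matrices for the two
% orientations (hypothetical names; one vectorized trial per row).
X = [respA; respB];
y = [ones(size(respA, 1), 1); -ones(size(respB, 1), 1)];

svm = fitcsvm(X, y, 'KernelFunction', 'linear', 'Standardize', true);
cv  = crossval(svm, 'KFold', 10);   % 10-fold cross-validation
acc = 1 - kfoldLoss(cv);            % fraction of trials classified correctly
fprintf('Cross-validated accuracy: %.1f%%\n', 100 * acc);
```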

One other thing to consider is which of the two cone response models we should use - for Fred's experiment they were working with low-luminance stimuli, and they used the linear model, but he also has a model for higher luminances that we could use.

Jonathan, let me know what you think, and I could do a google hangout any time if you'd like.

JWinawer commented 8 years ago

Thanks! (Also to Haomiao for implementing the eye movements.)

Cone models for higher luminance would be more appropriate. I will be occupied for most of the afternoon, but will run the code and get back to you later today.

Jon

jamesgolden1 commented 8 years ago

A few more updates are in place - I had forgotten to remove the temporal drift of the Gabor, but that is fixed now.

[Figure: orientation_discrimination_lmspooled]

[Figure: orientation_discrim_threshold1]

JWinawer commented 8 years ago

Hi all,

I made several updates to this code, including:

1. Reducing the experiment to a single location and a single contrast value (retaining two orientations).
2. Building the sensor prior to the loop over trials, so that the identical sensor is used on all trials.
3. Simulating cone responses with noise (rather than a noiseless mean with noise then added for repeated trials).
4. Adding different eye-movement traces to each trial (which explains 3 above).
5. Not combining cone signals before classification.

A sketch of the resulting trial loop is below.
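A minimal sketch of that loop, assuming the sensor-based API; the object names, sample counts, and the 'positions' field are illustrative:

```matlab
% Build optics and sensor once, before the trial loop (update 2), so
% the identical sensor is used on every trial.
oi     = oiCompute(oiCreate('human'), scene);
sensor = sensorCreate('human');
sensor = sensorSetSizeToFOV(sensor, sceneGet(scene, 'fov'), scene, oi);

nTrials  = 100;
nSamples = 100;                      % eye-movement samples per trial
for t = 1:nTrials
    % Fresh Gaussian eye-movement trace on every trial (update 4).
    sensor = sensorSet(sensor, 'positions', round(randn(nSamples, 2)));

    % Cone responses computed with photon noise on each trial, rather
    % than a noiseless mean with noise added afterward (update 3).
    sensor = sensorCompute(sensor, oi);

    % Keep the full vector of cone signals - no pooling across cones
    % before classification (update 5).
    v = sensorGet(sensor, 'volts');
    resp(t, :) = v(:)';              % one vectorized trial per row
end
```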

One question: Is it possible to ensure large padding of the scene so that even with eye movements, the sensor FOV remains within the scene FOV?

Thanks, Jon

JWinawer commented 8 years ago

Never mind the previous question ("Is it possible to ensure large padding of the scene so that even with eye movements, the sensor FOV remains within the scene FOV?"). The answer seems to be straightforward: the scene FOV and the sensor FOV can apparently be set independently.
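
For example (a sketch; the padding factor of 2 is arbitrary):

```matlab
% Make the scene comfortably larger than the sensor so eye movements
% cannot carry the sensor's field of view outside the scene.
sensorFov = sensorGet(sensor, 'fov', scene, oi);
scene     = sceneSet(scene, 'fov', 2 * sensorFov);
```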

jamesgolden1 commented 8 years ago

The updates look good. I also added a block of code that computes a spatially pooled RGC response - it is completely commented out right now, as a precaution because you must switch to the rgc branch of isetbio in order for it to run. I created a new type of rgc object, "rgcPool", which only computes a spatial convolution. I thought I would give this a shot, but we can also just directly program the convolution as an alternative.

JWinawer commented 8 years ago

Thanks. Can you please confirm that this code was pushed? (I do not see it in the repository).

-Jon

JWinawer commented 8 years ago

Ah, never mind. I see your commits. Thanks. I will close this issue and look at your additions using the rgc branch of ISETBIO.

JWinawer commented 8 years ago

Thanks for the help, James. In the end I went with a simple 2D center-surround convolution of cone outputs as a quick and dirty RGC calculation, both for computational efficiency and because the rgc branch of isetbio is still in development. The project worked for my purposes, which were to demonstrate by example that classification accuracy on an orientation discrimination task, under some set of parameters, depends on the convergence of cones onto RGCs. I cleared out clutter in the folder and saved out a published version of the final script that I used. Thanks again for the help.

-Jon
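
That center-surround step might look something like the difference-of-Gaussians convolution below. This is a sketch, not the published script: coneMap is a hypothetical 2D array of cone outputs, and the kernel sizes and surround weight are illustrative (fspecial requires the Image Processing Toolbox):

```matlab
% Difference-of-Gaussians (center-surround) kernel applied to a 2D map
% of cone outputs. All numeric parameters are illustrative.
center   = fspecial('gaussian', 15, 1);   % narrow excitatory center
surround = fspecial('gaussian', 15, 3);   % broad inhibitory surround
dog      = center - 0.9 * surround;       % assumed surround weight

rgcResp  = conv2(coneMap, dog, 'same');   % spatially pooled RGC response
```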
