ioam / topographica

A general-purpose neural simulator focusing on topographic maps.
topographica.org
BSD 3-Clause "New" or "Revised" License

How does the density of the Retina (size 24) change to a bigger matrix (size 78)? #610

Closed youngjt closed 9 years ago

youngjt commented 9 years ago

We find that the densities of the Retina, LGN, and V1 are 24, 24, and 47, which are set in the gcal.ty model.

But we find that the matrix sizes of the Retina, LGN, and V1 are 78, 60, and 47, which we obtained from the HSVBitmap class.

Our question is: how does the density of the Retina (size 24) change to a bigger matrix (size 78)?

philippjfr commented 9 years ago

Densities of Topographica Sheets are defined per sheet coordinate. This means that if you specify a sheet with area 1.0, the shape of that sheet's matrix will equal the density.

However, for various reasons, largely to do with edge effects, a V1 of area 1.0 requires the LGN sheets it samples from to be slightly larger; in this case the LGN has an area of 2.5. An area of 2.5 at density 24 gives you a 60x60 matrix. Similarly, since the LGN samples its input from the Retina, you need even more edge buffering; in this case that buffering results in a Retina of area 3.25, and again scaling that by the density gives us a 78x78 matrix.

Have a look at our user manual here, which gives a good introduction to SheetCoordinateSystems.
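As a quick sketch of the arithmetic (plain Python; `matrix_side` is a hypothetical helper for illustration, not Topographica's actual API):

```python
# Illustrates the area-times-density rule described above.
# "matrix_side" is a hypothetical helper, not part of Topographica.
def matrix_side(area, density):
    """Side length of a sheet's matrix: its area (in sheet
    coordinates) scaled by its linear density."""
    return int(round(area * density))

# Values from gcal.ty as discussed in this thread:
sheets = {
    "V1":     (1.0,  47),   # area 1.0 at density 47 -> 47x47
    "LGN":    (2.5,  24),   # extra edge buffering    -> 60x60
    "Retina": (3.25, 24),   # buffering for LGN too   -> 78x78
}

for name, (area, density) in sheets.items():
    side = matrix_side(area, density)
    print(f"{name}: {side}x{side}")
```

Running this prints the three matrix sizes reported by HSVBitmap: 47x47, 60x60, and 78x78.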

youngjt commented 9 years ago

Thanks for your answer, I've got it.

youngjt commented 9 years ago

Another question has just occurred to me.

I think the gcal.ty model looks like this: model

The size of V1 determines the size of LGNON, and the size of LGNON determines the size of the Retina. So the size of V1 plays a key role.

In the gcal neural network, I guess the V1 layer has 47 neurons, the LGNON layer has 60 neurons, and the Retina layer has 78 neurons.

Is that true?

Thanks.

jbednar commented 9 years ago

Your diagram looks good. The next step is to understand that the number of neurons isn't actually important. What your diagram shows is how things relate in "Sheet" coordinates, which are an abstract continuous space. E.g., for this visual system model, V1 might represent 1 degree of visual space; the LGN would then be 1 degree plus some buffer area, and the retina would be 1 degree plus the LGN's buffer plus its own buffer. All of these calculations are done in a way that is completely independent of the number of neurons involved.

Then, once the Sheet areas are established, the number of neurons in each sheet is computed by multiplying the density of that sheet by the area of that sheet. For gcal.ty, yes, if cortex_density is set to 47, for the default V1 area of 1.0x1.0 you will indeed get a 47x47 V1. If you change cortex_density to 63, you'll get a 63x63 V1, with no change to the Sheet-coordinate area being represented (which is why it's a density parameter, not a size parameter). Similarly, if lgn_density is set to 24, you'll get a 60x60 LGN if the LGN area is 2.5 (i.e., (2.5x24)x(2.5x24)), and the same for the retina but with whatever area the retina has.

In each case, the density can be set independently of the area, and independently for each sheet, resulting in different sizes but a network that works about the same unless densities are set to really low values. See http://ioam.github.io/topographica/User_Manual/coords.html for more information.
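To illustrate that last point, that each sheet's density can be changed independently while the Sheet-coordinate areas stay fixed, here is a plain-Python sketch (`matrix_side` is a hypothetical helper used for illustration, not Topographica's API):

```python
def matrix_side(area, density):
    """Matrix side length: sheet area (in sheet coordinates)
    times that sheet's linear density."""
    return int(round(area * density))

# Default gcal.ty settings discussed above:
print(matrix_side(1.0, 47))   # V1:  cortex_density=47, area 1.0 -> 47
print(matrix_side(2.5, 24))   # LGN: lgn_density=24,    area 2.5 -> 60

# Changing one density leaves the areas (and the other sheets) untouched:
print(matrix_side(1.0, 63))   # cortex_density=63 -> 63x63 V1
print(matrix_side(2.5, 12))   # halving lgn_density -> 30x30 LGN,
                              # still representing a 2.5x2.5 area
```

The point of the sketch: only the matrix resolution changes when a density changes; the region of visual space each sheet represents does not.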

youngjt commented 9 years ago

I think I see what you mean.

What you want to express is that there are 47x47 ((47x47)x(1x1)) neurons in the V1 layer, 60x60 ((24x24)x(2.5x2.5)) neurons in the LGNON layer, and 78x78 ((24x24)x(3.25x3.25)) neurons in the Retina layer.

Thanks.

jbednar commented 9 years ago

Sure, though I would express it as 47x47 ((47x1)x(47x1)) neurons in the V1 layer, 60x60 ((24x2.5)x(24x2.5)) neurons in the LGNON layer, and 78x78 ((24x3.25)x(24x3.25)) neurons in the Retina layer. I.e., the matrix size is computed using a linear density, applied individually to the horizontal and vertical dimensions, not to the overall area. But that's just being picky. :-)
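The linear-density bookkeeping can be checked in a few lines of plain Python (an illustrative sketch, not Topographica code):

```python
# The density is linear: it applies separately to each dimension.
density, area = 24, 2.5               # LGN values from gcal.ty

side = int(density * area)            # 24 * 2.5 = 60 per dimension
shape = (side, side)                  # -> (60, 60) matrix
n_neurons = side * side               # 3600 neurons total

# Writing it as (24x24)x(2.5x2.5) happens to give the same total
# only because (d*a)*(d*a) == (d*d)*(a*a); the per-dimension
# reading (24x2.5)x(24x2.5) is the correct one.
assert (density * area) ** 2 == (density * density) * (area * area)
print(shape, n_neurons)
```

So both factorizations agree numerically, but only the per-dimension form reflects how the matrix size is actually computed.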

youngjt commented 9 years ago

Thanks.