ioam / topographica

A general-purpose neural simulator focusing on topographic maps.
topographica.org
BSD 3-Clause "New" or "Revised" License

Changes required to replicate results in my thesis except color #586

Closed. Tobias-Fischer closed this issue 10 years ago.

Tobias-Fischer commented 10 years ago

This is a subset of the pull request discussed in https://github.com/ioam/topographica/pull/584. Everything related to color has been left out. Therefore, there are three changes left:

1. Extra buffer for patterns in case of motion simulations
2. Switch for input patterns (Gaussian/Images)
3. Gain control for spatial frequency

jlstevens commented 10 years ago

Ok. Much better! :-)

All looks good, though I am still not entirely happy with the motion_buffer parameter. I understand why you want to extend the bounds, but I'm not sure whether it has to be a free parameter or whether it could be computed automatically.

Tobias-Fischer commented 10 years ago

It can probably be computed from the pattern speed. In the notebook I used for my simulations, I set motion_buffer=speed*3.0. I had some reasoning for this when I wrote it, but I no longer know whether that reasoning was correct.

jlstevens commented 10 years ago

Any chance that the motion buffer should simply be speed * len(lags)?

Tobias-Fischer commented 10 years ago

Ah, yes, that might have been my reasoning. To be more general, though, I think it should be speed*(max(lags)+1). I wonder why I used 3.0 rather than 4.0, though, since I used 4 lags...

jlstevens commented 10 years ago

Maybe a fence-post error (offset by one)?

lag0 <=gap0=> lag1 <=gap1=> lag2

I.e. there are 3 lags but only two gaps (i.e. distances)?
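
A quick sanity check of that counting, as a minimal sketch (assuming the lags are simply 0..num_lags-1, which may not hold for every model):

# Minimal sketch: with num_lags lags there are num_lags - 1 gaps between them,
# so the buffer would scale with max(lags) (here 3), not num_lags (here 4).
num_lags = 4
lags = list(range(num_lags))        # [0, 1, 2, 3]
gaps = len(lags) - 1                # 3 gaps between 4 lags
speed = 1.0
motion_buffer = speed * max(lags)   # == 3.0, matching the notebook value
assert gaps == max(lags) == num_lags - 1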

jbednar commented 10 years ago

Overall, it sounds like this is ready to merge as soon as you address the lags/motion-buffer issue (replacing the free parameter with a formula does seem like it would eliminate the arbitrariness) and once the other two issues I raised are done.

Tobias-Fischer commented 10 years ago

Regarding the motion_buffer: if there is only one lag, the position_bounds should not be changed. As soon as there is a second lag, the position_bounds need to be extended by speed. I will therefore implement it as:

position_bound_x += self.speed * max(self['lags'])
position_bound_y += self.speed * max(self['lags'])

So if there is only one lag, max(self['lags']) evaluates to 0 and the bounds are not changed. That is also why I multiply by 3.0 in my notebook, where 4 lags are used.
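
For illustration only, here is a sketch of how that rule behaves outside of Topographica (the standalone function and argument names are mine, not the actual code):

def extended_position_bounds(bound_x, bound_y, speed, lags):
    # Extend each bound by the distance the pattern travels across the lags.
    buffer = speed * max(lags)   # 0 when lags == [0], i.e. a single lag
    return bound_x + buffer, bound_y + buffer

# With 4 lags (0, 1, 2, 3) the buffer is speed * 3, matching the
# motion_buffer = speed * 3.0 used in the notebook.
print(extended_position_bounds(0.5, 0.5, 1.0, [0, 1, 2, 3]))   # (3.5, 3.5)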

jbednar commented 10 years ago

Sounds good; max(self['lags']) should be three when num_lags==4, since the lags are 0,1,2,3, and from the lags code this looks like the right thing to do.

On the other hand, I'm confused by the lags code. Why is the number of lags being used to determine the reset period on the retina, which is the only link between lags and the input pattern? Even if that's a good rule of thumb, shouldn't the number of lags (a concept about anatomy) be formally decoupled from the reset period (a concept about the external world's properties)? For instance, reset_period could be a real parameter defaulting to None, computed as num_lags when reset_period==None, but otherwise left for the modeller to decide. I can't see any reason other than a heuristic for why the reset_period and the number of lags should be related in this way, but maybe I'm missing something.
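
To make the suggestion concrete, here is a sketch of that decoupling in plain Python rather than Topographica's actual parameter classes (class and attribute names are illustrative only):

class MotionModel:
    def __init__(self, num_lags=4, reset_period=None):
        self.num_lags = num_lags
        # reset_period stays an independent, modeller-settable parameter;
        # only when it is left as None does it fall back to the num_lags heuristic.
        self.reset_period = num_lags if reset_period is None else reset_period

m_default = MotionModel(num_lags=4)                  # reset_period falls back to 4
m_custom = MotionModel(num_lags=4, reset_period=8)   # modeller overrides the heuristic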

Tobias-Fischer commented 10 years ago

To me, the coupling makes sense. There is an onset of a new pattern (determined by reset_period) every n-th time step, where n is the number of lags. If reset_period < num_lags, the patterns seen by the different lags are not related to each other, which would defeat the purpose of having lags. If reset_period > num_lags (presumably a multiple of num_lags), the pattern moves out of the neurons' receptive fields. So in theory, yes, one could decouple reset_period and num_lags, but in practice the coupling makes a lot of sense to me.

jbednar commented 10 years ago

Well, one might want an input pattern to fit mostly within the receptive field of a given neuron as it does its sweep, or one might want what any neuron sees to be a portion of a long trajectory that starts before the pattern enters its view and ends after it leaves. These two scenarios will have different patterns of long-range lateral correlation, and either one could be true of the world depending on different assumptions. So I argue that they should be decoupled, since we can't be sure which of these scenarios is actually more realistic; it seems we'll need to test both of them. In any case, it's something that can safely be added later...