Closed pintohutch closed 10 years ago
Hi,
Sorry for the delay; we've all been on holiday!
| Hi, I'd like to print and export a matrix of the activity values in
| V1 after I present a new image to a trained network. However, when
| I use the command "print topo.sim.V1.activity" it just gives me the
| activity values of the last activity values before the test pattern
| was presented. How do I get the activity values of a response to a
| test pattern to the screen?
I may be confused about what you are asking, but Topographica should give you the activity pattern from the last test pattern, not the last one before the test pattern. E.g.:
$ ./topographica -a -i examples/tiny.ty
topo_t000000.00_c1>>> t=0.25 ; pattern_present(inputs=pattern.Gaussian(x=t)) ; print(topo.sim.V1.activity)
[[  5.35945736e-05   3.35589957e-04   8.08961954e-04   1.11826712e-03   9.53610511e-04]
 [  9.44181497e-03   6.52742918e-02   1.74568924e-01   2.03677861e-01   1.86573553e-01]
 [  2.91880526e-02   1.27532456e-01   3.23690565e-01   4.71821986e-01   4.22181797e-01]
 [  1.39493451e-02   6.80791280e-02   1.38813661e-01   2.18594368e-01   1.85366398e-01]
 [  5.98260850e-05   2.18806385e-04   8.45110531e-04   1.19208255e-03   8.66909962e-04]]
topo_t000000.00_c2>>> t=-0.25 ; pattern_present(inputs=pattern.Gaussian(x=t)) ; print(topo.sim.V1.activity)
[[  7.61635683e-04   1.08500702e-03   6.21364402e-04   2.62502786e-04   6.50891133e-05]
 [  1.59891915e-01   2.14431067e-01   1.47899326e-01   6.26704207e-02   1.37908217e-02]
 [  3.94245791e-01   4.85562415e-01   2.84251445e-01   1.30145384e-01   2.94790552e-02]
 [  2.05932032e-01   2.31267962e-01   1.23264630e-01   6.55479856e-02   1.25266141e-02]
 [  8.31040462e-04   8.69487329e-04   9.09730053e-04   2.94651006e-04   6.11171160e-05]]
If you replace "-a" with "-g" and "print" with "matrixplot" you can plot the results instead, which will more clearly show that it's printing the results of the last pattern_present presentation.
| Additionally, I'm using a separate package to export these matrices
| to files on the disk, however is there anything built in that I
| could use also? Thanks.
Jean-Luc can probably suggest a good approach for exporting activity matrices.
Jim
Hi,
Activity matrices are NumPy arrays, and there are many ways to save output to disk. Using a third-party package is the usual approach, as Topographica does not implement anything specific for exporting this kind of data.
Numpy export will always be the simplest option. After loading tiny.ty you could run the following commands, for example:
import numpy

# Saving 'V1act.npy' file
numpy.save('V1act.npy', topo.sim.V1.activity)

# Loading 'V1act.npy' file
v1activity = numpy.load('V1act.npy')
When I want to save multiple activities into a single file (e.g. across a run), I typically use numpy.dstack. This lets me stack the two-dimensional arrays into a three-dimensional array. For instance:
accumulator = []
for i in range(10):
    topo.sim.run(1)
    accumulator.append(topo.sim.V1.activity.copy())
dstacked_array = numpy.dstack(accumulator)
You can then export this array however you wish, e.g. using numpy.save, numpy.savez (more flexible), numpy.savetxt, or anything else that suits your needs! There are also more elaborate packages such as PyTables (http://www.pytables.org/) which help you manage the storage of large numbers of NumPy arrays.
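For instance, numpy.savez stores several labelled arrays in a single .npz file and lets you load them back by name; a minimal sketch (the array names and shapes here are just illustrative):

```python
import os
import tempfile

import numpy as np

# Two labelled activity arrays (illustrative shapes)
retina_act = np.zeros((24, 24))
v1_act = np.ones((10, 10))

# Save both into a single .npz archive
path = os.path.join(tempfile.gettempdir(), 'activities.npz')
np.savez(path, retina=retina_act, v1=v1_act)

# Load them back by name
data = np.load(path)
restored_v1 = data['v1']
```

The keyword names used at save time become the keys available at load time, so a single file can hold a whole labelled set of activity matrices.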
Hope that helps.
Jean-Luc
Well, I'm trying to do this with a .typ file I've trained (10,000-iteration GCAL). When I try the very first command to open it in Topographica, I get this error message:
Traceback (most recent call last):
File "/usr/local/bin/topographica", line 18, in
Essentially, does this only work for .ty and not .typ files?
Then I tried doing the test pattern printing graphically with tiny.ty, and I printed the V1 activity with no test pattern presented (and got all 0's). Then I presented a test pattern and hit present, saw the change in activity in the activity window, and ran the same print activity command and got all 0's again. So I'm missing something here.
My goal is to train a model with given parameters over some number of iterations (10,000 here), and then expose (gaussian white noise) test patterns to it over many time steps and get the activities of the V1 neurons saved to disk.
I'm not sure what commands you're running here. From the error message, it looks like you've supplied a .typ file on the command line. Topographica tries to execute files you supply on the command line as Python code; .ty and .py files are Python code, so those work fine. A .typ file is a saved snapshot (a Python pickle), not code (try loading it into a text editor to see), and has to be loaded with the "load_snapshot" command. But once the snapshot has been loaded, the network should work the same as having run the .ty file.
For test patterns, it sounds like you've been talking about the GUI Test Pattern window, and I've been talking about the pattern_present command. The Test Pattern window calls pattern_present, but before it does so it saves the current state of activity, and afterwards it restores the state (other than the activity window plots), so that it's as if the testing has been done in a parallel universe. We could add an option to the Test Pattern window to disable that behavior, but for your use case it sounds to me like you should be using the pattern_present command directly anyway -- don't you eventually want a command that you can run that presents the patterns and saves the results to disk, all in one operation, rather than making the user do something in the GUI?
Ok great. I am currently trying to test a trained network's response to white noise over many iterations automatically. So this is tremendously helpful. Thanks again
from topo import pattern
import topo.pattern.random
pattern.random.GaussianRandom()
Got it. And in terms of timing: by default, I see that pattern_present presents an image for duration=1.0. In terms of a real-time approximation, how long is this image being exposed to the retina? In other words, what's the overall ratio of timesteps to real time in Topographica? Dr. Bednar's paper "Building a mechanistic model of the development and function of the primary visual cortex" states that a Δt of 0.05 corresponds to roughly 10-20 ms; however, is there a more accurate ratio that I can rely on?
How Topographica's simulation time relates to real time is entirely up to the modeller. In that paper and in most of my own work, I have only very rough calibration to any time scale, based solely on the assumption that 1 GCAL or LISSOM iteration = 1.0 Topographica time units = 1 average visual fixation duration between saccades = approximately 200 ms. The idea is that in these models the retinal input is kept constant while the thalamic and cortical activity settles, and then the retinal input changes, which is all intended to correspond to a single visual fixation, and then the image changes after a saccade; saccades occur about every 200ms or so in people. But that's an extremely rough way to match time! It's up to the modeller to decide for any particular model what any of the discrete temporal durations in Topographica represent in the real world, so you can make a different assumption if you like.
Jean-Luc had another approach in his 2011 MSc thesis (http://www.inf.ed.ac.uk/publications/thesis/online/IT111096.pdf); there he calibrated very precisely against PSTH data from LGN and V1 neurons, in order to have a very close match to the temporal response profile observed in recordings, with millisecond accuracy. His simulations thus use a very different, much more neurophysiologically grounded way to calibrate time, but they take hundreds of times longer to run.
Gotcha. I want to test my trained model with white noise (variant in both space and time); white noise must not be bandlimited, and thus should change on every time step (say dt=0.05 in this case). I am trying to do this with pattern_present; however, the presentation time of 0.05 isn't long enough to allow for propagation up to V1 activity. What's the best way to present my model with a new Gaussian random white-noise image every dt=0.05? Using pattern_present doesn't quite work in this case, as it presents the pattern for 0.05 and then goes back to the input generator's default when topo.sim.run() is called. I also tried modifying the input generator (and disabling plasticity) to test my trained model on white noise; however, I am unable to modify the period parameter from 1.0 to 0.05 (it's greyed out). Any advice? Thanks
Ah, that makes sense, though your input will always be bandlimited, both spatially (given the finite and discrete density of the retina) and temporally (given the nonzero time step duration of the simulation).
At the moment there isn't a good way to use pattern_present for this use case. I think the same issue applies to a new model of motion selectivity that Jean-Luc is working on, and his cleaner approach to that problem should make it feasible for us to add support for time-varying patterns in pattern_present. But that won't be for a while yet.
Conversely, temporarily changing the period of the GeneratorSheet is not allowed because of how it generates a stream of events triggering pattern generation. If you want to look at that class and see if there is a clean way to support changing the period, then I'd be happy to accept changes for how GeneratorSheet works. One way would be to make the period be a Python property, so that whenever it changes GeneratorSheet would delete the PeriodicEventSequence that's been enqueued with the old period and enqueue a new one with the new period. Making a Parameter a property might be tricky, though; I can't remember if we've ever done that or if it's even possible. But it may be possible to have it stop being a Parameter and still work as it always has.
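The property idea could look something like this toy sketch (plain Python, not the actual GeneratorSheet code; the class and the _reschedule method are invented purely for illustration):

```python
class ToyGeneratorSheet:
    """Toy sketch: re-enqueue the generation event whenever `period` changes."""

    def __init__(self, period=1.0):
        self._period = period
        # Stands in for the PeriodicEventSequence enqueued in the simulator
        self.scheduled_period = period

    @property
    def period(self):
        return self._period

    @period.setter
    def period(self, value):
        self._period = value
        self._reschedule()

    def _reschedule(self):
        # In Topographica this would delete the old PeriodicEventSequence
        # and enqueue a new one using the new period
        self.scheduled_period = self._period


sheet = ToyGeneratorSheet()
sheet.period = 0.05  # the setter fires, so the schedule follows the new period
```

The point is just that assignment to the attribute triggers the rescheduling side effect, which is the behaviour a real GeneratorSheet.period property would need.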
To avoid these complexities, one thing we've done in similar circumstances is to add a separate "fake" GeneratorSheet that we connect for the purposes of doing the measurement, present patterns to it, and then delete or disable it when done. That doesn't require any changes to GeneratorSheet, but is of course awkward to do, so it would be nice to make one of the other approaches work.
Jim
BTW, are you really sure it matters whether the input changes every 0.05 timesteps? If you simply wait until about e.g. 0.15 when only a single white-noise pattern has reached V1, then can't you simply take the response as the response to that particular white-noise pattern? It wouldn't then matter that the subcortical processing was still using the same pattern, since V1 won't ever see any results of that processing. Just be sure you really know what you want here...
Well, there's some confusion on my part about this. I'm taking the gcal_10000.typ model I trained from the examples via the GCAL tutorial, and I want to test it with spatiotemporal white noise, recording V1's response along with the stimulus at every single time step. I thought the same thing about using pattern_present for 0.15 at a time; however, when I do that, I notice that the V1 activity never changes (tested using print topo.sim.V1.activity after every pattern_present), and the LGN doesn't react every time either (in fact, it appears to react exactly every other pattern_present command), while the retina changes every time (as expected). So this is one confusion for me. I suppose I'm confused about whether the activity in the various layers resets between commands?
I also notice that if I step (by dt) from time 10,000 to 10,000.05 and try pattern_presenting noise (after I've tried multiple pattern_presents of 0.15 at t = 10,000), the residual activity from my pattern_present of white noise is still evident on the LGN. Does this mean the layers do not reset between pattern_present commands? Additionally, I notice that I cannot present a new pattern at 10,000.05 via pattern_present to the retina; does this mean the pattern generator just takes over in between visual saccades (when t is not an integer)?
Regarding the overall real-time value of Topographica's units: I understand that the real-time equivalent is determined by the modeller, as the parameters for map development are adjusted in a way that reflects how the visual system reacts to stimuli on that timescale. With that in mind, is there a workaround where I can model the system such that I use a standard 1.0-period pattern generator to test it, just with 1.0 representing a smaller timescale (white noise changing around every 2 ms instead of every 200 ms)? Again, I appreciate the great feedback here. Thanks again.
I suppose I don't understand the functionality of the GCAL model that well. If I single step through gcal.ty from the examples from t=0, I notice that a new image is presented to the retina at t = 0.05 (dt), that new image is given to the LGN layer at t = 0.10, the activity then dies out a little bit in the LGN layer at t=0.15, then finally the V1 layer is updated at t=0.2. This activity in V1 then settles until t = 1.2 (0.15 s after a new input is presented to the retina), and all activity is forced to 0, then the process repeats. Why is the V1 activity being forced to reset here? I would like to calculate the next V1 activity values always from the previous, as opposed to resetting the system every 1.0 training iteration. Is this not a practical thing to do?
I'm trying to keep track of the firing rate in V1 over time given a white noise stimulus. Then, through reverse correlation, I'm hoping to compute a spatiotemporal receptive field of the overall system using reverse correlation. I cannot get a temporal RF without a continuous-behaving V1. Or am I missing something in my approach?
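For reference, once the stimulus frames and V1 responses have been recorded (e.g. accumulated as arrays over a run), the reverse-correlation step itself is straightforward in plain NumPy; a sketch, with illustrative array names and a simple spike-triggered-average formulation:

```python
import numpy as np

def spatiotemporal_sta(stimuli, responses, n_lags=5):
    """Estimate a spatiotemporal receptive field by reverse correlation.

    stimuli   : (T, H, W) array of white-noise frames
    responses : (T,) array of one neuron's firing rate over time
    Returns an (n_lags, H, W) spike-triggered average.
    """
    T = len(responses)
    sta = np.zeros((n_lags,) + stimuli.shape[1:])
    norm = responses[n_lags:].sum()
    for lag in range(n_lags):
        # Weight each frame by the response it evoked `lag` steps later
        sta[lag] = np.tensordot(responses[n_lags:],
                                stimuli[n_lags - lag:T - lag], axes=1)
    return sta / norm if norm else sta
```

Each slice of the result is the response-weighted average stimulus at one time lag, which is the spatiotemporal RF estimate for that lag.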
| using pattern_present for 0.15 at a time, however when I do that, I notice that the V1 activity never changes (tested using print topo.sim.V1.activity after every time I use pattern_present), and LGN doesn't react every time either (in fact, it appears it reacts exactly every other pattern_present command), while the retina changes every time (as expected). I suppose I'm confused if the activity in the various layers resets or not between commands?
In the GUI, the activity is forced to reset between Test Pattern presentations, but pattern_present in scripts or the commandline does not do anything like that, to allow users to analyze the results of presentations.
To understand what's going on when you present for 0.15, we'd have to see the transcript of the commands you used.
| I also notice that if I step (by dt) from time 10,000 to 10,000.05 and try pattern_presenting noise (after I've tried multiple pattern_presents of 0.15 at t = 10,000), the residual activity from my pattern_present of white noise is still evident on the LGN. Does this mean the layers do not reset between pattern_present commands?
In GCAL, the V1 activity is reset at every integer value of time, i.e. 1.0, 2.0, etc. This behavior is controlled by the LISSOM sheet class (now called SettlingCFSheet in the Git repository, to avoid confusion with the LISSOM algorithm). This resetting is independent of pattern_present; it's just something that the network always does, whether during training or during test patterns, and is a good reason never to try pattern_present starting from a non-integer time with a network like this.
| Additionally, I notice that I cannot present a new pattern at 10,000.05 via pattern_present to the retina; does this mean the pattern generator just takes over in between visual saccades (when t is not an integer)?
If I understand you correctly, then yes.
| Regarding the overall real-time value of Topographica's units: I understand that the real-time equivalent is determined by the modeller, as the parameters for map development are adjusted in a way that reflects how the visual system reacts to stimuli on that timescale. With that in mind, is there a workaround where I can model the system such that I use a standard 1.0-period pattern generator to test it, just with 1.0 representing a smaller timescale (white noise changing around every 2 ms instead of every 200 ms)?
As the modeller, you can do whatever you like. :-) You are free to say that 1.0 time units is 2 ms instead of 200 ms. But see below.
| I suppose I don't understand the functionality of the GCAL model that well. If I single step through gcal.ty from the examples from t=0, I notice that a new image is presented to the retina at t = 0.05 (dt), that new image is given to the LGN layer at t = 0.10, the activity then dies out a little bit in the LGN layer at t=0.15, then finally the V1 layer is updated at t=0.2. This activity in V1 then settles until t = 1.2 (0.15 after a new input is presented to the retina),
Strictly speaking, it settles for up to 16 steps, which completes before t=1.0, regardless of when the next input arrives. As implemented in SettlingCFSheet, the sheet simply counts how many times it has been activated since the last input, and stops when it gets to 16, waiting for a new input. No further settling is done.
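The counting behaviour is easy to picture with a toy sketch (illustrative only, not the actual SettlingCFSheet code):

```python
class SettlingCounter:
    """Toy model of SettlingCFSheet's settling limit: count activations
    since the last afferent input and ignore any beyond the limit."""

    def __init__(self, max_settles=16):
        self.max_settles = max_settles
        self.count = 0

    def new_input(self):
        # A new afferent input restarts the settling process
        self.count = 0

    def activate(self):
        if self.count >= self.max_settles:
            return False  # settling finished; wait for a new input
        self.count += 1
        return True


counter = SettlingCounter()
settles = sum(counter.activate() for _ in range(20))  # only the first 16 succeed
```

So even though lateral events keep arriving until the next input, only the first 16 of them actually update the sheet.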
| A new input then arrives at t=1.2 and all activity is forced to 0, then the process repeats. Why is the V1 activity being forced to reset here?
The reset is used because GCAL is modelling processes that take place over several weeks, and is effectively processing a snapshot of activity at one particular time, then skipping forward in time to another (independent) visual input later, processing that one, and so on. Thus GCAL does not have a continuous notion of time. That way we can just take a database of images and select an input at random; otherwise we would need a complete spatiotemporal history of the input pattern stream, e.g. a continuous video of the experience of an animal. Needless to say, no one has that type of video data, though we are working on collecting it.
| I would like to calculate the next V1 activity values always from the previous, as opposed to resetting the system every 1.0 training iteration. Is this not a practical thing to do?
| I'm trying to keep track of the firing rate in V1 over time given a white noise stimulus. Then, through reverse correlation, I'm hoping to compute a spatiotemporal receptive field of the overall system. I cannot get a temporal RF without a continuous-behaving V1. Or am I missing something in my approach?
Aha! Now I see what you're up to! Yes, you're using an approach that is not suitable for GCAL or LISSOM, or at least would be very difficult to interpret. Because GCAL and LISSOM have no continuous time, it's not meaningful to try to get a spatiotemporal RF out of them. Built into both GCAL and LISSOM is the assumption that the input remains constant for 1.0 time units, during which the cortex settles, so it's not meaningful to present inputs any more frequently than that.
What you probably want is TCAL, which is a modification of GCAL to use continuous time. TCAL uses GCAL to self-organize the network, to model developmental processes over several weeks, but then changes the timestep to a very small value to allow instantaneous inputs, and also removes the resetting to allow continuous inputs. The result can be calibrated closely against impulse responses in LGN and V1, and it should be possible to construct spatiotemporal RFs for TCAL just as you describe. Fully unifying TCAL and GCAL, i.e. running at millisecond resolution over the full process of weeks of development, is something Jean-Luc Stevens is working on, but that is of course computationally daunting and requires the types of inputs described above. Meanwhile, the existing TCAL does allow proper continuous-time inputs and analysis for the adult (fully self-organized) network.
Jim
Hi,
(a) The simplest thing to do is to leave the GeneratorSheet (i.e. the retinal sheet) with the default period value of 1.0. Then you can have a Topographica simulation time of 1.0 equating to one millisecond. You then need to ensure the pattern presented to the retina is updated at the right rate (e.g. held constant over multiple periods), which can be achieved by using set_dynamic_time_fn on the appropriate parameters. This lets you specify how often the dynamic parameters controlling the patterns drawn on the retina change, in relation to topo.sim.time().
(b) To get smooth PSTH profiles, I use a hysteresis output function (transferfn.Hysteresis) with time constants of 0.03 in the LGN sheet and 0.02 in the V1 sheet. These values are appropriate for a 'saccade' lasting 250 milliseconds, i.e. every 250 Topographica time units using the convention defined above.
(c) For this type of simulation, you'll want to use JointNormalizingCFSheet_Continuous sheets. These sheets don't implicitly try to settle activity and don't reset activity at regular intervals.
(d) It is important to adjust the delays of all the connections in relation to the chosen GeneratorSheet period. For instance, you probably want lateral connections in V1 to have a delay matching the retinal period (so lateral settling doesn't happen more often than afferent input is received). To make the simulation as efficient as possible, delays should all be multiples of whatever period is chosen for the GeneratorSheet. This helps avoid unnecessary events and keeps the behaviour of the model nicely clocked.
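The hysteresis output function in (b) is essentially first-order temporal smoothing of each unit's activity; a minimal NumPy sketch of the idea (illustrative only, not Topographica's actual transferfn.Hysteresis code):

```python
import numpy as np

def hysteresis_step(state, target, time_constant=0.03):
    """Move the smoothed activity a small fraction of the way toward the
    instantaneous activity each step (a first-order low-pass filter)."""
    return state + time_constant * (target - state)

# A step input: the smoothed activity ramps up gradually rather than jumping
smoothed = np.zeros(3)
for _ in range(100):
    smoothed = hysteresis_step(smoothed, np.ones(3), time_constant=0.03)
```

Smaller time constants give slower, smoother temporal responses, which is why the LGN and V1 values above produce smooth PSTH profiles.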
These are the essential steps needed to get a clocked simulation model like TCAL; you can find further details in my Masters thesis (see Jim's link above). You can probably ignore any details regarding the VSD signal model or lateral propagation (so far, we haven't found any effect associated with them). You may wish to have a look at the current TCAL model definition (contrib/JLStevens-TCAL/TCAL.ty in the svn-history repository), but you may find that file more confusing than helpful!
Hope that helps.
Jean-Luc
Are you saying the above applies to both training the model and testing it? I'm a little confused. I've been combing through the TCAL source code to figure out what's going on. I see how you'd want the generator sheet to hold the image constant over several iterations with GCAL training, but when it comes to testing with spatiotemporal white noise, the generator sheet should be updating every iteration. How can I use the TCAL model to hold the input for multiple periods during training, then change the input every period during testing (while still being able to record V1 activity at every stage of presentation)?
Hi,
Sorry for the confusion! I mentioned holding the stimuli constant for training purposes but TCAL isn't ready for training just yet. Instead you'll need to load the weights from GCAL.
I've now got TCAL running using GCAL weights and a white-noise stimulus. You'll need to pull the latest version of Topographica off GitHub, as there was a bug that needed fixing. Full details are in the README (in contrib/JLStevens-TCAL/ of the svn-history repository), but the following instructions should contain everything you need to get it running:
Run gcal.ty in the examples directory and train the weights for 10000 iterations:
../topographica -g gcal.ty
Dump the weights out of GCAL for use by TCAL as follows. The weights will be saved in the folder given by topo.param.normalize_path():
from distanceDelays import pickleGCALWeight
pickleGCALWeight()
Exit the GCAL simulation and now run TCAL (with the GUI):
../topographica -g TCAL.ty
At the Topographica prompt:
white_noise = pattern.random.UniformRandom(scale=100)
topo.command.pattern_present({'Retina': white_noise}, overwrite_previous=True, duration=0.0)
topo.sim.state_push()
topo.sim.state_pop()
This version of the TCAL.ty file can be simplified, and I should have a cleaner version (which I'll make available for you to look at) in a few days.
Hope that works!
Jean-Luc
The last comment is over a year old. I'll assume the problem has been resolved and shall now close this issue. Please feel free to open a new issue if you have any new questions!