That's a good question! The original implementation is still available:
http://nn.cs.utexas.edu/?lissom
But getting it to compile would require a virtual machine running a very old Linux distribution: at the time that code was written, compiler support for the then-new C++ standard was very inconsistent, requiring lots of hacks and workarounds. Compilers eventually caught up, but a codebase full of workarounds that were acceptable back then won't compile with any modern compiler. In any case that code is extremely limited and hard to use compared to Topographica.
I don't think it would be all that difficult to implement GLISSOM in Topographica, but no one ever got around to doing so. The reason is that GLISSOM relies heavily on an optimization that we no longer use, because we began simulating larger networks composed of more interconnected sheets, where that optimization is no longer safe. Specifically, the original C++ code assumes that activity bubbles will always shrink after a brief initial period of responding to a new input pattern, and so it doesn't simulate any neuron that wasn't activated in the first 2 or 3 settling steps. With sparse input patterns (also something we rarely use anymore), only a very few neurons are active at that point, so the remaining 15 or 20 settling steps can be simulated extremely cheaply. GLISSOM let us use a small network during the initial self-organization, while this sparse structure emerges, and a larger network only later, giving huge speedups. Nowadays we have to simulate the full network the whole time, since we don't know whether our highly recurrent networks will actually end up forming sparse bubbles (it isn't guaranteed to happen), and so there's no longer a massive difference between the initial cpu time per iteration and the later ones to exploit, as was shown in the GLISSOM paper.
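For concreteness, here is a rough NumPy sketch of that settling-step optimization. This is not the actual LISSOM C++ code; the function and parameter names (`settle`, `input_act`, `lateral_weights`, `threshold`) are just illustrative assumptions:

```python
import numpy as np

def settle(input_act, lateral_weights, n_steps=16, full_steps=3, threshold=0.1):
    """Settle one sheet's (flattened) activity vector.

    For the first `full_steps` settling steps every unit is updated; after
    that, only the units that have become active (the "bubble") are updated,
    on the assumption that the bubble will only shrink from then on.
    """
    act = np.maximum(0.0, input_act)
    bubble = None
    for step in range(n_steps):
        if step == full_steps:
            bubble = act > threshold   # freeze the set of units worth simulating
        if bubble is None:
            # Full update: every unit sees afferent plus lateral input.
            act = np.maximum(0.0, input_act + lateral_weights @ act)
        else:
            # Restricted update: only rows inside the bubble are computed,
            # so each later step costs roughly (active units / all units)
            # of a full step.
            new = np.zeros_like(act)
            new[bubble] = np.maximum(0.0, input_act[bubble] + lateral_weights[bubble] @ act)
            act = new
    return act
```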
So GLISSOM could be implemented in Topographica, but it wouldn't be useful for our own work, where we are focusing on speedups mainly for their ability to help simulate larger or more complex networks, and GLISSOM won't help as much in that case. Feel free to implement it and submit your changes as a pull request!
If what you're asking about is specifically the scaling portion of the GLISSOM algorithm, I'd suggest using https://github.com/EconForge/interpolation.py. I think what I did in GLISSOM was just bilinear interpolation, though I was careful to do it in a way that really respected the underlying array coordinates, unlike many of the implementations in image-processing programs, which very often end up accidentally translating or dilating the image by 0.5 or 1 pixel (which no one seems to notice for large images, but which is very obvious for small sets of neural weights!).
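If it helps, here is a minimal sketch (plain NumPy, not the original GLISSOM code or the interpolation.py API) of bilinear scaling of a weight matrix that respects the array coordinates in the sense described above: the corner samples of the old and new grids are mapped onto each other, so the result is not shifted or dilated by a fraction of a pixel. The name `scale_weights` is purely illustrative:

```python
import numpy as np

def scale_weights(weights, new_shape):
    """Bilinearly resample a 2D weight array onto `new_shape`,
    keeping the corners of the old and new grids aligned."""
    old_rows, old_cols = weights.shape
    new_rows, new_cols = new_shape

    # Sample positions expressed in the old array's own coordinates;
    # linspace over [0, old-1] pins the corners, avoiding the 0.5- or
    # 1-pixel offsets that some image-resizing routines introduce.
    row_pos = np.linspace(0.0, old_rows - 1, new_rows)
    col_pos = np.linspace(0.0, old_cols - 1, new_cols)

    r0 = np.floor(row_pos).astype(int)
    c0 = np.floor(col_pos).astype(int)
    r1 = np.minimum(r0 + 1, old_rows - 1)
    c1 = np.minimum(c0 + 1, old_cols - 1)
    dr = (row_pos - r0)[:, None]
    dc = (col_pos - c0)[None, :]

    # Standard bilinear blend of the four surrounding samples.
    top    = weights[np.ix_(r0, c0)] * (1 - dc) + weights[np.ix_(r0, c1)] * dc
    bottom = weights[np.ix_(r1, c0)] * (1 - dc) + weights[np.ix_(r1, c1)] * dc
    return top * (1 - dr) + bottom * dr

# e.g. grow a 4x4 connection field to 9x9 when enlarging the sheet
small = np.random.rand(4, 4)
large = scale_weights(small, (9, 9))
```

If this were used on connection fields, the rescaled weights would presumably also need renormalizing afterwards so that each connection field keeps its total strength.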
Thank you for such a detailed answer.
I think I will see what sort of training time I get with the PyCUDA CFSheets first. I might look into implementing the interpolation-based scaling approach further down the line if necessary, so thank you very much for the insight and links.
Many thanks
Hi,
Is there an implementation or example of the GLISSOM scaling algorithm available?
Many thanks, James