nengo / nengo

A Python library for creating and simulating large-scale brain models
https://www.nengo.ai/nengo

Make it easy to introduce dimensions to increase sparsity #668

Open celiasmith opened 9 years ago

celiasmith commented 9 years ago

It'd be nice to introduce a 'sparsity' dimension in the neuron tuning curves if we want to control sparsity. Right now, we always have as many dimensions as things we're trying to compute over. However, we could introduce an extra dimension that interacts with one or more of these to make things sparse.

The analogy here is to getting 1D 'bump' tuning curves out of a 2D underlying space. If you started with that 1D space and used normal tuning curves, they'd be broad; if you introduced a second dimension (and the relevant mappings, as in the circle case), you could make the resulting tuning curves sparser, and better at estimating sharply tuned, fairly local functions.
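
For concreteness, here is a minimal sketch of that 2D-embedding trick in plain Nengo: the 1D variable x is mapped onto the unit circle before it reaches a 2D ensemble, so each neuron's response, viewed as a function of x, becomes a localized bump. The network structure and specific choices (100 neurons, the pi*x scaling) are only illustrative.

import numpy as np
import nengo
from nengo.utils.ensemble import tuning_curves

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(t))   # some 1D signal x(t) in [-1, 1]
    ens = nengo.Ensemble(100, dimensions=2)  # represents (cos, sin) of x
    # The "extra dimension": embed x on the unit circle before encoding it
    nengo.Connection(stim, ens,
                     function=lambda x: [np.cos(np.pi * x[0]),
                                         np.sin(np.pi * x[0])])

with nengo.Simulator(model) as sim:
    # Evaluate each neuron's response as a function of the 1D variable x
    x = np.linspace(-1, 1, 200)
    circle_points = np.column_stack([np.cos(np.pi * x), np.sin(np.pi * x)])
    _, activities = tuning_curves(ens, sim, inputs=circle_points)
    # Each column of `activities` is now a bump-shaped tuning curve over x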

Travis has used this technique, so he'd be a good one to talk to.

drasmuss commented 9 years ago

I think the best way to do this might be to make function representation easier (something like PR #552), because really it's not so much the extra dimension that's doing the sparsity work; it's the function that makes that extra dimension interact with the other dimensions.

For example, in the 1D bump case, if you could do something like this:

def circ(a, b):
    # Each sample of (a, b) picks one function of x from this family
    return lambda x: [a * np.sin(x), b * np.cos(x)]

# FunctionEnsemble / func_args is the hypothetical syntax from the #552 discussion
ens = FunctionEnsemble(100, circ, func_args=[Uniform(-1, 1), Uniform(-1, 1)])
stim = nengo.Node(...)
nengo.Connection(stim, ens, function=ens.encode)

that's really what we want (I'm pulling that syntax from the discussion in #552). And then we could provide some pre-built function spaces (like the circular coordinate transform) to make that even easier.
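
A pre-built "circular coordinate" function space could be little more than a small factory that packages the transform with sensible default parameter distributions. The sketch below is purely hypothetical: circular_function_space, FunctionEnsemble, and func_args are illustrative names following the #552-style syntax above, not existing Nengo API (only nengo.dists.Uniform is real).

import numpy as np
from nengo.dists import Uniform

def circular_function_space(freq=np.pi):
    """Hypothetical helper: return the circle transform plus default
    parameter distributions, for a FunctionEnsemble-style API."""
    def circ(a, b):
        return lambda x: [a * np.sin(freq * x), b * np.cos(freq * x)]
    return circ, [Uniform(-1, 1), Uniform(-1, 1)]

# Usage with the hypothetical syntax above:
# circ, dists = circular_function_space()
# ens = FunctionEnsemble(100, circ, func_args=dists)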