ctn-archive / nengo_theano

ABANDONED; see https://github.com/nengo/nengo instead
MIT License

Transform specifying encoded connections #15

Closed · studywolf closed this 11 years ago

studywolf commented 11 years ago

Being able to specify decoded-to-encoded connections and encoded-to-encoded connections using the transform matrix parameter of network.connect, and having it work with network arrays.

tbekolay commented 11 years ago

Sorry, could you explain the difference between the connection types? I see it in the code base but I haven't looked at it in enough detail to really understand it. Decoded is, like in Nengo, vector space stuff, I imagine. What are encoded connections? And why are learning connections different from decoded / encoded?

studywolf commented 11 years ago

Yeah, the pre = decoded -> post = encoded is the kind of connection that inhibitory connections used in Nengo, where basically the encoders of the post population are overwritten with the transform you specify. Encoded -> encoded would then be entirely specifying the weight matrix between the two populations.
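To make the distinction concrete, here's a rough numpy sketch of the two cases (shapes and names are illustrative, not the actual nef-py API):

```python
import numpy as np

D, N, M = 2, 100, 80                 # vector dims, post neurons, pre neurons
x = np.ones(D)                       # value decoded from the pre population
pre_activities = np.ones(M)          # raw activities of pre's neurons

# decoded -> encoded: the transform takes the place of post's encoders,
# so post neuron i receives np.dot(transform_de[i], x) as its input.
transform_de = np.random.randn(N, D)
post_input = transform_de @ x        # shape (N,)

# encoded -> encoded: the transform is the full neuron-to-neuron
# weight matrix between the two populations.
transform_ee = np.random.randn(N, M)
post_input = transform_ee @ pre_activities   # shape (N,)
```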

tbekolay commented 11 years ago

Oh, I see, so you keep the pre and post stuff separate. Pre and post meaning, essentially, origin and termination? Have we talked about using the terms origin and termination? Should we change it, now that we basically have free rein? It feels like there's quite a bit of effort taken with keeping these separate, whereas PyNN and the like seem to track "connections" rather than the two ends of the connection separately. I don't see why we can't do our NEF magic in a single connection object, but yeah, that's maybe a conversation entirely separate from this issue!

studywolf commented 11 years ago

Yup! And yeah, I was wondering about that myself; I remember origin and termination being unintuitive terms when I was first learning it. But again, you're right, probably best to discuss as another issue.

tbekolay commented 11 years ago

It was hard back in the day. Probably a good thing to talk about at the April sprint.

celiasmith commented 11 years ago

agreed... too bad 'pre' and 'post' aren't words (that you can, e.g., pluralize)...

celiasmith commented 11 years ago

I think the decoded-encoded and encoded-encoded terminology is confusing as well... would 'state space' and 'neuron space' work better? i.e., state-neuron and neuron-neuron connections might make more sense.

tbekolay commented 11 years ago

Yeah, and they're pretty easy to confuse... I always thought 'input' and 'output' would work, but sometimes those can be ambiguous; 'afferent' and 'efferent' would be alright, if people didn't always confuse those too!

tbekolay commented 11 years ago

Whoops! Accidental closing.

drasmuss commented 11 years ago

I think we might want to call that something other than "encoded". Generally by "encoding" we mean taking an input value and passing it through the encoding vectors. In Nengo that happened in a DecodedTermination. What you are describing as "encoding" is what was associated with regular Terminations in Nengo, which we probably want a different term for.

Now I see that everyone just wrote the same thing, but I'm putting this in anyway!
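As an aside, a minimal sketch of 'encoding' in that usual sense, i.e. projecting an input value through the encoding vectors to get neural input (the gain/bias names and the rate model are illustrative assumptions, not the nef-py API):

```python
import numpy as np

def encode(x, encoders, gains, biases):
    """Project the vector x through each neuron's encoder, scale by
    gain, and add bias to get input currents; a rectified-linear
    rate model stands in for a spiking neuron here."""
    currents = gains * (encoders @ x) + biases
    return np.maximum(currents, 0.0)
```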

studywolf commented 11 years ago

yeah, maybe something like representational space and neural activity (or neuron, or spiking) space. I would be hesitant to use the word state to describe the decoded signal, just because it's so often used in describing the system state, the state of the neurons, etc.

tbekolay commented 11 years ago

Vector space and neuron space? Those seem pretty good, since we want people to understand what vectors are anyhow, and this will make it more clear when you're using which space.

celiasmith commented 11 years ago

true about 'state', but i'm as worried about 'representation'... i think you can talk about both state representation and neural representation... true enough about vectors and neurons, though. vector-neuron and neuron-neuron connections sounds good.

bptripp commented 11 years ago

I think that naming was my call with Nengo, so I'll explain it. I remember struggling a bit with the terms but I was surprised to hear that they had been confusing. Origin and Termination are standard terms in anatomy, and they are the correct terms to describe beginnings and ends of neuron fibre tracts. If I remember correctly I meant to make the terminology clear to experimental neuroscientists. I think at the time I thought there would be more experimentalist users.

Bryan



bptripp commented 11 years ago

I like "neuron space", but I don't know about recycling "vector space". How about "thought space"? Too much?



celiasmith commented 11 years ago

definitely too much...

bptripp commented 11 years ago

How about "brain space" and "mind space"?



celiasmith commented 11 years ago

Actually, that's a pretty good argument for origin and termination (which i haven't heard before)... and it's even true. "Both the cerebrospinal and sympathetic nerves have nuclei of origin (the somatic efferent and sympathetic efferent) as well as nuclei of termination (somatic afferent and sympathetic afferent) in the central nervous system".

tbekolay commented 11 years ago

Conscious connections and unconscious connections! :+1:

celiasmith commented 11 years ago

i'm still liking 'vector'... since we're not recycling it... it means the same thing here as in algebra.

studywolf commented 11 years ago

vector space makes a lot of sense to me; vector space and neuron space seem fairly intuitive.

anyways, back to the main issue of this thread:

for pre=vector, post=neuron connections, my thought is to allow both

and for pre=neuron, post=neuron, to just require explicitly

drasmuss commented 11 years ago

I like the idea of allowing both cases, but in the second case it might be better to have transform.shape = (post.array_size * post.neurons_num, pre.dimensions). I think that's probably a more standard way of describing a transformation matrix, rather than splitting it up into a third dimension. And the same for pre=neuron, post=neuron: that would be transform.shape = (post.array_size * post.neurons_num, pre.array_size * pre.neurons_num). Of course it doesn't really make much difference, just a matter of conventions, so I could easily be persuaded otherwise if there's a good reason to split it up.
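For illustration, here's how the two layouts compare, using hypothetical values for the attribute names from this thread:

```python
import numpy as np

pre_dimensions = 3
pre_array_size, pre_neurons_num = 4, 50
post_array_size, post_neurons_num = 2, 100

# Split convention, following the ensemble's (array_size, neurons_num) layout:
t_split = np.zeros((post_array_size, post_neurons_num, pre_dimensions))

# Flat convention: one standard 2D transformation matrix.
t_flat = np.zeros((post_array_size * post_neurons_num, pre_dimensions))

# pre=neuron, post=neuron in the flat convention:
t_nn = np.zeros((post_array_size * post_neurons_num,
                 pre_array_size * pre_neurons_num))

# The two layouts carry the same numbers; reshape converts between them.
assert t_split.reshape(t_flat.shape).shape == t_flat.shape
```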

studywolf commented 11 years ago

that makes sense to me. the code is all implemented such that ensembles are (array_size, neurons_num), which is where the (array_size, neurons_num, ...) transform specification comes from, but you're right that it's more intuitive to have total_neurons_num. The former will be easier, and I don't think it introduces any confusion or complexity, so I can start with that and then extend it to handle (i.e. reshape the transform for) the (post.array_size * post.neurons_num, ...) case.
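As a sketch of the reshape being described (a hypothetical helper, not actual nef-py code):

```python
import numpy as np

def split_transform(transform, array_size, neurons_num):
    """Accept a transform whose leading axis is either flat
    (array_size * neurons_num, ...) or already split into
    (array_size, neurons_num, ...), and return the split form."""
    transform = np.asarray(transform)
    if transform.shape[0] == array_size * neurons_num:
        return transform.reshape(
            (array_size, neurons_num) + transform.shape[1:])
    assert transform.shape[:2] == (array_size, neurons_num)
    return transform
```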