flatironinstitute / nemos-workshop-feb-2024

Materials for nemos workshop, Feb 2024
https://nemos-workshop-feb-2024.readthedocs.io/en/latest/
MIT License

Post-workshop notes #24

Open billbrod opened 7 months ago

billbrod commented 7 months ago

- Logistics: Billy, Jessica/Matthew
- Theory presentation: Edoardo
- Pynapple: Edoardo
- Current injection: Edoardo
- Head direction: Billy
- V1: Billy
- Miscellaneous

EricThomson commented 7 months ago

Side note on white noise as a stimulus: Pillow had a really good explanation (probably following EJ's 2001 paper) -- if you use the Gaussian GLM, the analytical MLE solution is the inverse stimulus covariance matrix times the STA. With white noise the covariance term drops out (it's essentially the identity), so the STA is the answer. 😄 But as Edoardo pointed out yesterday, this falls apart with the LNP model, which has no analytical solution, so we're solving numerically -- which is what nemos does. Maybe white noise isn't as important with the LNP model but is still used for historical reasons (I'm speculating and may well be wrong: white noise has nice mathematical properties outside of just GLM contexts).
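
For what it's worth, here's a tiny numpy sketch of that point (my own illustration, not from the workshop materials): for the linear-Gaussian model the MLE is the inverse stimulus covariance times the STA, and with a white-noise stimulus the covariance is essentially the identity, so the STA alone recovers the filter.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_dims = 100_000, 15

# White-noise stimulus and a random "true" filter.
X = rng.standard_normal((n_samples, n_dims))
w_true = rng.standard_normal(n_dims)
y = X @ w_true + 0.5 * rng.standard_normal(n_samples)  # linear-Gaussian response

# (Spike-)triggered average: response-weighted mean of the stimulus.
sta = X.T @ y / n_samples

# Analytical MLE: inverse stimulus covariance times the STA.
stim_cov = X.T @ X / n_samples
w_mle = np.linalg.solve(stim_cov, sta)

# With white noise stim_cov ~ identity, so the STA already matches the MLE (and the true filter).
print(np.max(np.abs(w_mle - sta)))     # small
print(np.max(np.abs(w_mle - w_true)))  # small
```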

EricThomson commented 7 months ago

Congratulations on a great workshop! ⚡ Thanks for letting me participate.

As a newcomer to the world of GLMs, I have some ideas about how to introduce/represent GLMs to a lay audience, partly because so many things are in my head from Neuromatch and from watching Pillow's intro Cosyne lecture.

I think finding a "canonical circuit" GLM model diagram that you use consistently, with the same jargon enforced throughout, and looping back to it throughout the day/repo/demos would be really helpful. This could provide continuity and cognitive latches for people: "Here we are going to cover this region of the canonical GLM model diagram, and this parameter set from this region will now be fit," and so on.

Even simple things like having consistent jargon for the nonlinearities would help. Sometimes it was the static nonlinearity (the exp), sometimes it seemed to be something else (the basis functions); having names for these different regions of the model and doggedly sticking to them would be helpful. Obviously some flexibility that reflects the flexibility in the field is fine (and acknowledging this is good, e.g., "here's our name for this, here are some other names used in the literature").

Sam's talk was great, and provided what I think is a good initial canonical GLM "circuit diagram":

[image: glm_model diagram]

Basically you have an inner product of input and some weights inside the first box. Most people understand this from their study of basic neural networks, I think 😄 Or at least it is easy to explain.

That's your core GLM model block, and you already have it.
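
If it helps to make that core block concrete, here's a minimal simulation sketch (my own toy numpy, not nemos code): weighted sum of the inputs, exponential static nonlinearity, Poisson spike counts.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_features = 1_000, 5

x = rng.standard_normal((n_bins, n_features))   # stimulus features per time bin
w = 0.3 * rng.standard_normal(n_features)       # filter weights
b = -1.0                                        # baseline log-rate

rate = np.exp(x @ w + b)          # linear combination -> static nonlinearity (exp)
spikes = rng.poisson(rate)        # Poisson spike counts per bin
```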

And then add wrinkles to it.

First, add the spike history filter. Just add a loop from the spike output right back into the model itself (wherever it goes). They get that, to capture the data accurately, you can't ignore the history of spikes (because of refractory periods, etc.).

Second, add the coupling filter. Again, physiologists will easily understand that you have lots of neurons, and that to understand and model the data accurately you need to capture neuron-to-neuron interaction effects.
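
To make both wrinkles concrete, here's a rough toy simulation (again my own illustrative numpy with made-up filter shapes, not the workshop code): the log-rate of each neuron gets extra terms from its own recent spike history and from the other neuron's history.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, n_hist = 2_000, 10

# Short negative self-history filter (refractoriness) and a small excitatory coupling filter.
h_self = -1.5 * np.exp(-np.arange(n_hist) / 2.0)
h_coupling = 0.5 * np.exp(-np.arange(n_hist) / 3.0)

drive = 0.5 * np.sin(np.linspace(0, 20, n_bins))   # stand-in for the stimulus term x @ w
spikes = np.zeros((n_bins, 2), dtype=int)

for t in range(n_bins):
    past = spikes[max(0, t - n_hist):t][::-1]       # most recent bin first
    for n in range(2):
        hist_term = past[:, n] @ h_self[:len(past)]          # own spike history
        coup_term = past[:, 1 - n] @ h_coupling[:len(past)]  # the other neuron's history
        rate = np.exp(drive[t] + hist_term + coup_term - 1.0)
        spikes[t, n] = rng.poisson(rate)
```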

So by then you have a picture like this (from Greg Field 2020, Nature Communications):

[image: field_2020_glm_cartoon]

This is something they should be able to understand.

Finally, add the basis functions and why you might need them (this is a complex topic that is inherently mathematical and harder to understand). I'll add a separate post about that.

And, as Eero said, split up the notebooks and do one thing at a time. Now we are going to fit just the canonical GLM. Now we are adding a spike history filter. Now we are adding basis functions. Now we are adding a coupling filter (you get the idea). And revisit your canonical diagram, with the canonical jargon, at each step of the way, clearly specifying which parameter set from the diagram is being fit.

One thing I don't love about the above diagram is that there are just arrows feeding the coupling and spike history filters into the model. Is that addition? Anyway, you get the gist.

Anyway, sorry this is so long and sort of rambling; please let me know if it is unclear. As I said, the talks were great and super helpful: I'm just brainstorming here about how things might be dialed in a bit more for a neurophys audience. I know it is incredibly hard -- I struggle with presenting calcium imaging models to experimentalists and am constantly changing my approach. Again, don't take this as negative criticism; it was an awesome workshop, I'm just brainstorming!!!

Please let me know if you want me to look over any materials/diagrams before the next workshop. Now that it is over, I feel I can start to help and contribute some: I purposely held off on looking at nemos. 😄

EricThomson commented 7 months ago

Some thoughts about the basis function material. I found this a bit challenging, so I will try to explain why, and also make a couple of suggestions.

My understanding is that with the introduction of basis functions you are initially tweaking the first box (the linear combination box): changing the inner product from <w, x> to a sum over basis functions, Σ_i w_i b_i(x) (where the b_i are basis functions; I'd write it out properly in LaTeX if I weren't lazy 🤣). So in the initial case you are basically just tweaking the canonical GLM circuit, right? You could introduce it as part of a tweak of the core GLM "circuit" model.

IOW you are still doing an inner product, and things are still linear in the w parameters, but you can warp the stimulus x with the nonlinearities in the b() basis functions (is this right? Or are there limits on the b() functions -- do they need to be invertible, have finite energy, or something? I know you said they aren't orthogonal, so it seems they are not that restricted). The point is that the basis transform is part of the convex magic, right? Intuitively I'd want to mess with the link function and leave the inner product alone, but you can actually do cool things with the basis functions -- transform the stimulus space -- and the whole problem still remains convex, which gives way more power than if you just tweaked the link function while leaving the inner product alone.
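
Here's a toy numpy version of what I mean (my reading of it, so take with salt): the stimulus is pushed through fixed nonlinear basis functions, but the predictor is still an inner product and still linear in the weights.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, size=500)                 # 1D stimulus value per time bin

# Fixed nonlinear basis: Gaussian bumps tiling the stimulus range (purely illustrative;
# nemos provides its own basis objects, e.g. raised cosines / splines).
centers = np.linspace(0, 1, 8)
B = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * 0.1 ** 2))   # shape (n_bins, n_basis)

w = 0.5 * rng.standard_normal(8)
rate = np.exp(B @ w)        # still an inner product, still linear in w, still convex to fit
```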

Assuming this is all correct, the basis can then be spatial, temporal, or spatiotemporal: it just depends on the input vector and the basis functions you are using. These are things they don't cover in the other introductory tutorials online, and people coming in may need to be walked through them gently (also, if the basis is spatiotemporal, does it have to be separable? I think Billy suggested yes? Not sure you would need to say this, but maybe mention it as an aside or parenthetical "for the computationalists" or whatever).
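
My mental picture of the separable spatiotemporal case (which may be off) is just outer products of a spatial basis with a temporal basis, something like:

```python
import numpy as np

# Purely illustrative: each "separable" spatiotemporal basis element is the outer product
# of one spatial basis function and one temporal basis function.
space = np.linspace(-1, 1, 32)
time = np.arange(20)

spatial_bumps = np.exp(-(space[:, None] - np.linspace(-1, 1, 5)[None, :]) ** 2 / 0.1)  # (32, 5)
temporal_bumps = np.exp(-(time[:, None] - np.array([2, 6, 12])[None, :]) ** 2 / 8.0)   # (20, 3)

# 5 x 3 = 15 spatiotemporal basis elements, each of shape (space, time).
st_basis = np.einsum("si,tj->ijst", spatial_bumps, temporal_bumps).reshape(15, 32, 20)
```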

At any rate, the basis function stuff is more complex than the other topics, so I would suggest adding it last, and adding it a bit more slowly and incrementally.

Also, correct me if I'm wrong, but you said you also have basis functions for the spike history filters as well as the coupling filters? I would treat these as separate topics and, again, explain: can these basis functions be selected separately from the basis functions used in the initial linear combination box (we really need a name for that initial linear combination box)? And how does that work exactly?

I assume it is roughly the same, but it should be spelled out more explicitly, because a linear combination of an input is pretty easy for people to understand, but a linear combination of a Poisson spiking output, feeding back onto the model itself? You have just moved up a level of abstraction, and I would suggest slowing way down and explaining, unpacking, and going into more detail about what that means, how much control users have in selecting the basis, and what properties the basis functions have in that case compared to the input case (with spiking output it will be restricted to temporal bases, because spikes).
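
My (possibly wrong) mechanical picture of that, sketched in numpy: convolve the lagged spike counts with each temporal basis function, and those convolved traces become extra columns of the design matrix, so the history term is again just a linear combination with its own weights.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bins, window = 1_000, 30

spikes = rng.poisson(0.2, size=n_bins)   # the neuron's own (or a coupled neuron's) spike counts

# A few temporal basis functions over the history window (illustrative Gaussian bumps).
lags = np.arange(window)
centers = np.array([2, 8, 20])
hist_basis = np.exp(-(lags[:, None] - centers[None, :]) ** 2 / (2 * 4.0 ** 2))  # (window, n_basis)

# Convolve past spikes with each basis function -> one design-matrix column per basis function.
X_hist = np.column_stack([
    np.convolve(spikes, b, mode="full")[:n_bins] for b in hist_basis.T
])
X_hist = np.vstack([np.zeros((1, hist_basis.shape[1])), X_hist[:-1]])  # bin t only sees bins < t

w_hist = 0.1 * rng.standard_normal(3)
history_term = X_hist @ w_hist    # enters the log-rate just like the stimulus term
```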

[Annoying aside: when basis functions are introduced, consider showing a little gallery of the basis functions available in nemos, with a picture of a canonical set of them.]
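
Something along these lines, except using the actual nemos basis objects instead of this hand-rolled raised-cosine stand-in:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hand-rolled log-spaced raised-cosine basis, just to illustrate what a "gallery" figure could show.
n_funcs, n_pts = 8, 200
t = np.linspace(0, 1, n_pts)
log_t = np.log(t + 0.01)
centers = np.linspace(log_t.min(), log_t.max(), n_funcs)
width = centers[1] - centers[0]

fig, ax = plt.subplots(figsize=(6, 3))
for c in centers:
    arg = np.clip((log_t - c) * np.pi / (2 * width), -np.pi, np.pi)
    ax.plot(t, 0.5 * (1 + np.cos(arg)))
ax.set(xlabel="time (a.u.)", ylabel="basis value", title="example raised-cosine basis")
plt.show()
```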

I assume whatever you spell out w.r.t. the autaptic (self-history) filter will apply to the coupling filters between GLMs. So once you have really spelled out the details of the self-coupling spike history filter and its basis functions, you will be able to generalize fairly straightforwardly to the between-neuron case. But I would suggest not doing that quickly either; this is a lot of information to take in.

As before, I'm brainstorming, just throwing out my thoughts. The workshop was great; I'm a first-time GLM person excited to start looking at spiking data again at some point! Great job -- I hope these comments make sense, and please let me know if anything is unclear!

EricThomson commented 7 months ago

Minor note: I recommend scheduling a group photo for every workshop. 😄