flatironinstitute / nemos-workshop-feb-2024

Materials for nemos workshop, Feb 2024
https://nemos-workshop-feb-2024.readthedocs.io/en/latest/
MIT License

Overall plan #1

Open billbrod opened 10 months ago

billbrod commented 10 months ago

Overall plan for contents we'll include

Concepts to cover

Introduction

We should have a page in our docs with the introduction: the format of the workshop, who we are, what we want everyone to get out of it, etc. (and remember to present it, too).

Tutorials

Conceptual intro

Sketch:

Current injection

Single cell current injection data set from Allen Institute

  1. No basis, just self-excitation
  2. Basis for self-excitation (introduce basis convolution)
  3. Current injection input (with basis)
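
The self-excitation steps above can be sketched in plain numpy (nemos provides its own basis objects and convolution utilities; the linearly spaced raised-cosine basis and helper names here are stand-ins for illustration):

```python
import numpy as np

def raised_cosine_basis(n_basis, window_size):
    # Evenly spaced raised-cosine bumps over the history window
    # (nemos also offers log-spaced versions; this linear one is a stand-in).
    centers = np.linspace(0, window_size - 1, n_basis)
    width = (window_size - 1) / (n_basis - 1) * 2
    t = np.arange(window_size)[:, None]
    arg = np.clip((t - centers) * np.pi / width, -np.pi, np.pi)
    return 0.5 * (1 + np.cos(arg))  # shape (window_size, n_basis)

def convolve_basis(spikes, basis):
    # Causal convolution of a spike train with each basis function,
    # producing the design-matrix columns for self-excitation.
    window_size, n_basis = basis.shape
    X = np.stack(
        [np.convolve(spikes, basis[:, i])[: len(spikes)] for i in range(n_basis)],
        axis=1,
    )
    # Shift by one bin so the filters only see strictly past spikes.
    return np.vstack([np.zeros((1, n_basis)), X[:-1]])

rng = np.random.default_rng(0)
spikes = rng.poisson(0.2, size=1000).astype(float)
B = raised_cosine_basis(n_basis=5, window_size=100)
X = convolve_basis(spikes, B)  # one column per basis function
```

Convolving with a small basis instead of using raw spike-history lags keeps the number of coefficients low while still allowing smooth history filters.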

Head direction

Head direction-tuned cells in the thalamus, all recorded with electrode arrays

  1. Get initial tuning curves from pynapple
  2. Coupling only: recover the ring connectivity between cells, which shows they're tuned for head direction
  3. Ridge regression; discuss interpretation
  4. Extra: fit the tuning explicitly with the GLM; adding direction shouldn't change the tuning much.
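
Step 1 amounts to an occupancy-normalized histogram, which pynapple's `compute_1d_tuning_curves` handles for us; a minimal numpy sketch of the same computation, on simulated data with an assumed preferred direction near pi:

```python
import numpy as np

def tuning_curve_1d(spike_counts, angle, n_bins=30, dt=0.01):
    # Occupancy-normalized firing rate vs. head direction: total spikes in
    # each angular bin divided by the time spent in that bin.
    bins = np.linspace(0, 2 * np.pi, n_bins + 1)
    which = np.digitize(angle, bins) - 1
    counts = np.bincount(which, weights=spike_counts, minlength=n_bins)
    occupancy = np.bincount(which, minlength=n_bins) * dt
    return np.where(occupancy > 0, counts / occupancy, np.nan)

rng = np.random.default_rng(1)
angle = rng.uniform(0, 2 * np.pi, size=10000)      # head direction per time bin
rate = np.exp(2 * np.cos(angle - np.pi) - 1)       # cell tuned to pi (assumed)
spike_counts = rng.poisson(rate * 0.01)            # Poisson spiking, dt = 10 ms
tc = tuning_curve_1d(spike_counts, angle)          # firing rate (Hz) per bin
```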

Spatial position

Grid cells in MEC

  1. Initial tuning curves with pynapple
  2. Coupling only: show that no position tuning is recovered
  3. Introduce pytrees
  4. Add position input (introduce basis evaluation); recover tuning
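
nemos accepts design matrices structured as JAX pytrees (nested containers of arrays), so features can stay grouped by name; a dependency-free sketch of the bookkeeping idea, using a plain dict of feature blocks and hypothetical helper names:

```python
import numpy as np

# A "pytree" here is just a nested container of arrays; the GLM design
# matrix becomes a dict of named feature blocks instead of one wide array.
rng = np.random.default_rng(2)
features = {
    "position": rng.normal(size=(500, 10)),  # e.g. basis-evaluated x/y position
    "coupling": rng.normal(size=(500, 8)),   # e.g. filtered spike history
}

def flatten_features(tree):
    # Concatenate the leaves along the feature axis, recording the slice for
    # each block so fitted coefficients can be mapped back to named groups.
    names = sorted(tree)
    X = np.concatenate([tree[k] for k in names], axis=1)
    slices, start = {}, 0
    for k in names:
        slices[k] = slice(start, start + tree[k].shape[1])
        start += tree[k].shape[1]
    return X, slices

X, slices = flatten_features(features)
```

Keeping features grouped like this makes it easy to, say, inspect only the coupling coefficients or drop one input block when comparing models.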

Head direction + spatial position

Same dataset as above?

  1. Use all of head direction, spatial position, and coupling as inputs to GLM (using pytrees)
  2. Discuss importance of cross-validation and regularizers here
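
A minimal sketch of the model-selection logic from step 2, using closed-form ridge regression on simulated Gaussian data rather than the Poisson GLM (in the tutorial this would be nemos's GLM with a ridge regularizer, cross-validated e.g. via scikit-learn):

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:3] = [1.0, -2.0, 0.5]                  # only a few features matter
y = X @ w_true + rng.normal(scale=2.0, size=200)

# Hold out a validation split and pick the penalty with the lowest error;
# evaluating on held-out data is what keeps the penalty choice honest.
X_tr, X_val, y_tr, y_val = X[:150], X[150:], y[:150], y[150:]
lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
errs = [np.mean((y_val - X_val @ ridge_fit(X_tr, y_tr, lam)) ** 2)
        for lam in lambdas]
best = lambdas[int(np.argmin(errs))]
```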

V1

Macaque V1 in response to white noise

  1. Use the spike-triggered average (STA) to get the receptive field
  2. Use the receptive field to build a more complex model matrix (the 1d filtered signal as input); a 2-part procedure
  3. Show which cells are simple cells (based on the STA output?)
  4. Does coupling sharpen the tuning curves? (This is what Pillow et al. 2008 showed)
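
Step 1 can be sketched as follows, on simulated white noise with an assumed ground-truth filter (for Gaussian white-noise input through a pointwise nonlinearity, the STA is proportional to the linear filter):

```python
import numpy as np

def spike_triggered_average(stim, spikes, window):
    # Average the stimulus frames preceding each spike; with white-noise
    # input this estimates the linear receptive field.
    sta = np.zeros((window,) + stim.shape[1:])
    n = 0
    for t in np.nonzero(spikes)[0]:
        if t >= window:
            sta += spikes[t] * stim[t - window:t]
            n += spikes[t]
    return sta / max(n, 1)

rng = np.random.default_rng(5)
stim = rng.normal(size=(5000, 16))          # white-noise frames, 16 "pixels"
# Assumed ground-truth spatiotemporal filter: exponential decay in time,
# sinusoidal profile in space.
filt = np.exp(-np.arange(10))[:, None] * np.sin(np.linspace(0, np.pi, 16))
drive = np.array([np.sum(filt * stim[t - 10:t]) for t in range(10, 5000)])
spikes = np.zeros(5000)
spikes[10:] = rng.poisson(np.exp(0.2 * drive - 1.0))  # exponential nonlinearity
sta = spike_triggered_average(stim, spikes, window=10)
```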

Auditory

Gerbil primary auditory cortex, passive listening to pure tones, noise, and animal vocalizations

  1. Also a 2-part procedure: first preprocess the input (split it into frequency channels / a cochleagram), then fit the GLM
  2. No basis on the frequency input
  3. Standard analysis: get tuning properties from the pure tones, i.e., the spectro-temporal receptive field (STRF): response as a function of frequency (Hz) and delay (ms)
  4. Could then predict spiking in response to the more complex stimuli. See whether coupling improves performance? Lateral connectivity?
  5. The inter-trial interval is several seconds
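
A rough sketch of the preprocessing in step 1, using a short-time FFT as a crude stand-in for a cochleagram (a real cochleagram would use a gammatone filterbank with log-spaced channels); each resulting frequency channel, with delays, then becomes a GLM input for the STRF:

```python
import numpy as np

def simple_cochleagram(sound, fs, n_fft=256, hop=128):
    # Crude spectrogram stand-in for a cochleagram: short-time FFT power
    # in each frequency channel, Hann-windowed, hop-size stride.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(sound) - n_fft) // hop
    frames = np.stack(
        [sound[i * hop:i * hop + n_fft] * window for i in range(n_frames)]
    )
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n_fft, d=1 / fs)
    return power, freqs  # power: (n_frames, n_fft // 2 + 1)

fs = 16000
t = np.arange(fs) / fs
sound = np.sin(2 * np.pi * 2000 * t)   # 1 s pure tone at 2 kHz
power, freqs = simple_cochleagram(sound, fs)
peak_channel = freqs[np.argmax(power.mean(axis=0))]  # should sit near 2 kHz
```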

Todo

billbrod commented 10 months ago

For binder:

gviejo commented 10 months ago

Data for head-direction : https://pynapple-org.github.io/pynapple/generated/gallery/tutorial_HD_dataset/


import numpy as np
import pynapple as nap
import requests, math, os
import tqdm

path = "Mouse32-140822.nwb"
if path not in os.listdir("."):
    # Stream the NWB file from OSF in 1 MB chunks, with a progress bar.
    r = requests.get("https://osf.io/jb2gd/download", stream=True)
    block_size = 1024 * 1024
    with open(path, "wb") as f:
        total = math.ceil(int(r.headers.get("content-length", 0)) / block_size)
        for data in tqdm.tqdm(r.iter_content(block_size), unit="MB",
                              unit_scale=True, total=total):
            f.write(data)

data = nap.load_file(path)  # Load the NWB file for this dataset
billbrod commented 10 months ago

Also, we should talk about GPUs.