neurogears / vestibular-vr

Closed-loop VR setup for Rancz Lab

Way Forward #13

Open ederancz opened 1 year ago

ederancz commented 1 year ago

As discussed on 9 December, here is a list of the remaining current-phase and future features and needs.

Current phase:

Future plans:

glopesdev commented 12 months ago

@ederancz We are working on finalizing the roadmap for the next phase and we came up with a few clarification questions we wanted to double-check with you:

Visualization and insertion workflow

For the visualization and insertion workflow, we were wondering if it is enough for you to get real-time feedback about the position of the motor, or if you also need some kind of trigonometric coordinate frame transformation to determine the position of the probe in cartesian coordinates relative to a landmark?

Benchmarking video wall

Can you share with us again the exact model of the video wall controller you have? Also, just to confirm, you will be using 2 ThorLabs photodiodes connected to the ONIX analog inputs, correct?

Stimulus bank

If you can give us a minimal list of which stimuli you would need right now to start experiments, that would be very helpful.

For example, do you have anything in mind that is outside the built-in BonVision stimulus set? There is mention of a random dot stereogram (or is it kinetogram?) and also motion-illusion inducing patterns. If these are important to have right now it would be great to know what they are exactly (images or videos would help a lot).

Of course in the future we will always be able to add more stimulus types.

Task control logic

Aside from the stimulus bank we are assuming you will need some basic flow of task control, e.g. ITI > Stimulus Presentation > Reward Delivery, and maybe also manipulations like catch trials, omissions, probabilistic shuffled sets, etc. Do you have a schematic or description you could share with us of what a first protocol might look like?

Also there wasn't an explicit mention of closing the loop between stimulus presentation and mouse movement, but we are assuming you need some kind of real-time modulation of the stimulus? For these more basic 2D stimulus types it is sometimes not obvious what the effect of rotations should be for example, so if you could let us know what you had in mind for this that would be wonderful.

Load cell and lick sensors

How would you like us to proceed with the lick sensors? Do you already have some hardware available that we could try to use for the task workflow, or do you want us to allocate some time to look into this?

Re. load-cells do you need them for this phase or can we leave them aside for now?

ederancz commented 11 months ago

Hi @glopesdev. Apologies for the slow reply. I was away, and with the arrival of our animal license, experiments were starting to occupy all of my time.

Visualization and insertion workflow

For the visualization and insertion workflow, we were wondering if it is enough for you to get real-time feedback about the position of the motor, or if you also need some kind of trigonometric coordinate frame transformation to determine the position of the probe in cartesian coordinates relative to a landmark?

No transformation is needed.

Benchmarking video wall

Can you share with us again the exact model of the video wall controller you have? Also, just to confirm, you will be using 2 ThorLabs photodiodes connected to the ONIX analog inputs, correct?

It is a Matrox QuadHead2Go Q185. I will be ready to test early next week; I just need to hook up the photodiodes to the ONIX system.

Stimulus bank

If you can give us a minimal list of which stimuli you would need right now to start experiments, that would be very helpful.

For example, do you have anything in mind that is outside the built-in BonVision stimulus set? There is mention of a random dot stereogram (or is it kinetogram?) and also motion-illusion inducing patterns. If these are important to have right now it would be great to know what they are exactly (images or videos would help a lot).

Of course in the future we will always be able to add more stimulus types.

Good question; we need to think this through. At this point, the important stimuli would be:

  1. Full-field flicker (black to white, at a user-defined frequency)
  2. Screen warping to control for visual field distortion due to flat-panel monitors
  3. Drifting gratings, driven either by the experimenter or by the optical sensor
  4. Drifting random dots (no stereograms, but across the 4 screens) with pseudorandom dot size (not sure about the range; perhaps parameters to set the smallest and largest dots, with a uniform distribution in between). The coherence of motion should also be modifiable (say 10%, 50%, 90%, 100%). Again, driven by the experimenter or by the optical sensor. A sketch of one possible parameterisation follows below.

The rest would come later.
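
To make item 4 concrete, here is a minimal sketch of how the dot field could be parameterised, assuming dot diameters are drawn uniformly between a user-set minimum and maximum and that a coherence fraction of the dots share a common drift direction while the rest move in random directions (all names and default values below are illustrative, not a spec):

```python
import numpy as np

def make_dot_field(n_dots=200, min_diam=0.5, max_diam=3.0,
                   coherence=0.5, drift_direction=0.0, rng=None):
    """Illustrative random-dot parameters (placeholder values, not a spec).

    n_dots             number of dots spread across the 4 screens
    min_diam, max_diam dot diameter range; sizes drawn from a uniform distribution
    coherence          fraction of dots moving in the common drift direction
    drift_direction    common motion direction, in radians
    """
    rng = np.random.default_rng() if rng is None else rng

    # Pseudorandom dot sizes, uniform between the smallest and largest diameter.
    diameters = rng.uniform(min_diam, max_diam, size=n_dots)

    # Coherent dots share the drift direction; the remainder move randomly.
    n_coherent = int(round(coherence * n_dots))
    directions = np.full(n_dots, drift_direction)
    directions[n_coherent:] = rng.uniform(0, 2 * np.pi, size=n_dots - n_coherent)

    # Random initial positions in normalised screen coordinates [0, 1).
    positions = rng.uniform(0, 1, size=(n_dots, 2))
    return positions, diameters, directions

# Example: 50% coherence, rightward drift.
positions, diameters, directions = make_dot_field(coherence=0.5, drift_direction=0.0)
```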

Task control logic

Aside from the stimulus bank we are assuming you will need some basic flow of task control, e.g. ITI > Stimulus Presentation > Reward Delivery, and maybe also manipulations like catch trials, omissions, probabilistic shuffled sets, etc. Do you have a schematic or description you could share with us of what a first protocol might look like?

At this point we are focusing on innate behaviours; learned tasks will come later (~in a year or so). Thus the task control will be more like a pre-determined sequence. I need to think a bit more about this.
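
Roughly, something along these lines, as a minimal sketch of the flow only (the stimulus names, durations and inter-trial intervals are placeholders, not final parameters):

```python
import time

# Placeholder protocol: each entry is one step of a pre-determined sequence.
protocol = [
    {"stimulus": "full_field_flicker", "duration_s": 5.0, "iti_s": 10.0},
    {"stimulus": "drifting_grating",   "duration_s": 5.0, "iti_s": 10.0},
    {"stimulus": "random_dots",        "duration_s": 5.0, "iti_s": 10.0},
]

def present(stimulus, duration_s):
    # Stand-in for whatever actually drives the stimulus (e.g. a BonVision workflow).
    print(f"presenting {stimulus} for {duration_s} s")
    time.sleep(duration_s)

for trial in protocol:
    present(trial["stimulus"], trial["duration_s"])
    time.sleep(trial["iti_s"])  # inter-trial interval; no reward or response stage yet
```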

Also there wasn't an explicit mention of closing the loop between stimulus presentation and mouse movement, but we are assuming you need some kind of real-time modulation of the stimulus? For these more basic 2D stimulus types it is sometimes not obvious what the effect of rotations should be for example, so if you could let us know what you had in mind for this that would be wonderful.

The general idea is to decouple the visual stimulus from the motor rotation (either experimenter- or animal-driven) to generate sensory or sensory-motor prediction errors, respectively. I need to think about this a bit more, together with the previous point.
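
As a rough first pass, the coupling could be as simple as a gain between the measured rotation and the stimulus motion: a gain of 1 closes the loop normally, 0 keeps the image stationary, and gains different from 1 (or negative) produce the mismatches that should generate prediction errors. A minimal sketch, with illustrative names and units only:

```python
def stimulus_velocity(platform_velocity_dps, gain=1.0, offset_dps=0.0):
    """Map measured rotation to stimulus rotation (illustrative sketch only).

    platform_velocity_dps  measured angular velocity (deg/s), from the motor
                           or from the optical sensor on the ball
    gain                   1.0 = normally coupled, 0.0 = stationary image,
                           <0 = image counter-rotates, !=1 = gain mismatch
    offset_dps             experimenter-driven component added on top
    """
    return gain * platform_velocity_dps + offset_dps

# Examples of decoupling conditions:
coupled    = stimulus_velocity(30.0, gain=1.0)   # image follows the rotation
stationary = stimulus_velocity(30.0, gain=0.0)   # image stays still
reverse    = stimulus_velocity(30.0, gain=-1.0)  # image rotates the other way
```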

Load cell and lick sensors

How would you like us to proceed with the lick sensors? Do you already have some hardware available that we could try to use for the task workflow, or do you want us to allocate some time to look into this?

Re. load-cells do you need them for this phase or can we leave them aside for now?

We are not going to do any learned (e.g. go/no-go) tasks, so the lick sensor is low priority. The load cell, however, is rather urgent to start on, as I assume it will take some time, and it would be important for us to compare ball-rotation-driven vs. head-movement-attempt-driven behaviours. The implementation I was thinking of is https://www.cell.com/neuron/pdf/S0896-6273(19)30889-X.pdf. The Methods section contains (scarce) details of the implementation, but I can ask Jacob for a BOM. This info may be best in a dedicated issue.

RoboDoig commented 11 months ago

@ederancz just wanted to clarify a few points...

1) Where the visual stimuli are driven by the optical sensor, I assume this means the sensor on the treadmill, and therefore the stimuli are in closed loop with the animal's movement. What would this movement drive in the stimuli? The speed of drifting / motion coherence?

2) For the load cell - in Jakob's paper they appear to use wireless radio communication to transmit the signal from the sensors. An alternative could be integrating the load cell into H1, with a wired connection, along with the other sensors on the rotary platform. It would be good to know what we need to develop for this: e.g. if we use the same approach as the paper and all the same parts are available, then we just need to receive the load sensor signal in Bonsai. For the latter approach we may need to do some H1 redesign and firmware updates, unless we can just get a straight analog signal from the sensor.

Goncalo and I have been finalising the quote for this today and were wondering if it makes sense to split it into two, since there are some things still to decide on (e.g. visual stimuli). For example, we could produce a quote for the urgent parts first (NewScale, flow sensor, load cell) and then fold the other tasks into a second quote as things become more definite.

ederancz commented 11 months ago

Hi @RoboDoig and @glopesdev.

  1. The optical sensor on the styrofoam ball will drive the speed of the drifting gratings / motion clouds. The concept is similar to Georg Keller's experiments, where the animal's running creates visual flow via a virtual corridor and prediction errors are introduced by decoupling the visual flow from the animal's movement. There will be 3 conditions: 1. the image is stationary and the animal rotates (driven by the experimenter or through the animal's movement via the optical sensor); 2. the animal is stationary and the image rotates (driven by the experimenter or through the animal's movement via the optical sensor); 3. the animal and the image both rotate, with experimenter-set dynamics or a transformed version of the animal's movement (e.g. the image rotates faster/slower than, or in the opposite direction to, the animal).

  2. As we have the slip ring, I don't think we need wireless. Indeed, the H1 would be the best conduit. I think the first thing would be to get our hands on a load sensor and see what sort of output it gives when integrated into the head holder. This integration needs some prototyping on our part (btw, do you have a drawing of the sensor by any chance?). We want to use the load cell in two ways: 1. quantify attempted head movements; 2. drive the motor rotation with the load cell instead of the optical sensor (a rough sketch of that mapping is below).
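
For the second use, a minimal sketch of the force-to-velocity mapping, assuming we end up with a calibrated analog force signal available in Bonsai (the dead-band, gain and limit values are placeholders):

```python
def motor_command_from_load(force_mN, dead_band_mN=5.0,
                            gain_dps_per_mN=2.0, max_speed_dps=180.0):
    """Map a lateral load-cell reading to a motor velocity command (placeholder).

    force_mN         signed lateral force from the head-fixation load cell
    dead_band_mN     ignore small forces so resting posture does not move the motor
    gain_dps_per_mN  scaling from force to angular velocity (deg/s per mN)
    max_speed_dps    clamp to keep the command within the motor's limits
    """
    if abs(force_mN) < dead_band_mN:
        return 0.0
    command = gain_dps_per_mN * force_mN
    return max(-max_speed_dps, min(max_speed_dps, command))
```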

I am fine with 2 quotes if that makes sense to you. However, admin here can be diabolical and doing it twice means twice the pain. Please add the ONIX recording and probe insertion workflows to the urgent one as well. We will not be able to place orders before the 4th of August.

@glopesdev, we are set up with the photodiodes to test the Matrox box; the ONIX breakout is receiving the analogue signals and I can read them in Bonsai. We only have two, so we will have to move them around a bit and test the screens in pairs.
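
For each pair, the comparison I have in mind is roughly the following (a sketch only, assuming both photodiode channels share the ONIX sample clock, that each trace contains the same flash, and that a simple voltage threshold marks the flash onset; the threshold value is arbitrary):

```python
import numpy as np

def flash_onset_lag(chan_a, chan_b, sample_rate_hz, threshold=0.5):
    """Onset lag between two photodiode channels for the same flash (sketch only).

    chan_a, chan_b  1-D arrays of photodiode voltages from the ONIX analog inputs,
                    recorded on the same sample clock and containing the same flash
    threshold       voltage level taken as "the screen went white"
    Returns the onset difference (channel B minus channel A) in milliseconds.
    """
    onset_a = np.argmax(chan_a > threshold)  # index of first sample above threshold
    onset_b = np.argmax(chan_b > threshold)
    return (onset_b - onset_a) / sample_rate_hz * 1000.0
```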

To reiterate, an insertion / recording workflow would be most urgent (in parallel with the integration of the new optical sensors), so we can start recording and troubleshooting from there.