neurogears / vestibular-vr

Closed-loop VR setup for Rancz Lab

Workflow specifications #49

Closed RoboDoig closed 2 months ago

RoboDoig commented 3 months ago

@EleonoraAmbrad @ederancz Please use this issue to upload detailed specs for final experiment workflows.

EleonoraAmbrad commented 3 months ago

When using the workflow "TrialLogicTest" I ran into two behaviors I did not understand:

* Does the workflow stop after a certain number of halts has been reached, no matter the block length? This happened when setting the halt probability to >0. Also, when this happens, the subsequent commands are ignored (i.e. the blocks do not repeat even if specified in the JSON file).

* I am not sure the block gain modifier in the JSON file is working; the global gain seems to be the only one taken into account?

RoboDoig commented 2 months ago

> When using the workflow "TrialLogicTest" I ran into two behaviors I did not understand:
>
> * Does the workflow stop after a certain number of halts has been reached, no matter the block length? This happened when setting the halt probability to >0. Also, when this happens, the subsequent commands are ignored (i.e. the blocks do not repeat even if specified in the JSON file).
>
> * I am not sure the block gain modifier in the JSON file is working; the global gain seems to be the only one taken into account?

The workflow should not stop after a certain number of halts; if I recall correctly, the only thing that should stop the block is the block-length timer. Is it always stopping after a certain number of halts? The block gain modifier is definitely implemented; it should multiply the global gain.
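For what it's worth, the gain relationship described here can be sketched as follows (a minimal sketch; `global_gain` and `block_gain_modifier` are hypothetical names, not the actual SessionSettings keys):

```python
# Hypothetical sketch of the gain multiplication: the block gain
# modifier multiplies the global gain. Names are illustrative only.

def effective_gain(global_gain: float, block_gain_modifier: float) -> float:
    """The per-block modifier scales the global gain multiplicatively."""
    return global_gain * block_gain_modifier

def vr_displacement(flow_reading: float, global_gain: float,
                    block_gain_modifier: float) -> float:
    """Translate a flow-sensor reading into VR displacement."""
    return flow_reading * effective_gain(global_gain, block_gain_modifier)

# Example: global gain 1.0, block modifier 0.5 -> half-speed VR motion.
print(vr_displacement(2.0, 1.0, 0.5))  # -> 1.0
```

If the block modifier appeared to have no effect, one thing to check under this model is whether it is being parsed from the JSON at all, since a default of 1.0 would be indistinguishable from "only the global gain is used".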

RoboDoig commented 2 months ago

Some comments on the workflow specification that I'd like to clarify:

* "The rotation of the VR should be coupled to the motor movement..." - as I understand it we should therefore couple VR movement to the rotary encoder?

* Will playback just be playback of the visual stimulus, or the motor movement as well?

* "Motion cloud - link to python configuration script": To be clear, at the moment the motion clouds are generated via a python script as static texture sequences and loaded as a texture movie in Bonsai. Does this need to change? I think we had agreed that it was better to pre-generate the texture sequences rather than having them load dynamically.

* Algorithm 2 is greyed out in the experimental design document - is it required?

Finally, I think it would be ideal if we could combine both the linear flow and rotating drum environments into a single workflow. We could control the experiment type with block definitions as we do now.

EleonoraAmbrad commented 2 months ago

> > When using the workflow "TrialLogicTest" I ran into two behaviors I did not understand:
> >
> > * Does the workflow stop after a certain number of halts has been reached, no matter the block length? This happened when setting the halt probability to >0. Also, when this happens, the subsequent commands are ignored (i.e. the blocks do not repeat even if specified in the JSON file).
> >
> > * I am not sure the block gain modifier in the JSON file is working; the global gain seems to be the only one taken into account?
>
> The workflow should not stop after a certain number of halts; if I recall correctly, the only thing that should stop the block is the block-length timer. Is it always stopping after a certain number of halts? The block gain modifier is definitely implemented; it should multiply the global gain.

I had the impression that the workflow stopped too early after setting the halt probability to >0. In any case, the blocks that should come afterwards do not start when this happens.

RoboDoig commented 2 months ago

Could you send me an example SessionSettings file that produces this behavior? I will investigate.

EleonoraAmbrad commented 2 months ago

> Some comments on the workflow specification that I'd like to clarify:
>
> * "The rotation of the VR should be coupled to the motor movement..." - as I understand it we should therefore couple VR movement to the rotary encoder?

Yes, that is what we thought. It would be analogous to reading out the PWM and translating it to VR motion, but for the rotation.

> * Will playback just be playback of the visual stimulus, or the motor movement as well?

Just the VR. We would like to be able to control motor movement "freely" as well, but that is a separate issue.

> * "Motion cloud - link to python configuration script": To be clear, at the moment the motion clouds are generated via a python script as static texture sequences and loaded as a texture movie in Bonsai. Does this need to change? I think we had agreed that it was better to pre-generate the texture sequences rather than having them load dynamically.

No, this does not need to change - sorry, I formulated it wrong there. The way it's working now is good!

> * Algorithm 2 is greyed out in the experimental design document - is it required?

No, it was a suggestion on how to implement the workflow, but it does not necessarily need implementation.

> Finally, I think it would be ideal if we could combine both the linear flow and rotating drum environments into a single workflow. We could control the experiment type with block definitions as we do now.

This would be great; I think it would work well. Just to be clear, we would also like to display rotating visual stimuli passively and rotate the motor at a certain speed and gain (this should be mentioned in the new pdf, but I wanted to make sure it's clear again :D ).

Thanks a lot!

RoboDoig commented 2 months ago

Thanks @EleonoraAmbrad that makes things much clearer. Also as you say, the playback condition needs to still be implemented for both the linear and drum environments.

There is one distinction I still need to clarify: "It would be analogous to reading out the PWM and translating it to VR motion, but for the rotation." - We can link the VR movement to the PWM that is sent to the motor controller, but ultimately this is calculated directly from the flow sensor reading. The alternative is that we read out the rotary encoder signal that is produced as a result of the motor movement and use that to drive the VR motion.
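The two coupling options could be sketched like this (hypothetical names and scaling; a sketch only, not the actual implementation):

```python
# Option A: drive VR rotation from the PWM command sent to the motor
# controller (itself derived from the flow sensor reading).
def vr_rotation_from_pwm(pwm_command: float, gain: float) -> float:
    """Feed-forward: VR follows the commanded motor drive."""
    return pwm_command * gain

# Option B: drive VR rotation from the rotary encoder, i.e. the motor's
# actual measured movement (likely more accurate, since it reflects
# what the motor really did rather than what it was told to do).
def vr_rotation_from_encoder(encoder_delta_ticks: int,
                             ticks_per_rev: int, gain: float) -> float:
    """Feedback: convert encoder ticks to degrees, then apply gain."""
    degrees = 360.0 * encoder_delta_ticks / ticks_per_rev
    return degrees * gain

# Example: a quarter revolution at unit gain.
print(vr_rotation_from_encoder(1024, 4096, 1.0))  # -> 90.0
```

The design trade-off is the usual feed-forward versus feedback one: option A has no sensor latency but ignores motor slip; option B tracks the real movement at the cost of one measurement loop.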

RoboDoig commented 2 months ago

I also have a question about the relationship between the four types of block. We have: closed-loop, open-loop, mismatch and playback.

As I understand it there are really only two conditions - closed-loop and open-loop. A mismatch block is always a subset of a closed-loop block (where halt probability > 0). A playback block is always a subset of an open-loop block. Is that correct?
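That subset relationship can be sketched as a per-timestep halt draw (a minimal sketch with hypothetical names; the actual randomisation in the workflow may differ):

```python
import random

def should_halt(halt_probability: float, rng: random.Random) -> bool:
    """Draw once per timestep; True briefly freezes the visual flow.
    halt_probability = 0 gives a plain closed-loop block; > 0 turns
    the closed-loop block into a mismatch block (hypothetical sketch)."""
    return rng.random() < halt_probability

rng = random.Random(0)  # seeded for reproducibility
halts = [should_halt(0.1, rng) for _ in range(1000)]
print(sum(halts))       # typically close to 100 for p = 0.1
```

Under this reading, "mismatch" is not a separate block type at all, just a closed-loop block whose halt probability happens to be non-zero.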

RoboDoig commented 2 months ago

I think in fact all these conditions can be reduced to a single block type with different gain settings:

Linear, closed-loop:

* flow-->vr gain > 0 (or < 0)
* flow-->motor gain = 0
* rotary-->vr gain = 0
* playback-->vr gain = 0
* (+/-) mismatch parameters

Linear, playback:

* flow-->vr gain = 0
* flow-->motor gain = 0
* rotary-->vr gain = 0
* playback-->vr gain > 0 (or < 0)
* (+/-) mismatch parameters

Drum, closed-loop:

* flow-->vr gain = 0
* flow-->motor gain > 0 (or < 0)
* rotary-->vr gain > 0 (or < 0)
* playback-->vr gain = 0
* (+/-) mismatch parameters

Drum, playback:

* flow-->vr gain = 0
* flow-->motor gain = 0
* rotary-->vr gain = 0
* playback-->vr gain > 0 (or < 0)
* (+/-) mismatch parameters

This way you are also not as limited in choosing 1 of 4 block types. You can still construct the original types but also do things in between (e.g. have a drum playback block that also has some closed-loop rotary motion).

RoboDoig commented 2 months ago

Another thing to clarify:

"Rotation probability: probability of the played back or generated flow stopping at each given time" - this suggests that there should be defined times for halting playback? Is this true, or is it just handled by the randomisation parameters as implemented now?

EleonoraAmbrad commented 2 months ago

> I think in fact all these conditions can be reduced to a single block type with different gain settings:
>
> Linear, closed-loop:
>
> * flow-->vr gain > 0 (or < 0)
> * flow-->motor gain = 0
> * rotary-->vr gain = 0
> * playback-->vr gain = 0
> * (+/-) mismatch parameters
>
> Linear, playback:
>
> * flow-->vr gain = 0
> * flow-->motor gain = 0
> * rotary-->vr gain = 0
> * playback-->vr gain > 0 (or < 0)
> * (+/-) mismatch parameters
>
> Drum, closed-loop:
>
> * flow-->vr gain = 0
> * flow-->motor gain > 0 (or < 0)
> * rotary-->vr gain > 0 (or < 0)
> * playback-->vr gain = 0
> * (+/-) mismatch parameters
>
> Drum, playback:
>
> * flow-->vr gain = 0
> * flow-->motor gain = 0
> * rotary-->vr gain = 0
> * playback-->vr gain > 0 (or < 0)
> * (+/-) mismatch parameters
>
> This way you are also not as limited in choosing 1 of 4 block types. You can still construct the original types but also do things in between (e.g. have a drum playback block that also has some closed-loop rotary motion).

This looks really efficient and straightforward; I like it a lot. I can't see anything wrong with doing it this way. Basically one controls each component at a time. How does one deal with the length of the "block" in this case, though? An extra parameter?
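One way the block length could enter as an extra parameter, sketched here as Python dicts standing in for the JSON block definitions (all key names are illustrative, not the actual SessionSettings schema):

```python
# Hypothetical block definitions: a single block type whose gains select
# the condition, plus an explicit duration field (the block-length
# timer). Key names are illustrative only.

linear_closed_loop = {
    "duration_s": 120,          # block length as an ordinary parameter
    "flow_to_vr_gain": 1.0,
    "flow_to_motor_gain": 0.0,
    "rotary_to_vr_gain": 0.0,
    "playback_to_vr_gain": 0.0,
    "halt_probability": 0.0,    # > 0 turns this into a mismatch block
}

drum_playback = {
    "duration_s": 120,
    "flow_to_vr_gain": 0.0,
    "flow_to_motor_gain": 0.0,
    "rotary_to_vr_gain": 0.0,
    "playback_to_vr_gain": 1.0,
    "halt_probability": 0.0,
}

def condition(block: dict) -> str:
    """Rough classification of a block from its gain settings."""
    if block["flow_to_vr_gain"] != 0:
        return "closed-loop"
    if block["playback_to_vr_gain"] != 0:
        return "playback"
    return "open-loop"

print(condition(linear_closed_loop))  # -> closed-loop
print(condition(drum_playback))       # -> playback
```

With a duration field on every block, a session is just an ordered list of such dicts, and "repeat blocks" becomes repeating entries in that list.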

EleonoraAmbrad commented 2 months ago

> Thanks @EleonoraAmbrad that makes things much clearer. Also as you say, the playback condition needs to still be implemented for both the linear and drum environments.
>
> There is one distinction I still need to clarify: "It would be analogous to reading out the PWM and translating it to VR motion, but for the rotation." - We can link the VR movement to the PWM that is sent to the motor controller, but ultimately this is calculated directly from the flow sensor reading. The alternative is that we read out the rotary encoder signal that is produced as a result of the motor movement and use that to drive the VR motion.

This makes sense; it would also be more accurate, I guess.

EleonoraAmbrad commented 2 months ago

> Another thing to clarify:
>
> "Rotation probability: probability of the played back or generated flow stopping at each given time" - this suggests that there should be defined times for halting playback? Is this true, or is it just handled by the randomisation parameters as implemented now?

This I don't know about yet. Since we haven't finally implemented the playback, I'm not sure anymore which strategy we had decided to go with (playback in VR of another mouse's movement based on the flow sensor, or random generation of visual stimuli?). Both strategies would work, I'd say, as long as the times when the halts occur are logged. I'm sorry if I'm not getting your question :/

RoboDoig commented 2 months ago

Re: playback - for now I am going to implement this with analog input as the source of the playback signal. This can be coupled to visual flow or motor movement in any way you want with the gain settings. Or you can change the source to be something other than analog input.
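A minimal sketch of that source/gain coupling (hypothetical names; the real Bonsai workflow differs):

```python
# Hypothetical mixing of signal sources into output channels via
# per-path gains. The playback source could be an analog input, a
# recorded trace, or anything else; only the gains decide what it drives.

def mix_outputs(playback_signal: float, flow_signal: float,
                gains: dict) -> dict:
    """Combine a playback source and the live flow sensor into VR and
    motor drive signals, weighted by per-path gains."""
    return {
        "vr": (playback_signal * gains["playback_to_vr"]
               + flow_signal * gains["flow_to_vr"]),
        "motor": flow_signal * gains["flow_to_motor"],
    }

# Pure playback: only the playback->VR path is non-zero, so the live
# flow signal is ignored entirely.
g = {"playback_to_vr": 1.0, "flow_to_vr": 0.0, "flow_to_motor": 0.0}
print(mix_outputs(0.3, 5.0, g))  # -> {'vr': 0.3, 'motor': 0.0}
```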

RoboDoig commented 2 months ago

Closing this issue as implemented and tested.