Closed — RoboDoig closed this issue 5 months ago.
Motion clouds (http://neuralensemble.org/MotionClouds/index.html) should be implemented instead of moving dots.
- Gratings + motion clouds as stimuli.
- Both linear and rotational movement in VR.
- Distortion mapping for the screens.
- Parametrisation of stimuli.
- Open-loop or closed-loop motion.
One thing I'm a bit unclear on still is how translation of the animal (as opposed to just rotation) should be dealt with. I.e. should the stimuli get 'closer' as the animal moves forward in a VR arena manner? Or will translation drive other parameters of the stimulus?
I've tried out the MotionClouds approach in Bonsai a little this week, and it looks like the library can definitely be used to generate visual environments there.
One concern I have is that the library is quite slow at generating initial stimuli. Also, if stimuli are generated dynamically, we would need to continually convert and update the texture from the MotionClouds library --> Bonsai --> GPU, which might introduce a delay.
A question I have, therefore, is how dynamic the motion cloud stimulus needs to be. Would it be sufficient to pregenerate stimuli? If so, the whole stimulus bank can be preloaded onto the GPU.
Yes, pregeneration of motion clouds will be fine.
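For reference, the kind of stimulus we'd be pregenerating can be sketched in plain numpy. This is only the general motion-cloud idea (band-pass filtered random-phase noise), not the MotionClouds library itself, and all parameter names here are illustrative:

```python
import numpy as np

def motion_cloud(n_x=64, n_y=64, n_t=64, sf_0=0.1, b_sf=0.05,
                 v=1.0, b_v=0.2, seed=0):
    """Band-pass filtered random-phase noise drifting at speed v along x:
    the core idea behind a motion cloud, in plain numpy."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n_x)[:, None, None]
    fy = np.fft.fftfreq(n_y)[None, :, None]
    ft = np.fft.fftfreq(n_t)[None, None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    # radial band-pass around the preferred spatial frequency sf_0
    env_sf = np.exp(-0.5 * ((f - sf_0) / b_sf) ** 2)
    # concentrate energy near the plane ft = -v * fx (rightward drift)
    env_v = np.exp(-0.5 * ((ft + v * fx) / (b_v * (f + 1e-6))) ** 2)
    phase = np.exp(2j * np.pi * rng.random((n_x, n_y, n_t)))
    movie = np.real(np.fft.ifftn(env_sf * env_v * phase))
    # rectify to [0, 1] so the frames can be uploaded as a texture stack
    return (movie - movie.min()) / (movie.max() - movie.min())

movie = motion_cloud()
print(movie.shape)  # (64, 64, 64)
```

Each pregenerated movie is just an (n_x, n_y, n_t) array, so a bank of them could be written to disk once and uploaded to the GPU at startup.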
> One thing I'm a bit unclear on still is how translation of the animal (as opposed to just rotation) should be dealt with. I.e. should the stimuli get 'closer' as the animal moves forward in a VR arena manner? Or will translation drive other parameters of the stimulus?
There will be no translation, just rotation.
One question for motion clouds for linear movement is how we map a single file onto the continuous corridor in BonVision. Is there e.g. an ideal size above which BonVision suffers?
What does linear movement mean in this context?
Linear movement here means an infinite corridor, where we need the single image to repeat seamlessly as the animal moves along it.
Hope this makes sense.
PR #36 adds an example motion cloud workflow in the ae-dev branch. I will eventually pull this into the main trial-logic branch as well when the rest of the logic is completed.
@ederancz visually, the example motion cloud stimuli seem to transition seamlessly when the sequence wraps back around, but I haven't tried this with a large diversity of stimuli.
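For what it's worth, the seamless wrap can also be checked numerically: a movie synthesised by inverse FFT is periodic in time by construction, so the jump from the last frame back to the first should be statistically no different from any other frame-to-frame jump. A small sketch of that check (an assumed approach, not the PR code):

```python
import numpy as np

# Build a tiny FFT-synthesised movie (band-pass noise drifting along x).
rng = np.random.default_rng(0)
n_x, n_y, n_t = 32, 32, 32
fx = np.fft.fftfreq(n_x)[:, None, None]
ft = np.fft.fftfreq(n_t)[None, None, :]
envelope = (np.exp(-0.5 * ((np.abs(fx) - 0.1) / 0.03) ** 2)
            * np.exp(-0.5 * ((ft + fx) / 0.05) ** 2))
phase = np.exp(2j * np.pi * rng.random((n_x, n_y, n_t)))
movie = np.real(np.fft.ifftn(envelope * phase))

# Frame-to-frame distances; the last entry is the wrap-around pair
# (final frame -> first frame).
diffs = np.linalg.norm(movie - np.roll(movie, -1, axis=-1), axis=(0, 1))
# A ratio near 1 means the wrap jump is no bigger than a typical jump.
print(diffs[-1] / np.median(diffs))
```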
Gratings / random dot stereograms / motion illusion-inducing pattern.
Ideally loaded and processed on the same machine as ONIX + task control within Bonsai. Implement a JSON config file to define the stimulus bank.
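A possible shape for that JSON config, with a minimal loader (all field names here are just a suggestion, not an agreed schema):

```python
import json

# Hypothetical stimulus-bank config: one entry per pregenerated stimulus.
CONFIG = """
{
  "stimuli": [
    {"name": "grating_sf04", "type": "grating", "file": "grating_sf04.png"},
    {"name": "cloud_slow", "type": "motion_cloud", "file": "cloud_slow.npy"}
  ]
}
"""

def load_stimulus_bank(text):
    """Index the stimulus definitions by name for lookup during the task."""
    return {s["name"]: s for s in json.loads(text)["stimuli"]}

bank = load_stimulus_bank(CONFIG)
print(sorted(bank))  # ['cloud_slow', 'grating_sf04']
```

Keeping the bank keyed by name would let the trial logic reference stimuli symbolically while the files themselves are preloaded to the GPU at startup.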