jahuth / virtualretina

Bugfixes to the VirtualRetina Simulator of Adrien Wohrer

Retina input and output #5

Closed rcrespocano closed 7 years ago

rcrespocano commented 7 years ago

Hello,

I have two questions about the input format of the Retina executable.

I also have a question about the output format of the Retina executable.

Thank you in advance.

Kind regards, Rubén

jahuth commented 7 years ago

Hello Rubén,

To use a video as input to Virtual Retina, it has to be converted into single image frames. In the paper you mentioned, the authors first interpolated the video to a frame rate of 100Hz (100fps, each frame lasting 10ms) and then saved the sequence of images (see the edit below). Avidemux can be used for frame interpolation, and ffmpeg can convert videos into image sequences. (Older Avidemux versions (2.5 or earlier) may also be able to save a video as an image sequence, but newer versions cannot, to my knowledge.)
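If you would rather stay within ffmpeg for the resampling step as well, its fps filter can retime a video by duplicating or dropping frames (note this is not motion interpolation like Avidemux does). A sketch, where movie.avi is a placeholder for your own input file:

```shell
# Retime to 100 fps (frames duplicated/dropped, no motion interpolation),
# then write the result directly as zero-padded JPEG frames:
ffmpeg -i movie.avi -vf fps=100 -q:v 1 frame_%06d.jpg
```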

ffmpeg -i movie.avi -q:v 1 frame_%06d.jpg

(-q:v 1 sets the quality, and %06d produces a six-digit number padded with zeros. Virtual Retina does not handle non-zero-padded numbers well (frame10.jpg sorts before frame2.jpg), so make sure you use enough digits.)
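To see why the zero padding matters, here is a quick demonstration with plain coreutils sort, which stands in for the lexicographic file ordering Virtual Retina sees:

```shell
# Unpadded frame numbers sort in the wrong order lexicographically:
printf '%s\n' frame_1.jpg frame_2.jpg frame_10.jpg frame_20.jpg | sort
# frame_10.jpg ends up before frame_2.jpg

# Zero-padded numbers sort in the intended numeric order:
printf '%s\n' frame_000001.jpg frame_000002.jpg frame_000010.jpg | sort
```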

Unfortunately, the same applies to the webcam: you will have to capture the video and convert it into images first, so you cannot get live processing of your camera feed in Virtual Retina.
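For the capture step itself, ffmpeg can grab the webcam and write the frame sequence in one go. A sketch assuming a Linux Video4Linux2 device at /dev/video0 (on macOS you would use -f avfoundation instead; adjust the device path for your system):

```shell
# Grab 10 seconds from the webcam and save zero-padded JPEG frames:
ffmpeg -f v4l2 -i /dev/video0 -t 10 -q:v 1 cam_frame_%06d.jpg
```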

My Python reimplementation of the Virtual Retina model, convis, can use webcam input via OpenCV and can also stream input and output continuously, but this is not documented yet. I will write an example script for it soon.

One piece of software that can currently use a video stream directly is the extended version of COREM:

The documentation for setting up COREM to receive a video stream is here, but you might have to write a small program yourself to feed your webcam images to the open TCP port.

A group from Pisa, Italy, used COREM with a webcam; maybe their version is also helpful for you:

I hope I could help you a bit! Jacob


Edit: I confused the paper with a different one. For this paper, the authors generated synthetic image sequences at 200Hz directly by applying micro-saccade movements to an image.

jahuth commented 7 years ago

For convis there is now an example of how to use the video input and output streams:

rcrespocano commented 7 years ago

Thank you for your reply and for your explanation.