deepaktalwardt opened this issue 10 years ago
Hi,
We just show the latest frame as quickly as possible, so there is no syncing. We never really thought about it, but it shouldn't be an issue. Both streams go through the same pipeline, so the timing should be about the same, and it would be hard to measure how they should be synced. Waiting to sync would be worse, I think, since the big problem with the Oculus Rift at the moment is latency.
As for the compositing, we use a combination of a sample from the Oculus SDK and DirectShow (included in Windows). The Oculus SDK has a render function that tells you which eye you are currently rendering for. So when we get the left eye, we render the world with the latest frame from the left camera, and the next time the latest frame from the right camera for the right eye. The Oculus SDK handles all the barrel distortion after we have rendered how everything should look.
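To make that concrete, here is a minimal sketch of the same idea using OpenCV instead of our Direct3D/DirectShow pipeline: the newest frame from the left capture fills the left half of the output, the newest frame from the right capture fills the right half. The device indices, the 640x480 per-eye size, and the preview window are all assumptions for illustration, and the Oculus SDK's per-eye barrel distortion step is not reproduced here.

```python
import cv2
import numpy as np

# One capture per camera; device indices 0 and 1 are assumptions.
left_cap = cv2.VideoCapture(0)
right_cap = cv2.VideoCapture(1)

EYE_W, EYE_H = 640, 480  # arbitrary per-eye size for the preview

while True:
    ok_l, left = left_cap.read()    # read a frame from the left camera
    ok_r, right = right_cap.read()  # read a frame from the right camera
    if not (ok_l and ok_r):
        break
    # No syncing: whatever each camera delivered most recently is shown.
    left = cv2.resize(left, (EYE_W, EYE_H))
    right = cv2.resize(right, (EYE_W, EYE_H))
    stereo = np.hstack([left, right])  # left eye | right eye, side by side
    cv2.imshow("stereo preview", stereo)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

left_cap.release()
right_cap.release()
cv2.destroyAllWindows()
```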
The DirectShow part is how we hook into our RCA->USB converters and read the latest grabbed frame. This is done by setting up a DirectShow graph per converter, including a SampleGrabber, and then just calling the grabber and asking for the latest frame (Figure 9 in our report). The frame is then uploaded as a texture and applied to a huge rectangle covering the viewport of the world. Some converters are recognized as webcams, in which case you may be able to use OpenCV to read the frames instead. This is much easier but may be a bit slower. Speed shouldn't be an issue, though: I render 400 frames per second on a laptop with our program, and the video is only 25 fps anyway.
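For converters that show up as webcams, a rough OpenCV equivalent of "ask the grabber for the latest frame" could look like the sketch below; the class name and device indices are made up for illustration, and this is not our SampleGrabber code. A background thread keeps reading so frames never queue up in the driver, and the render loop just takes whatever is newest.

```python
import threading
import cv2

class LatestFrameGrabber:
    """Keeps reading a capture device and holds only the most recent frame."""

    def __init__(self, device_index):
        self.cap = cv2.VideoCapture(device_index)
        self.frame = None
        self.lock = threading.Lock()
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        # Read continuously so stale frames are dropped, not buffered.
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame  # overwrite, never queue

    def latest(self):
        # Called from the render loop, once per eye.
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def stop(self):
        self.running = False
        self.cap.release()

# Usage: one grabber per camera; the renderer polls latest() for each eye.
left_eye = LatestFrameGrabber(0)
right_eye = LatestFrameGrabber(1)
```

The "overwrite, never queue" idea is what keeps latency down compared to waiting for matched frame pairs.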
Note that when using two cameras you already have physical spacing between them, so the IPD setting for the Oculus Rift should be reduced to somewhere around 0 (with some tweaking).
Btw, there are commercial products achieving some of what we do. There is the Transporter3D, and the newest Fatshark goggles support head tracking (but no stereo vision, and it's not very immersive). If all you want is a toy to play with, buying some of that stuff is easier. Our project, however, aimed at routing all of this through a computer so we could overlay information on the screen etc., and none of those solutions would work for that.
Mats
I am looking for a receiver/adapter that can convert 5.8 GHz video into WiFi so that it can be streamed to an iPhone. Has anyone seen anything like this?
Thank you for the detailed response, Mats! That really helps.
I see that you used video transceivers that transmit/receive analog video signals (NTSC) directly from the CMOS cameras. I was wondering if you knew of any transceivers that transmit digital video over USB. I am thinking of using the Minoru 3D camera (http://www.amazon.com/Minoru-3D-Webcam-Red-Chrome/dp/B001NXDGFY). It has a single USB output, and we would need to query it at the right time to access the images from the two sensors. Is there a way to transmit this sort of video signal wirelessly? Thank you again.
Regards, Deepak
FPV stuff is not exactly my expertise, so I recommend asking somewhere else to get better answers. :) HobbyKing.com has very good live support that can help you find products that match.
Some cameras do output directly to WiFi (for instance the GoPro), but in my experience that is too slow for this kind of use. For the GoPro I'm using this cable: http://www.hobbyking.com/hobbyking/store/uh_viewitem.asp?idproduct=33609 to get an analog output that I send through standard FPV video transmitters. Not sure if that works for everything or just the GoPro. I've never transmitted anything other than analog signals, so I'm not sure what else is possible.
Mats
Can confirm that GoPro Hero 3 + WiFi is a poor choice for FPV. I tested the GoPro Hero 3+ with a QAV540G and there was quite a bit of latency (at least 500 ms) and the range was very poor (perhaps 20-30 m).
Hello, I must say this is a great project!
I wanted to know whether you needed to sync the video from the two cameras before streaming it to the Oculus. Isn't there a different amount of latency on each camera stream? I would imagine that could create some frame mismatch between the two eye streams.
Also, what sort of software platform did you use to composite the two digital video streams together and add the required barrel distortion? I am not experienced with the Oculus SDK. Does it already provide functions for this? Thank you for your help!
Regards, Deepak