bjornblissing / osgoculusviewer

An OsgViewer with support for the Oculus Rift

Moved the code to a library. #8

Closed nicokruithof closed 11 years ago

nicokruithof commented 11 years ago

With the osg-oculus code in a library, it is easier to link it into other applications. The viewer executable then just links its main.cpp against the library, and another executable can link a different main.cpp against the same library.

bjornblissing commented 11 years ago

Good idea to separate the code into a library. But I am a little bit hesitant to merge this. If this is to be made into a library, we should also make a separate example application that uses the library; in principle, OsgOculusViewer would become an application that depends on OsgOculusLib. Also, Robert Osfield suggested that we should maybe look into the new osgViewer::ViewConfig subclass, which he thought could be used to make support available to any Osg viewer application. http://forum.openscenegraph.org/viewtopic.php?t=12490

nicokruithof commented 11 years ago

The cmake file already defines a library and an executable that links to the library. I can move the library files and the executable files into different directories if you want; that might make it clearer.
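
For reference, the split being described might look roughly like this in CMake. This is only a sketch; the target, variable, and path names here are assumptions for illustration, not the actual ones from the pull request.

```cmake
# Hypothetical sketch: everything except main.cpp goes into a library...
add_library(OsgOculusLib ${LIB_SOURCES} ${LIB_HEADERS})
target_link_libraries(OsgOculusLib
    ${OPENSCENEGRAPH_LIBRARIES}
    ${OCULUS_SDK_LIBRARIES})

# ...and the viewer is a thin executable on top of it. Any other
# application can link its own main.cpp against the same library.
add_executable(OsgOculusViewer viewerapp/main.cpp)
target_link_libraries(OsgOculusViewer OsgOculusLib)
```

Putting the library sources and the example application in separate directories, as suggested, would make this structure visible in the source tree as well.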

I don't know the ViewConfig class that well. My main reason was that I have another application that I would like to view with the Oculus, and therefore needed the separation.

bjornblissing commented 11 years ago

Another thing is that your pull request includes commit 1097543, which contains an error in the setup of the slave cameras.

I will cherry-pick your library commit and continue my work from there.

nicokruithof commented 11 years ago

Thanks. I still need to invest more time in learning GitHub. It never does what I want.

bjornblissing commented 11 years ago

I have a working example with osgViewer::ViewConfig up and running. The problem is how to handle the orientation changes from the Rift: since the ViewConfig class is made primarily for setting up the display configuration, there is no good way of handling dynamic changes to the view transform.

One option would be to create a special Oculus Rift matrix manipulator. The drawback of this solution is that we would need to implement a special Oculus Rift variant for each of the default osg matrix manipulator types.

So I have started a discussion with Robert Osfield on the mailing list/forum on how to best implement this: http://forum.openscenegraph.org/viewtopic.php?t=12490

My idea is to stack the orientation changes on top of the orientation from the user-selected matrix manipulator. (And in the future the Rift will support translation changes as well, so I guess we need to be able to support changes in translation too.)

To complicate things further, the ViewConfig class is a new concept introduced in osg 3.2.0, so I guess we need a fallback solution for people still working with older versions.

nicokruithof commented 11 years ago

Good to hear. It seems natural to me to assume that the user location is the center position between the two eyes; from that you can offset for rendering the two eyes (this is internal to the Oculus code). For other types of stereo rendering something similar has to be done, right? How is it implemented there?

I'll check the OSG thread as well.

bjornblissing commented 11 years ago

I agree that the user location is the center position between the two eyes, and that the slave cameras use half the IPD as the offset. This is kind of a no-brainer, but the difficult decision is how to handle movements of the user's head. For example, let's say that we are using the trackball matrix manipulator, so the user controls the rotation, zoom and translation of the scene with the mouse. Say the user loads a model of a city and zooms/translates into the city at street level, but then turns his head to look up towards the top of the skyscrapers. How should this case be handled? The Oculus Rift API gives us the orientation of the Rift unit, so a simple solution would be to stack the orientation change of the Rift on top of the viewMatrix from the TrackballManipulator. A more correct solution would be to have a simple neck model as well (i.e. rolling your head will actually result in a small translation to the side, since we are rolling about an axis approximately at chin level).

(image: head roll axis)

In the future the Oculus Rift may support translational tracking as well; then the need for a good neck model would not be as important.

When it comes to other types of stereo rendering, different strategies must be used depending on the technology. Some require convergent frustums (e.g. 3D TVs); others use parallel frustums, such as the Rift.

Nvidia has a pretty good presentation on the subject: Implementing Stereoscopic 3D in Your Applications - http://www.nvidia.com/content/GTC-2010/pdfs/2010_GTC2010.pdf

nicokruithof commented 11 years ago

I never thought about the rotation point. It makes sense to use the neck.

Thanks for the link to the NVidia presentation, I'll have a look. Do you know Paul Bourke's site as well? http://paulbourke.net/stereographics/stereorender/

bjornblissing commented 11 years ago

I have pushed the changes, so now it is possible to use the osgViewer::ViewConfig concept, which makes the library much simpler to use.