rpavlik opened 9 years ago
I won't let the date fool me; I was there to see them do the work. They were solving many of the same problems we are: developing display hardware, developing graphics hardware, writing the API for multiple graphics targets (PHIGS, OpenGL), handling multiple configurations (HMD, CAVE, fish-tank), and even dealing with the C API (in that case, whether to remain compatible with K&R C). By that point they had been working on these issues for more than 5 years and had fielded multiple applications in several domains.
I'm having to face these issues while writing the D3D generic renderer that uses the nVidia API. I was planning to do a mono screen implementation first, but could go on to do the more general one if that's what Yuval wants me to work on.
Let's do mono first and get an end-to-end solution going from the game engine to the screen, allowing us to measure and understand the performance improvements. Once that's working, we can enhance to use the display descriptor.
(In reply to Russell Taylor's comment above, 6/13/2015 9:11 AM.)
Back on the topic of the display overhaul: I ran across this follow-up tech report that seems to have more complete Vlib info. Both are now on the display nuggets page.
Robinett, W., & Holloway, R. (1994). The Visual Display Transformation for Virtual Reality. Chapel Hill, NC, USA: University of North Carolina at Chapel Hill. Retrieved from http://www.cs.unc.edu/techreports/94-031.pdf
@russell-taylor @godbyk In your work, as you find parameters that are missing, etc. please record them in this issue.
Here are some display properties that are queried by OpenVR:
Some of these are more generic than display-specific, but I've also omitted a number of tracker-specific properties. Is there an existing bug I should include those in?
Near and far clipping planes (in meters) need to be specified for the projection matrix. These should probably go into the HMD descriptor; their values affect when Z fighting happens but also when objects in the environment disappear.
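For concreteness, here is one shape this could take in the HMD descriptor. This is just a sketch: the `near_clip`/`far_clip` field names and their placement are my assumption, not part of the current schema.

```json
{
  "hmd": {
    "near_clip": 0.1,
    "far_clip": 1000.0
  }
}
```

The values are in meters and are inputs for building the projection matrix, not a finished matrix.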
The Oculus has lenses that are rotated around the Y axis in eye space, so that the viewing directions converge when looking through the center of each of them. We should figure out how to appropriately describe this (need an optics person to tell us what happens to the virtual screens).
Explicitly describe the interaction among parameters (how does rotate_180 affect where the center of projection should be specified, etc.) so that people writing the config files and people writing the code will have the opportunity to be on the same page.
Oh boy, rotated lenses. Rotate180 is on the chopping block, personally - I'd rather keep the viewport and the projection separate if possible.
Also, I was under the impression that a lot of Unity games do multiple rendering passes with different near/far clipping planes, so they want the inputs for the projection matrix rather than a finished matrix. (Not to mention that NDC are different between OGL and DX.)
The viewport and projection are separate, but you can't specify a negative viewport, so the projection matrix needs to do the work of flipping things over appropriately. It was easy to implement flipping (not yet tested) by swapping the left/right and top/bottom values before building the matrix.
From discussions with Greg, it looks like Unity is going to render into a texture and then the RenderManager will slap that into a window, undistort, and send to the screen. In that case, it will use the provided projection matrix as a hint. Haven't yet worked out how to send it subsets of the info so it can build its own. That is a kettle of fish for tomorrow.
Good to know about NDC being different. I have an adapter for each rendering library, so that just means we'll have different ones for each.
Yeah - you might want to look at the SteamVR header, since it does a lot of the rendering abstraction stuff. I think it's probably the closest to a good design I've seen for such a thing, and of course it is informed by use in actual games: https://github.com/ValveSoftware/openvr/blob/master/headers/openvr.h
Thanks for the pointer!
Note that using a negative IPD enables cross-eyed stereo, so please don't add a check ruling this out in the parsers or the code. It might be a useful tidbit to add to any documentation as well. I tried it out with my flood data-set viewer and it works.
There also needs to be a way to describe which screen we should be displaying on, which turns into where on the virtual frame buffer we are displaying, based on the sizes and placements of the existing screens. This may be a location parameter added alongside the height/width fields, with a sentinel value meaning "don't care" if we want to allow that (which we may not).
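One shape that location parameter could take (a sketch only: the `position` field and the idea that omitting it means "don't care" are my assumptions, not the current schema):

```json
{
  "resolutions": [
    {
      "width": 1920,
      "height": 1080,
      "position": { "x": 0, "y": 0 }
    }
  ]
}
```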
How does num_displays differ from video_inputs? We probably want to unify this.
Actually, the "which screen to draw on" setting probably belongs in the RenderManager constructor params, alongside the DirectMode on/off flag, rather than in the display JSON file. These will be read from a separate JSON file whose format is yet to be fully defined.
In a product such as the dSight, you have multiple input options:

1 input --> 2 physical screens
2 inputs --> 2 physical screens

That is the difference between video_inputs and num_displays.
Note that there is also a list of eyes. If you have two eyes in the list and have told it to use horz_side_by_side, then it seems like num_displays and video_inputs are always going to be identical to one another.

In the 1-input mode:
display_mode = horz_side_by_side
num_displays = 1
video_inputs = 1

In the 2-input mode:
display_mode = full_screen
num_displays = 2
video_inputs = 2
(In both cases, you have two eyes in the array of eyes.)
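In descriptor form, the two modes might look like this (a sketch using the field names from this thread; their exact placement in the schema is my assumption). The first entry is the 1-input mode, the second the 2-input mode:

```json
[
  { "display_mode": "horz_side_by_side", "video_inputs": 1, "num_displays": 1 },
  { "display_mode": "full_screen",       "video_inputs": 2, "num_displays": 2 }
]
```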
So num_displays is in fact redundant; it was added without consultation to "simplify" the Unity rendering.
This may be in a separate video-description config file, but there needs to be a way to tell the display code to rotate the whole display by 90, 180, or 270 degrees to handle HMDs that are natively in portrait mode (like the OSVR HDK). If in the display file, it should probably live under "resolutions".
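If it did live in the display file, it might look something like this (the `rotation` field, in degrees and restricted to 0/90/180/270, is hypothetical; the surrounding "resolutions" structure and the values are illustrative for a portrait-native panel like the HDK):

```json
{
  "resolutions": [
    {
      "width": 1920,
      "height": 1080,
      "rotation": 90
    }
  ]
}
```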
Two parts to this: