How is it using it incorrectly?
Under ideal circumstances, the scaling factor for the render target is the reciprocal of the scaling factor you use to normalize the distorted texture lookup coordinates back to the 0-1 range. As far as I can tell, SteamVR is doing precisely that.
I think you're assuming a relationship between the scale used to correct for the shrinking caused by the distortion coefficients and the scale you'd want to ensure that the render target has enough information to be sampled onto the actual screen without artifacting.
The existing coefficients have a relationship to the scaling factor, and pretty much nothing else. If you replace the coefficients (1, 0.22, 0.24, 0) with (0.5832, 0.1283, 0.1400, 0), you get the same output, but GetDistortionScale() returns 1.0 (or something very close to it) for the default fit point.
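Here's a minimal standalone sketch (not LibOVR code; the fit radius of 1.15 is an assumed value for the default fit point) that checks that claim numerically: both coefficient sets produce the same normalized warp, but the second one reports a scale of ~1.0.

```cpp
#include <cstdio>

// Radial distortion factor in the LibOVR polynomial form: f(r) = K0 + K1*r^2 + K2*r^4 + K3*r^6
double distortionFactor(const double k[4], double r) {
    double r2 = r * r;
    return k[0] + r2 * (k[1] + r2 * (k[2] + r2 * k[3]));
}

int main() {
    const double kOld[4] = { 1.0,    0.22,   0.24,   0.0 };
    const double kNew[4] = { 0.5832, 0.1283, 0.1400, 0.0 };
    const double rFit    = 1.15;  // assumed radius from lens center to the default fit point

    double scaleOld = distortionFactor(kOld, rFit);  // ~1.71
    double scaleNew = distortionFactor(kNew, rFit);  // ~1.0

    // The post-normalization warp r * f(r) / scale is the same for both coefficient sets.
    for (double r = 0.1; r <= 1.2; r += 0.1) {
        printf("r=%.1f  old=%.4f  new=%.4f\n", r,
               r * distortionFactor(kOld, r) / scaleOld,
               r * distortionFactor(kNew, r) / scaleNew);
    }
    printf("scaleOld=%.4f  scaleNew=%.4f\n", scaleOld, scaleNew);
    return 0;
}
```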
What you're suggesting is that the K values have been specifically chosen both to produce the proper distortion and to shrink the image by a certain factor, a factor which also happens to be the ideal amount by which to increase the size of the render target relative to the size of the screen. I don't think that's the case.
You should probably read section 5.5.3 of the Oculus SDK Overview because it explains that there is a relationship there, as well as (partially) why.
The extreme TL;DR is that the distortion scaling effectively increases the area of post-distortion texture space that gets normalized back to (0,1) - and as such the texture itself has to scale equivalently in order to maintain the same texel size both pre- and post-distortion.
And that does not imply that the K values come from the distortion scale. It is actually quite the opposite: the scaling factor is simply the ratio of the post-distorted radius to the edge of the screen over the undistorted radius to the edge of the screen. This can, of course, be seen in Util_Render_Stereo.cpp in libovr.
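For reference, a rough paraphrase of that ratio computation (reconstructed from memory of Util_Render_Stereo.cpp rather than copied from it; the types and the lens-center offset value are stand-ins):

```cpp
#include <cmath>

// Minimal stand-in for the LibOVR distortion config (names approximate).
struct DistortionConfig {
    float K[4]          = { 1.0f, 0.22f, 0.24f, 0.0f };
    float XCenterOffset = 0.15f;   // assumed lens-center offset in viewport coordinates
    float Scale         = 1.0f;

    // Post-distorted radius for an input radius r.
    float DistortionFn(float r) const {
        float r2 = r * r;
        return r * (K[0] + r2 * (K[1] + r2 * (K[2] + r2 * K[3])));
    }
};

// The scale is just the ratio of the post-distorted radius at the fit point
// to the undistorted radius at that same point.
void updateDistortionScale(DistortionConfig& d, float fitX, float fitY, float stereoAspect) {
    if (fitX == 0.0f && fitY == 0.0f) { d.Scale = 1.0f; return; }
    float dx        = fitX - d.XCenterOffset;   // e.g. fit point (-1, 0) = left edge of the screen
    float dy        = fitY / stereoAspect;
    float fitRadius = std::sqrt(dx * dx + dy * dy);
    d.Scale = d.DistortionFn(fitRadius) / fitRadius;   // ~1.7 with the default K values
}
```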
The quality of the image will degrade due to sub-sampling from the scaled image, resulting in blocky or blurry pixels around the center of the view.
OK, I've re-read and played with my example apps a bit and gotten a handle on this. Because of the currently chosen K values, the calculated post-distortion scale ends up being the exact ratio between the apparent resolution at the center of the image and the actual resolution of the screen. And I believe that ratio will hold as you bring the fit point closer to the lens axis. I also see that this is actually the same mechanism that the OculusWorldDemo uses to determine how large to make the render target. My mistake then, thanks for the clarification and reference.
BTW, I didn't suggest that the K values came from the scale; I just said they had a relationship with it. I'm familiar with how the scale value is calculated.
https://github.com/ValveSoftware/steamworks-vr-api/blob/master/src/drivers/oculus/oculushmdlatest.cpp#L136 uses OVR::Util::Render::StereoConfig::GetDistortionScale() as a multiplier to determine the ideal target render size.
However, this scaling factor is supposed to be used to correct for the current distortion coefficients, which shrink the image. You're using it correctly here: https://github.com/ValveSoftware/steamworks-vr-api/blob/master/src/drivers/oculus/oculushmdlatest.cpp#L263
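To make the two uses concrete, here's a hedged sketch of how those pieces typically fit together in a LibOVR 0.2-era renderer (function and variable names here are mine, not the driver's):

```cpp
#include "OVR.h"   // LibOVR 0.2.x
#include <cstdint>

// GetDistortionScale() gets used in two complementary places:
//   1) enlarging the render target so there is enough resolution to survive the warp, and
//   2) folding its reciprocal into the warp so the final image is not shrunk on screen.
void sizeRenderTarget(OVR::Util::Render::StereoConfig& stereoConfig,
                      const OVR::HMDInfo& hmdInfo,
                      uint32_t& renderTargetW, uint32_t& renderTargetH,
                      float& postDistortionScale)
{
    float distortionScale = stereoConfig.GetDistortionScale();   // ~1.7 with the default K values

    // (1) Ideal render target size, relative to the physical panel.
    renderTargetW = uint32_t(hmdInfo.HResolution * distortionScale);
    renderTargetH = uint32_t(hmdInfo.VResolution * distortionScale);

    // (2) The distortion pass scales the warped lookup coordinates by 1/distortionScale,
    //     which is what corrects for the shrinking introduced by the K values.
    postDistortionScale = 1.0f / distortionScale;
}
```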
If OVR corrects the distortion coefficients so that they no longer shrink the image, the GetDistortionScale() method will probably start returning 1.0.
The current distortion scale is about 1.7, which is significantly higher than what the overall image requires to match the pixel density at the center of the image. A fixed constant of 1.4-1.5 is probably better.