I am a beginner with the HTC Vive and with VR as a whole, and I have the following very simple code for interacting with the HMD:
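Roughly, it boils down to the standard OpenVR setup-and-submit flow. The snippet below is a simplified sketch of that (assuming the OpenVR API, since the Vive and the compositor overlay are involved; error handling, pose usage, and texture creation are omitted):

```cpp
#include <openvr.h>
#include <cstdint>

class HMD {
public:
    // Called once at startup: connect to the headset and the compositor.
    bool init()
    {
        vr::EVRInitError err = vr::VRInitError_None;
        m_system = vr::VR_Init(&err, vr::VRApplication_Scene);
        if (err != vr::VRInitError_None)
            return false;
        return vr::VRCompositor() != nullptr;
    }

    // Called every rendering cycle: hand one OpenGL texture per eye
    // to the compositor. For now both handles contain the same image.
    void render(uint32_t leftTextureHandle, uint32_t rightTextureHandle)
    {
        // WaitGetPoses must be called each frame to keep the compositor in sync.
        vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
        vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount,
                                         nullptr, 0);

        vr::Texture_t left  = { (void*)(uintptr_t)leftTextureHandle,
                                vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
        vr::Texture_t right = { (void*)(uintptr_t)rightTextureHandle,
                                vr::TextureType_OpenGL, vr::ColorSpace_Gamma };

        vr::VRCompositor()->Submit(vr::Eye_Left,  &left);
        vr::VRCompositor()->Submit(vr::Eye_Right, &right);
    }

    ~HMD() { vr::VR_Shutdown(); }

private:
    vr::IVRSystem* m_system = nullptr;
};
```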
HMD::init() is called once, and HMD::render() is called on every rendering cycle.
My ultimate goal is to display two images that represent stereo content rendered by another application and exported as stereo image pairs. For now, however, leftTextureHandle and rightTextureHandle are two distinct textures containing exactly the same 2D image.
Theoretically, if for each eye the center of the eye, the center of the lens, and the center of the display are aligned on the same axis, my eyes should not converge and the result should be as if I were looking at something positioned at infinity. Not exactly, of course, since the focus won't be at infinity, but I hope the idea is clear.
This theory worked for the Oculus DK2 with the Oculus C++ SDK (passing the same texture handles to its API). On the HTC Vive, however, I can clearly see that my eyes are actually diverging and that I cannot fuse the inputs from my left and right eyes.
If I shift the left texture ~30 px to the right and the right texture ~30 px to the left, I start fusing the content and it looks OK. But it bothers me immensely that I don't understand what is going on and why it happens. The HTC Vive is set up correctly, my IPD is measured correctly, and I see no visible artifacts, from which I assume my eyes and the lenses are well aligned.
Is it that the screen and the lens are not centered with respect to each other? How could one properly align the two images? Using an overlay is not an option for me.
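One check I intend to try, in case the per-eye projections are off-axis, is to query where each eye's optical axis actually pierces the render target. Below is a minimal sketch assuming OpenVR's vr::IVRSystem::GetProjectionRaw() (which returns the tangents of the frustum half-angles at unit distance, with the left and top values negative). If the computed centers are not at width/2, identical per-eye images would indeed appear shifted apart, which would explain the ~30 px correction I need:

```cpp
#include <openvr.h>
#include <cstdio>
#include <cstdint>

// Sketch: print where the optical axis of each eye pierces the render
// target. An off-axis projection (center != width/2) would explain why
// identical left/right images do not fuse without a horizontal shift.
void printOpticalCenters(vr::IVRSystem* system)
{
    uint32_t width = 0, height = 0;
    system->GetRecommendedRenderTargetSize(&width, &height);

    for (vr::EVREye eye : { vr::Eye_Left, vr::Eye_Right }) {
        // Tangents of the frustum half-angles at unit distance.
        // Assumed sign convention: left and top are negative.
        float l, r, t, b;
        system->GetProjectionRaw(eye, &l, &r, &t, &b);

        // The optical axis lies at x = 0 (and y = 0) in frustum space;
        // map the spans [l, r] and [t, b] onto render-target pixels.
        float cx = (0.0f - l) / (r - l) * (float)width;
        float cy = (0.0f - t) / (b - t) * (float)height;

        std::printf("%s eye: optical center at (%.1f, %.1f) of %ux%u px\n",
                    eye == vr::Eye_Left ? "left" : "right",
                    cx, cy, width, height);
    }
}
```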