The previous implementation relied on Apple's timewarp, and due to Annoying Circumstances with their compositor, we could not keep consistent DeviceAnchors between when we predicted a pose and sent it to ALVR and when we rendered.

The new implementation handles timewarp by rendering a quad 1m in front of each eye, offset by the past 4x4 pose matrix sent to ALVR. It feels much smoother, and much more correct when oscillating your head forwards and backwards. The only oddity I've noticed is occasional judder on pitch rotations.
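A minimal sketch of the quad-based reprojection idea described above (the real client is Swift/Metal; all names here are hypothetical illustrations, not the actual code). The core is: place the quad 1m in front of the eye using the *past* pose the frame was rendered with, then view it from the *current* pose, so head motion between prediction and display shifts the quad on screen.

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """4x4 translation matrix."""
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

def invert_rigid(m):
    """Inverse of a rotation+translation (rigid) matrix: R^T, -R^T t."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]   # transpose rotation
    t = [m[i][3] for i in range(3)]
    inv_t = [-sum(r[i][k] * t[k] for k in range(3)) for i in range(3)]
    out = [[r[i][0], r[i][1], r[i][2], inv_t[i]] for i in range(3)]
    out.append([0, 0, 0, 1])
    return out

def quad_model_view(past_pose, current_pose):
    """Model-view matrix for the timewarp quad.

    past_pose: the head pose the frame was predicted/rendered with.
    current_pose: the head pose at display time.
    The quad sits 1m in front of the eye at render-time pose; viewing it
    from the current pose reprojects the frame under the pose delta.
    """
    model = mat_mul(past_pose, translation(0.0, 0.0, -1.0))
    return mat_mul(invert_rigid(current_pose), model)
```

With identical past and current poses, the quad sits exactly at (0, 0, -1) in view space and the image is undistorted; any head movement since prediction shifts it by the pose delta, which is what produces the smoother result.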
Based on #31