Closed jimrandomh closed 3 years ago
There is a fascinating mystery in the position-tracking code, which is: why do we multiply the tracker Y-translation by 2.5? There's currently a group of settings, positionmultiplier and position[axis]_multiplier, which are necessary to get the scale of position tracking to match up with the world scale. It seems like, if everything else here is correct, then these all ought to be 1.0. But they aren't; the default values were determined empirically, as (2.0, 2.5, 0.5). They also vary from game to game, and they didn't seem correct when I tried them. The symptom of not having these settings right is that when you translate your head, the rest of the world either translates the same way or the opposite way, by an amount proportional to how far off the settings are.
There are a couple related mysteries, like why the code that counter-rotates the tracking offset to correct for head-rotation doesn't work. So, what's going on? Let's start with the code that actually applies the translation. The position-tracking offset gets applied as a vertex-shader modification, in ShaderMatrixModification::DoMatrixModification. It looks like this (whitespace added):
outLeft = in
* m_spAdjustmentMatrices->LeftAdjustmentMatrix()
* m_spAdjustmentMatrices->ProjectionInverse()
* m_spAdjustmentMatrices->PositionMatrix()
* m_spAdjustmentMatrices->Projection();
outRight = in
* m_spAdjustmentMatrices->RightAdjustmentMatrix()
* m_spAdjustmentMatrices->ProjectionInverse()
* m_spAdjustmentMatrices->PositionMatrix()
* m_spAdjustmentMatrices->Projection();
In other words, we: take the game's matrix, apply the per-eye adjustment, unproject back out of clip space, apply the position offset, and project again.
This is all good and correct, except for one thing: Projection and ProjectionInverse do not come from the game's projection matrix! They're the forward and inverse of a different projection matrix, created in ViewAdjustment::UpdateProjectionMatrices, like this:
D3DXMatrixPerspectiveOffCenterLH(&matBasicProjection, l, r, b, t, n, f);
D3DXMatrixInverse(&matProjectionInv, 0, &matProjection);
Looking at the definition of D3DXMatrixPerspectiveOffCenterLH, we see that it depends on the near and far planes, the FOV, and the aspect ratio (HFOV is a function of zn and (r-l), and VFOV is a function of zn and (t-b)). The documentation shows the resulting matrix at the bottom, and from it we can see that if the near and far planes or the aspect ratio differ from the game's, this matrix will scale the axes differently. I believe this is where the position-tracking axis-specific multipliers come from.
There are a couple options for how to address this. One option is to apply position-tracking offset in the view matrix, rather than the perspective matrix. If I'm reading D3DProxyDevice::SetTransform right, then this is how eye adjustment is currently handled. The other solution would be to use the game's real projection matrix. Luckily, the game hands us its projection matrix in D3DProxyDevice::SetTransform, so with a little bit of plumbing, this should be achievable. Either of these should get the number of position-tracking tuning parameters down to at most one, or ideally down to zero.
There is currently a setting called Distortion Scale, whose function - and whose correct value - is somewhat mysterious. It's related to FOV; increasing it acts like reducing FOV, and decreasing it acts like increasing FOV. So, what is distortion scale?
Games running in Perception render into eye-buffers that have a monitor aspect ratio (16:9) rather than HMD aspect ratio (which varies between hardware, but is closer to 1:1). What distortion scale does is logically equivalent to this procedure:
1. Take a screen in the HMD's aspect ratio
2. Put the game's (wider) image inside it, touching the left and right edges, with black bars on the top and bottom
3. Zoom in on the center by a factor of (1+DistortionScale), cropping off some of the left and right and some of the black bars
4. Send the result into the barrel-distortion shader
There is one extra wrinkle, which is head-roll. Perception has multiple ways of applying roll, for compatibility reasons; some games have pixel shaders that depend on screen-space-up being world-space-up. One of the strategies is "pixel shader roll", which waits to apply roll until after the game has drawn everything - in effect, rotating the display between steps (2) and (3) above. This comes with a caveat, which is that it brings off-screen area into view; having some overdraw (a too-high distortion scale setting, counterbalanced by a too-high FOV setting) helps with this.
The main problem with distortion scale is that it makes it hard to figure out what the FOV setting should be; there are enough interacting pieces that you can't just look up the right value, and users resort to setting it empirically, which doesn't work very well. It also wastes some fill-rate (not a big deal), and will complicate attempts to use APIs to figure out the correct FOV.
The first band-aid fix is to separate the FOV setting from the VRBoost injected FOV, so that in the final output adjusting the distortion scale setting will only affect the presence of black bars, and not also affect FOV. The next step will be to figure out the aspect-ratio math so that the FOV setting is the real FOV, and the next step after that will be to get that FOV from the Oculus SDK or a device-specific config file. In the long term, we might want to decouple the eye-buffer resolution and aspect ratio from the game's expectations about these things, to get better image quality and performance, but that will probably produce a lot of compatibility problems.
Hi James
Are you aware of the recent change that applies additional FOV to the projection matrix? I think this might be what you are getting at above.
It was added in order to be able to get extra FOV on games that don't support VRBoost FOV or any other way of increasing the FOV of the game past a certain point. Additionally, it can be used as a performance booster. For example in Skyrim I can drop the VRBoost FOV right down to 75, and then bump the PFOV (projection FOV for want of a better name) up again so it feels natural, but the game's engine is only rendering for what it thinks is an FOV of 75, which unsurprisingly results in a performance bump.
I looked at the recent changes you made in the EvenNumberOfMinusSigns branch in your fork, and the changes for head tracking (which look good btw - it would be nice to lose that funny camera translation) don't appear to take this FOV reprojection into account. Losing that feature would be a step backwards, as it provides a number of advantages: increased FOV for games that have a hard limit (Far Cry 3 no longer requires Flawless Widescreen), a performance bump for games that need it (Skyrim), and support for games that have no FOV adjusters at all (Rainbow Six Vegas).
It requires an inverse projection matrix to be applied and then the reprojection for the new FOV to be applied, which is at odds with the current changes for your positional-tracking fix.
Further to your previous mail: we used to use the distortion scale for zooming in and out and for disconnected head view, but that had issues, so Grant resolved it with a separate zoom, which works much better. As it is now, the distortion scale is never adjusted; it is essentially a fixed, HMD-specific scaler.
I am not sure the eradication of all the knobs and buttons is necessarily a perfect solution, since users like to be able to tweak things, but a good set of defaults is always necessary (as some users don't like to tweak things). We now have 3 FOV "adjusters" involved: game FOV (which can sometimes be set in VRBoost, or in a game ini file), distortion scale (which is now never changed), and PFOV (which is adjusted through the 3D reconstruction menu or a hotkey). We could just come up with a set of defaults for these - which we essentially have already - rather than trying to remove them entirely.
Apologies if I am misunderstanding the intention of your previous post. Please take a look at the PFOV stuff and perhaps that is already some way to implementing what you are suggesting above.
Regards Simon
On 20 May 2015 at 02:53, jimrandomh notifications@github.com wrote:
Distortion scale historically came about with the very early Oculus versions and is a parameter of the warp calculation. I am not sure that saying this also increases FOV is entirely correct. The game is rendering the same FOV no matter what this is; it is just that the amount of warp it produces will make you see more or less of it. As Simon mentions above, this was used for a zoom, which was wrong. Now it should be a pretty fixed value. However, each user will still see differences depending on their IPD and the lenses they are using (as to whether they are seeing black borders).
At the current time we are in a difficult situation. Oculus wants SDK rendering (which would remove all of this), but at the same time is dumping support for DX9. Therefore we need to tread carefully and not produce solutions that will either be redundant or stop working.
"The first band-aid fix is to separate the FOV setting from the VRBoost injected FOV, so that in the final output adjusting the distortion scale setting will only affect the presence of black bars, and not also affect FOV."
It is probably confusing to call two separate things by the same name. True FOV really has nothing to do with black borders. Distortion scale does only affect black borders; it doesn't change anything in-game (FOV), just the user's view of it. If it were to only affect the black borders, you would have to apply a separate zoom component in the FX file, which seems counter-intuitive.
I would be interested to see where you are going with this, but I am not sure it is possible to ascertain a true matched FOV without reworking the aspect ratio or using SDK rendering.
I saw the changes applying FOV to the projection matrix in master, but haven't tried them out in any games yet. That seems like a big step in the right direction; if we have full control over the projection matrix, then much of what I wrote in the last comment would be moot and we could use that.
The fact that Oculus is pushing for SDK rendering and also is going DirectX 11-only is concerning. I think the main consequence is that we end up wanting to duplicate most of the work done by the SDK, including some very hard parts like managing timing and asynchronous timewarp. We also might end up with a separate utility that uses DirectX 11 and the SDK, gets a bunch of parameters and saves them to a file for the DirectX 9 proxy to use later.
When I talk about removing settings, I don't really mean removing the setting entirely; rather, I mean making it into something that users don't have to worry about, and then putting it in an Advanced menu or somewhere similarly out of the way.
We also might end up with a separate utility that uses DirectX 11 and the SDK, gets a bunch of parameters and saves them to a file for the DirectX 9 proxy to use later.
I fear that might not even be possible. I get the impression that the vast majority of the detail is moving out of reach, hence the removal of client rendering. The OVRService now manages all the complicated horrible bits without the developer needing to know much about them (from what I gather). As for the dropping of DX9, I imagine that is to facilitate the really funky recent changes like a proper timewarp implementation (including positional), which would be practically impossible to do in DX9 (given its rather poor support for multithreading as I understand it)
Unfortunately for Vireio it means we are going to have to guess at things like the distortion parameters (we don't even use a mesh) and chromatic aberration params as the SDK matures, as the kernel part of the SDK is being deprecated and won't be available in future versions. Certainly come CV1 we'll have to best-guess these things.
Also, we have to hope that OVR don't remove extended mode altogether; that could make life very difficult if they did. Maybe I'm being pessimistic; for now we appear to be ok, though I do have all sorts of trouble trying to run DX9 games with the Rift turned on with SDK 0.6.0.0 installed (without Vireio even running) - so much so that I uninstalled and rolled back to 0.5, which was also not encouraging.
On 20 May 2015 at 15:11, jimrandomh notifications@github.com wrote:
It sounds like we need to reach out to Oculus and find a collaborator on their engineering team. Asynchronous timewarp in D3D9 is probably a lost cause, but we should certainly be able to get things like distortion constants from them. We might even be able to get them to provide some prototypes; I think we can make the case that they want Perception to be in good shape when CV1 launches, since it greatly expands the catalog of AAA games playable in VR.
From experience I don't think we would get very far with that. Oculus has always been staunchly opposed to injection VR. We can (currently) get the distortion figures polled from the Oculus SDK. This may change in a future release (maybe even 0.6.0) as they are removing the option for client rendering. The warp calculation could probably use bringing up to date; I don't think the code has changed much since DK1 (apart from chromatic aberration), so we could probably take the latest example from the SDK.
My instinct is to think that different people inside Oculus probably don't all agree, and that much of the opposition is probably that they want to avoid putting their name on something that'll give people simulator sickness. The key will be to make the case that it's possible to get everything right and the latency low. I do think that case can be made, but there's a bit of a way to go.
I thought I was going to go in and fix some FOV handling, maybe hook up the API call to replace the setting. I may still do that. I spent a while stumbling around confused, because nothing FOV- and distortion-scale related was doing quite what I thought it should be. Eventually I dug into the distortion shader, and the key was stepping through it and reasoning really carefully about what coordinate spaces things are in. It turns out the distortion shader was applying an aspect-ratio correction in a place where it shouldn't, and this resulted in an artifact which looks very much like incorrect FOV - but on the Y-axis only.
I also did some random refactors, added a "which gamepad" setting (my controller really wanted to be Player 2 for some reason), slightly reduced latency (which I have no way to measure), and got rid of the comfort-mode setting (it's just always on and you can unbind the keys if you don't want it).
The aspect ratio thing... that's not related to the pixel shader roll implementation, is it?
As for comfort mode, that sounds a reasonable approach, though I would suggest the keys are unbound to begin with, and anyone wishing to use it (possibly only one person in the entire world) can then bind the keys themselves. It shouldn't be on by default, really.
On 27 May 2015 at 06:57, jimrandomh notifications@github.com wrote:
How exactly will the comfort mode now work? If I am using a gamepad I would want the right stick to rotate me normally. Will it continue to do this if unbound, but then serve as comfort mode if bound? We really need it to work like this...
It's not related to pixel shader roll; the distortion bug manifests regardless of which roll type you're using, and seems to go all the way back to the very first versions of Perception.
I think Comfort Mode (snap turning) is better/more useful than you realize. With a snap size of 45deg, it can be the main way of turning. The settings in Oculus' Tuscany demo are right-stick turn, left and right shoulder buttons snap-turn 45 degrees. The settings I'm using when I play Skyrim are right-stick snap-turn 45 degrees on an X threshold, no turn axis. When there is a turn axis, it's not handled by Perception, but by the game itself.
For the short term, defaulting the snap-turn to unbound and letting people use stick yaw is acceptable. But I think we should keep in mind:
Stick yaw control is such VR poison that removing it may be the right move -- swivel chair/stand or don't play. -- John Carmack
In the past, Perception has been exclusively limited to people with iron stomachs, for other reasons. But we don't want to stay that way. The simple thing would be to copy Tuscany's controls, and have both stick yaw and snap turning. Unfortunately, gamepad buttons are a scarce resource, especially when adapting games that designed their control scheme to exactly match the controller. The long-term ideal solution is to (a) intercept DirectInput calls so Perception can fully control what the game sees, (b) build in a button-remapping tool, to replace the role people currently fill with XPadder and AHK, and (c) throw in a first-time-setup walkthrough where people make their hotkey selections and reconcile Perception's hotkeys with the game's hotkeys. That would also make it feasible to deal with silliness like not being able to use the Perception menu while at the game main menu because of buttons passing through to the game and picking game menu items you don't want.
The problem, as you say, is that buttons are scarce. If you look at a game like Skyrim you really can't use anything other than a stick. Therefore you have a dual-stick mode whereby it's a regular turn (up to a certain threshold), or you give the user the choice. In terms of an open solution, I don't think "we took this away because it may make you sick" is really the right way to go. As long as users know that they can toggle between the two, let's leave the choice up to them. This is coming from someone who hates comfort-mode turning; ironically it actually makes me feel sick, while regular turning I am fine with. John Carmack's comment must be read (currently) in the context of a wireless Gear VR. Suggesting someone use a DK2 with a swivel chair is a recipe for broken cables or asphyxiation.
Having thought about this, I actually think the best method is still a toggle with 3 settings: 1) regular right-stick turning, 2) regular turning with X threshold, 3) full comfort mode - in addition to comfort-mode left/right bounds if other keys are free.
I think this gives the user the most amount of control over the program.
I don't mean to suggest we get rid of the option to do regular right-stick turning. We don't even have the ability to do that; it's under the game's control. What I mean is we should push people towards good defaults.
You said snap-turning is worse for you (from a simulator-sickness perspective) than stick yaw. Does that also apply in, eg, Tuscany? One problem with snap-turn in the last released version is that it defaulted to 90 degrees, which is too much to stay oriented; 45 degree snap turns might work better for you. This also helps with the cable-tangle problem; if you've got some way to stay oriented in the real world (I do this by feeling my desk's legs with my feet), you can limit the swivel to a small range of angles and use snap-turn to cover the rest.
The buttons situation is not as bad as it first appears, because one axis is freed up by replacing pitch control with motion tracking. With right stick X as a discrete snap turn, right-stick Y can also be two discrete things; in my case, I use right-stick up for jump, and right-stick down for crouch/sneak toggle.
Have we ever considered a version of snap turn that snaps to where they are looking? That seems like it could solve the orientation issues with preset snap turns, and also be somewhat intuitive.
On Wed, May 27, 2015 at 10:19 AM, jimrandomh notifications@github.com wrote:
Not exactly, but I tried something similar in Tuscany a while back (a lock-rotation-to-head button). That didn't work, because it can't handle large rotations without excessive neck movement. I think snap-to-look would have a similar problem: it only helps you stay oriented if you bring your head back to center each time you do it, so for a 180-degree turn you've got way too much neck motion.
Yeah, I get the sickness in both Tuscany and Gear VR with comfort mode. If you watch the video I did for the comfort mode tutorial (with Homefront), I always turn it down to the minimum level (and this really should be the default - ~20/30 degrees). I much prefer standing and actually turning around all the time to using any stick, but as said, this doesn't work too well with DK2 (it's great with Gear VR). Note that Carmack is saying that any yaw-based movement is quite bad (implying that he also means comfort mode). Fingers crossed for Vive.
We could actually nullify the right stick yaw (in memory) but it would lead to the same judder effect you get if you try and move the pitch.
That's a really good idea with the right stick jump / crouch. I hadn't thought about it but will probably do that myself as sounds very intuitive.
Okay, so I think we are on the same page: the ideal situation being to give the user the most customisation ability with the simplest interface. I don't think the X Offset turning is currently possible in VP; may be worth having this as an option somewhere.
Josh: I don't quite see how this will work... unless it is at a pupil-detection level. By looking at something, you are already looking at it, so all movement will start from this position. As James says, the only way this works is to return your head to looking straight ahead with the scene staying fixed.
A data-dump from my notes about what's currently not ideal. Most of this I'm not going to tackle for a while; in the short term I'm going to untangle FOV and the distortion shader the rest of the way, and try to get principled (SDK-based) values for VRBoost FOV and for YOffset (which is something to do with the lens Y center-position).
Remaining rendering issues that I'm aware of:
My untested theories about where latency is coming from:
Remaining miscellaneous issues that I'm aware of:
Issues that I don't expect to ever be fixed:
Hi James,
I see you have upgraded the SDK. Does this have any effect on the crashing behaviour? Can you consistently load games with 0.6.0?
When would be a good time to merge? Now?
Grant
There are probably some new bugs, but I don't think there's anything so severe it shouldn't be merged. Upgrading to 0.6.0 broke SDK pose prediction (I had to stub out some related stuff to get it to compile), but I don't think it was working before anyway.
I'm going to make an attempt to get Perception onto the Oculus SDK renderpath, which, if it works, would make Timewarp work (the synchronous version that lowers latency, not the asynchronous version that covers for low framerate). At that point it'd probably be time to give it a major version number (3.0) and make a release.
Very interested to know how you intend to do it, as the SDK no longer supports DX9; are you going to bridge between the two, DX9 -> DX11? On 30 May 2015 03:37, "jimrandomh" notifications@github.com wrote:
The SDK pose prediction did work before; there was a noticeable difference. It would be a shame to lose this, but I understand that is the way the SDK is leading us. Good luck with the SDK integration. I think it will be a difficult task, but if you can get it working it will help us future-proof VP. If not, we do have some other ideas.
I've looked into this a bit more. There are two viable-looking approaches. The first is to use the main ovrHmd_SubmitFrame path, giving it D3D9 textures and initializing with ovrRenderAPIType::ovrRenderAPI_D3D9_Obsolete. I think this will work because while all the D3D9 structs are gone from the header files, they aren't gone from the binary interface - hence legacy D3D9 apps mostly still working with the new SDK installed - and the D3D9-specific portion of the 0.4 SDK header is small and simple.
The second approach, which I'm leaning towards, is to use the render path found in the SDK's Samples/CommonSrc/Render/Render_D3D11_Device.cpp, which contains timewarped mesh distortion shaders and has an Apache license header on it. It does have a D3D11 dependency, but the dependency does not run deep; there's also an OpenGL implementation in the same directory, and the commonalities are factored out, so a new D3D9 version should be no more different from the D3D11 version than the D3D11 version is from the GL version. Doing it this way requires setting up some of the parameters ourselves, and managing the frame-timing dance, which is tricky but feasible.
Regardless of which of these strategies is chosen, all the non-distortion features that got hacked into the distortion shader need to be moved elsewhere or dropped. This includes the top-left Perception logo, VR mouse, and the Z-filtering thing.
Good luck with that, and let us know how it goes. I think it is probably wise at the moment to leave the merge until you have got this working. The danger is that if we take your latest work with 0.6.0 integration, we are actually in a position where we have a non-working program, as we can't run it on the 0.5.0 runtime (and 0.6.0 causes crashes and prevents DX9 devices from being created).
Hi James, Any luck with this or is it starting to look like this may not be feasible?
I'm still working on it, but have had less time to spend than I'd hoped.
This pull-request is a placeholder to make the branch visible; it's not actually ready to merge. I'll be using this page as a development log. Starting with an inspirational quote:
As of now, Perception contains a fair number of things that have the flavor of minus-sign errors, which cancel. Unfortunately, sometimes they almost cancel, but not quite - making things look slightly wrong. The goal of this branch is to get the rendering precisely right. That means finding principled answers to the parameters that are currently set empirically or left as knobs for the user. It means not having a distortion scale interacting with a FOV override to cancel out an aspect ratio mismatch between games and displays. It means bringing HUDs into world space, so they'll counter-rotate with your head rather than stay glued to your peripheral vision.
All of these things are hard! I expect a few bumps along the road, in the form of broken compatibility with games and devices I'm not testing. As per my usual development style, there's also going to be refactoring, though somewhat less merge-conflict-prone this time.