OpenVR-Advanced-Settings / OpenVR-AdvancedSettings

OpenVR Advanced Settings Dashboard Overlay
GNU General Public License v3.0

Infinite Walk #368

Closed feilen closed 4 years ago

feilen commented 4 years ago

https://github.com/feilen/OpenVR-AdvancedSettings/commit/93af92f07369c16c7fdae620f7055ff9a4412026

Didn't want to submit a PR since what I have is a little rough around the edges, though it works beautifully (needs UI/configurability and one feature which I'll discuss at the bottom)

In VRChat, many people use what they call 'infinite walk' to get what looks like natural motion around the world: they walk to a corner of their playspace and rotate the world so that, while physically walking in circles, they walk in a straight line in VR. (Someone wrote a guide here: https://www.reddit.com/r/VRchat/comments/f6ylgg/i_made_a_guide_for_infinite_walking/)

While this works great, it's a pretty tedious manual process that only works on VRChat and other games with similar turning.

What I've implemented works with any game that supports an infinite playspace size. It uses the 'snap rotation' feature, but automatically: when you reach a specified distance from your chaperone wall, it calculates the angle from your forward direction to the wall, finds the second-closest wall, and rotates the playspace so that what you're looking at in VR lies along the longest stretch of that wall of your playspace.
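To make the idea concrete, here's a rough 2D sketch of the geometry (names and exact math are illustrative, not the actual implementation in my commit):

```cpp
#include <cmath>

struct Vec2
{
    double x, y;
};

// Yaw of the line segment from the HMD's position to a point on a wall.
double segmentYaw( Vec2 from, Vec2 to )
{
    return std::atan2( to.y - from.y, to.x - from.x );
}

// Angle (radians) to rotate the playspace: cancel the relative yaw
// between the HMD and the HMD-to-wall direction, then add +/-90
// degrees so the user ends up walking parallel to the wall.
double snapAngle( double hmdYaw, double hmdToWallYaw, bool turnLeft )
{
    constexpr double pi = 3.141592653589793;
    double angle = hmdToWallYaw - hmdYaw + ( turnLeft ? pi / 2 : -pi / 2 );
    // Normalize to (-pi, pi] so we always take the shorter rotation.
    while ( angle > pi )
        angle -= 2 * pi;
    while ( angle <= -pi )
        angle += 2 * pi;
    return angle;
}
```

The turn direction (left vs. right) is what the second-closest wall decides, so the user ends up walking along the longest open stretch.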

(terrible illustration below) [image: badillustration]

This works great in VRChat, I can smoothly walk around and chat with people without ever pressing a button or glancing at my hand, though there's a tendency to spawn me off the side of a world if I don't press my 'reset position' bind between worlds.

There's a handful of things I need to finish before this is PR ready, but I wanted to get eyes on it now and see if anyone had any must-haves before I start cleaning everything up.

username223 commented 4 years ago

Very impressive effort.

I haven't taken a thorough look yet since I don't really have the time, but I can take a closer look later today.

A few things,

Cool feature, I'll check it out in VR later today.

feilen commented 4 years ago
ykeara commented 4 years ago

Sooo, I didn't check and verify the math, but yeah, good idea: it's essentially space-constrained redirected walking.

For the current setup, I would think it belongs in MoveCenterTabController rather than the chaperone tab. It also might be worth running by @Kung-vr, as some of the math may not need to be duplicated.

The only other real major concern is making sure that the math is not prone to rounding errors; I know that quaternions were required to avoid some rounding errors.

It would also be nice to have the option of gradual turning as well, rather than strictly a snap turn, though that might be more of a pain than it's worth.

feilen commented 4 years ago

It's probably fine if the math has rounding errors, as we're only manipulating the offset seen by MoveCenterTabController, meaning any errors would just materialize as the playspace being rotated to a slightly less than ideal angle by a degree or so, not drifting or anything.

I would appreciate help on using quaternions for a more ideal calculation though; my current attempt consists of digging through the code to find other examples of people using them... πŸ˜…

Kung-vr commented 4 years ago

This is great! Excellent use of the existing rotate space features.

In VRChat, many people use what they call 'infinite walk' to get what looks like natural motion around the world: they walk to a corner of their playspace and rotate the world so that, while physically walking in circles, they walk in a straight line in VR. (Someone wrote a guide here: https://www.reddit.com/r/VRchat/comments/f6ylgg/i_made_a_guide_for_infinite_walking/)

While this works great, it's a pretty tedious manual process that only works on VRChat and other games with similar turning.

We've actually had this in OVRAS for a long time with our existing snap, smooth, and space turn features. Specifically Space Turn was designed for manual redirected walking to smoothly cancel any physical rotation of the player without the need to try to match an arbitrary smooth turn speed.

Anyway, I really love that your implementation here snaps along the yaw of the nearest quad. I had only implemented a manual rotation cancel (Space Turn) because I thought the automatic loss of control would get annoying. A single snap with your 20cm buffer zone is a great middle ground between the benefits of smooth correction like Space Turn provides and staying out of the way, as an automatic feature should. The only tradeoff is that users have to make sharper turns, or not travel in exactly straight lines, as they approach the chaperone threshold. An added benefit of a single snap is that it should make fewer people sick.

An additional edge-case consideration though... if I'm free-walking around and enter the snap threshold near a chaperone quad, and remain nearest to that quad but decide I want to change direction in VR such that my new destination is behind the chaperone after the automatic snap-turn, I'd have to back up from the wall beyond the 20cm buffer and approach again right? Would it be a safety concern if users expected another snap but it didn't occur causing them to bump stuff IRL? Any thoughts on mitigation of that? We might be able to use haptics to inform the user as they approach the snap threshold. It'd help them prepare to turn as well as inform them via absence of haptics when a mistakenly expected second-snap is about to not occur. Haptics could even be specific to right/left so the user knows which way the upcoming turn will be. (This isn't necessary to implement, just some thoughts)

A couple of comments: on line 379 of ChaperoneTabController.cpp you start the process of getting an un-rotated pose, but then leave it in the rotated reference frame. utils::initRotationMatrix( hmdMatrixRotMat, 1, 0.0f ); rotating by zero radians here shouldn't affect the matrix you get back from poseHmd.mDeviceToAbsoluteTracking. I'm assuming this is just a result of copy-pasting from either Space Turn or the HMD rotation counter; those need un-rotated reference frames because they compare yaw across event loop ticks.

In this case remaining in the rotated reference frame should be working out anyway because the chaperone you're comparing against should always be in the rotated reference frame too. So you can just skip the matrix multiplication and grab the device pose from the HMD to use directly.

I haven't checked in VR yet but I checked over your rotation math and if I'm correct you're first cancelling the relative yaw between the hmd and the hmd-to-wall then adding an additional 90 degrees to get the hmd facing along the wall, the turn direction of which is dependent on the hmd's yaw relative to the 2nd nearest wall, such that the user walks along the nearest wall away from the second nearest. That's what it appears that line 417 would do. Am I right?

It'd be great if the functionality on line 417 was made a little more modular, getting it out into a few functions like hmdToNearestQuadYaw() and maybe even hmdToSecondNearestQuadYaw(). That way we wouldn't need to re-implement that if I (or you) were to do a smooth turning automatic redirected walking implementation. I might get around to that sooner if your hmd-vs-quad yaws were in some nice easy to use functions like that. (Or if you're ever up to taking on a smooth implementation, you'd probably need that to avoid duplicated code)
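As a starting point for that kind of refactor, the quad-ranking part could be as simple as a point-to-segment distance helper. A rough, hypothetical sketch (2D; the names are made up, not OVRAS code):

```cpp
#include <algorithm>
#include <cmath>

struct Point2
{
    double x, y;
};

// Distance from the HMD's 2D position to a chaperone quad treated as
// a line segment A-B. Something like this is what a
// hmdToNearestQuadYaw()-style helper would rank quads with.
double pointToSegmentDistance( Point2 p, Point2 a, Point2 b )
{
    const double abx = b.x - a.x;
    const double aby = b.y - a.y;
    const double lenSq = abx * abx + aby * aby;
    // Degenerate segment: fall back to point-to-point distance.
    double t = lenSq > 0.0
                   ? ( ( p.x - a.x ) * abx + ( p.y - a.y ) * aby ) / lenSq
                   : 0.0;
    t = std::clamp( t, 0.0, 1.0 ); // stay within the segment's endpoints
    const double cx = a.x + t * abx;
    const double cy = a.y + t * aby;
    return std::hypot( p.x - cx, p.y - cy );
}
```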

Overall this looks great. Thanks for your time/effort making it.

Kung-vr commented 4 years ago

It's probably fine if the math has rounding errors, as we're only manipulating the offset seen by MoveCenterTabController, meaning any errors would just materialize as the playspace being rotated to a slightly less than ideal angle by a degree or so, not drifting or anything.

I would appreciate help on using quaternions for a more ideal calculation though; my current attempt consists of digging through the code to find other examples of people using them... πŸ˜…

Rounding errors are less of a concern now than they were when I started throwing doubles all over the place. Originally I had been trying to mitigate chaperone drift. Small angle discrepancy makes a huge difference. I managed to get hmd centered rotation working, but it involves universe center and chaperone corner offset compensation. I actually swing the universe center around at possibly great distance when we rotate, then compensate the offset such that everything cancels and it appears we rotate in place. So you can see why operating at small angle differences over large distances could have been causing problems...

Ykeara is remembering that from the mid-implementation troubles though. I rewrote the whole motion system because I wasn't comfortable with any possibility of drift. So now we operate with a cached chaperone that is aligned to the current offset and rotation only once rather than dynamically updating the result of the last alignment compounding rounding errors like the original system.

The quaternion yaw calculation was infinitesimally more accurate, so it mattered back before the rewrite. We kept the doubles instead of floats because they are arguably more performant anyway on modern architectures. But quaternion yaw is also necessary to avoid gimbal lock: if you rely only on the atan2 method, your snap turn could have people crash into walls if they look straight up or down. This is only necessary for the HMD pose; the yaw of the line segment from the HMD position to the projected point on the chaperone quad can use atan2.
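For reference, a minimal sketch of the quaternion yaw extraction being described (Y-up convention as OpenVR uses; the struct is illustrative, not the actual vr:: quaternion type). Unlike projecting a forward vector and calling atan2 on it, this stays well-defined when the HMD pitches toward straight up or down, where the projected forward vector collapses to zero length:

```cpp
#include <cmath>

struct Quat
{
    double w, x, y, z;
};

// Yaw (rotation about the vertical Y axis) extracted directly from a
// unit quaternion. For a pure rotation about Y by theta, the numerator
// reduces to sin(theta) and the denominator to cos(theta).
double yawFromQuat( const Quat& q )
{
    return std::atan2( 2.0 * ( q.w * q.y + q.x * q.z ),
                       1.0 - 2.0 * ( q.x * q.x + q.y * q.y ) );
}
```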

username223 commented 4 years ago

@feilen

Copy-paste error, I'm on Ubuntu 19.10 and couldn't get it compiling with my system, so I had to make a 16.04 chroot (I'll clean that up)

That's fine. Is there a reason you're building the AppImage and not just building normally?

I can do either, there's some argument for always using const-reference for passed values and pointers for returned values (avoids unexpected surprises) but no strong preference there.

We're on C++17 so move semantics should remove the need for passing pointers. I would prefer not having to deal with pointers basically at all unless it's sufficiently contained in a wrapper. I know if you're writing more high performance stuff it matters more, but we get way more benefits from not having memory issues/nulls than we do the potential extra speed.

I can do that, still on the same tab on the GUI right?

Yes, we'll fit it in wherever there's space and it makes sense. I don't use the motion stuff too often so I'm not sure which tab is the best.

I ran it in VR and noticed something. My playspace is 5 sided, shaped like this so I can tell which wall has my TV. The current implementation acts inconsistently with this setup, sometimes rotating the full amount, sometimes rotating 10-15 degrees and sometimes not visibly rotating except for a small judder.

It's probably not possible to account for weirdly shaped rooms, but I thought you might want to know in case it influenced some design decisions.

feilen commented 4 years ago

It would also be nice to have the option of gradual turning as well, rather than strictly a snap turn, though that might be more of a pain than it's worth.

I think that would be easy enough, though I have some doubts that it'd be less sickening.

We've actually had this in OVRAS for a long time with our existing snap, smooth, and space turn features. Specifically Space Turn was designed for manual redirected walking to smoothly cancel any physical rotation of the player without the need to try to match an arbitrary smooth turn speed.

Yeah, that's what I essentially piggybacked off of, that and the linux instructions are pretty much the only reason I got it done in a day πŸ˜›

An additional edge-case consideration though... if I'm free-walking around and enter the snap threshold near a chaperone quad, and remain nearest to that quad but decide I want to change direction in VR such that my new destination is behind the chaperone after the automatic snap-turn, I'd have to back up from the wall beyond the 20cm buffer and approach again right? Would it be a safety concern if users expected another snap but it didn't occur causing them to bump stuff IRL? Any thoughts on mitigation of that?

Yes, that's correct. While toying with it (for the hour or so when I had it working) that was mildly annoying, but not terribly so for a low-pace game (social things like VRChat), where this feature really shines. I'm not sure I'd want to try something particularly actiony with this without at least a week of getting used to it, so I'm not sure how it'd play out there. Audio feedback (a 'whoosh' noise? a ping?) would probably be sufficient, but you could combine it with the other chaperone warning options for that. I'd also like to try a (toggleable?) option that limits the snap-turning to walls you're facing towards, so that you don't accidentally somehow back into a wall and end up flipping the world suddenly.

Another idea I'd thought of was sort of an 'edge friction', where as you get physically closer to the boundary, it adds an angular momentum equivalent to how close you are. That would fix the 'object just out of reach' issue, as trying to follow it would always end up turning you into your space, but it's probably more of an iron-stomach feature.
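A tiny sketch of what I mean by 'edge friction' (all names and parameters hypothetical): the automatic angular speed ramps up linearly as you physically approach the boundary, hitting the maximum right at the wall and zero at the threshold.

```cpp
#include <cmath>

// Hypothetical 'edge friction' curve: no turning outside the buffer
// zone, linearly increasing turn rate as the user closes in on the wall.
double edgeFrictionRate( double distanceToWall, double threshold,
                         double maxRate )
{
    if ( distanceToWall >= threshold )
        return 0.0; // outside the buffer zone: no automatic turning
    return maxRate * ( 1.0 - distanceToWall / threshold );
}
```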

A couple of comments: on line 379 of ChaperoneTabController.cpp you start the process of getting an un-rotated pose, but then leave it in the rotated reference frame.

Whoops, yeah you're right, copy-paste error.

I haven't checked in VR yet but I checked over your rotation math and if I'm correct you're first cancelling the relative yaw between the hmd and the hmd-to-wall then adding an additional 90 degrees to get the hmd facing along the wall, the turn direction of which is dependent on the hmd's yaw relative to the 2nd nearest wall, such that the user walks along the nearest wall away from the second nearest. That's what it appears that line 417 would do. Am I right?

Correct, the intent is that 'the object you're walking towards in VR' is always straight along the wall you run into, in the direction that gives you the most walking space.

It'd be great if the functionality on line 417 was made a little more modular, getting it out into a few functions like hmdToNearestQuadYaw() and maybe even hmdToSecondNearestQuadYaw(). That way we wouldn't need to re-implement that if I (or you) were to do a smooth turning automatic redirected walking implementation. I might get around to that sooner if your hmd-vs-quad yaws were in some nice easy to use functions like that. (Or if you're ever up to taking on a smooth implementation, you'd probably need that to avoid duplicated code)

Of course! This is just a rough draft I cobbled together in a day. I think my preferred implementation would be something more like double poseToNearestQuadYaw(pose) and vector<double> poseToQuadsYaw(pose), but perhaps best would be sprinkling some templated types around. I'll have a deeper dive soon, along with using a proper editor (I couldn't get qtcreator running in my chroot)

Rounding errors are less of a concern now than they were when I started throwing doubles all over the place. Originally I had been trying to mitigate chaperone drift. Small angle discrepancy makes a huge difference. I managed to get hmd centered rotation working, but it involves universe center and chaperone corner offset compensation. I actually swing the universe center around at possibly great distance when we rotate, then compensate the offset such that everything cancels and it appears we rotate in place. So you can see why operating at small angle differences over large distances could have been causing problems...

Ahh okay, so the true error in rotation is probably much higher than if it were truly natively rotating around the headset. I'll try to see if I can't make a quaternion-to-endpoint calculation then, I just wanted something that mostly-works to begin with.

I would prefer not having to deal with pointers basically at all unless it's sufficiently contained in a wrapper. I know if you're writing more high performance stuff it matters more, but we get way more benefits from not having memory issues/nulls than we do the potential extra speed.

I meant more that function(const type& a) makes it clear we're planning to use 'a' in a read-only way, and function(type* a) makes it clear we're returning something, whereas function(type& a) is ambiguous. I don't really think there's any performance benefits, it's just a style I've picked up as habit from my employer. Either's fine!
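A contrived illustration of that convention (the function and names are made up): const& marks a read-only input, a pointer marks an out-parameter, so the & at the call site makes the mutation visible.

```cpp
#include <string>

// 'input' is clearly read-only; 'out' is clearly written to, and the
// caller has to pass &result, which flags the mutation at the call site.
void formatLabel( const std::string& input, std::string* out )
{
    *out = "[" + input + "]";
}
```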

It's probably not possible to account for weirdly shaped rooms, but I thought you might want to know in case it influenced some design decisions.

My hope was that my present implementation would be able to handle that, as when walking in circles you're generally following the outline of the room, so any corner should be fine. I hadn't tested it with concave boundaries though, my space is like the one in my drawing. It's possible I could be failing to account for NaN or some such, I'll try to test it later.

feilen commented 4 years ago

That's fine. Is there a reason you're building the AppImage and not just building normally?

I did that at first to get it working, but now I just rsync the build out of my chroot and run it from there. Works fine!

Kung-vr commented 4 years ago

I'll try to see if I can't make a quaternion-to-endpoint calculation

Actually you're fine handling it the way you are so far (atan2 for line segment). As long as the hmd pose yaw relative to the tracking space is determined from a quaternion (which you're already doing) it should be fine.

Kung-vr commented 4 years ago

I think my preferred implementation would be something more like double poseToNearestQuadYaw(pose) and vector<double> poseToQuadsYaw(pose)

Yeah that would be even better and more versatile if we need to use any other tracked objects.

Also, the strange issues username223 was noticing could possibly be due to trying to calculate the larger open space for the user by using the angle vs. the projected point on the second-nearest quad (I'll have to test in VR myself to verify later too). Instead, you might get a better result by using the nearest quad only: find the distance to its corners, then turn the space so the HMD points parallel to the wall, in the direction of the further corner of the nearest quad. If you want to collaborate on it, I could maybe set up such a scheme later on once you've got a PR going or something.
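A hedged sketch of that corner-based scheme (2D, names hypothetical): on the nearest quad A-B, aim the user parallel to the wall, toward whichever corner is further from the HMD.

```cpp
#include <cmath>

struct P2
{
    double x, y;
};

// Yaw of the wall-parallel direction that leads toward the further
// corner of the nearest quad (A-B), given the HMD's 2D position.
double yawTowardFurtherCorner( P2 hmd, P2 cornerA, P2 cornerB )
{
    auto distSq = []( P2 p, P2 q ) {
        const double dx = p.x - q.x;
        const double dy = p.y - q.y;
        return dx * dx + dy * dy;
    };
    const bool aIsFurther = distSq( hmd, cornerA ) > distSq( hmd, cornerB );
    const P2 from = aIsFurther ? cornerB : cornerA;
    const P2 to = aIsFurther ? cornerA : cornerB;
    // Direction along the wall, from the nearer corner to the further one.
    return std::atan2( to.y - from.y, to.x - from.x );
}
```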

feilen commented 4 years ago

Yeah that would be even better and more versatile if we need to use any other tracked objects.

When I have some extra time I'm going to abstract as much of that central clump of code out into utilities as I can.

using the nearest quad only and finding the distance to its corners

Oh! Yes, that's exactly what I want, and cheaper to implement too. I can throw that together in a minute when I get home. My thinking is that the inconsistent behavior they experienced is related to the fact that the current implementation checks, for each wall, which other wall is closer, when I actually only want to see which corner of the operated-on wall is closest.

Another issue that could cause it is this situation: [image]

While turning that way is technically away from the nearest corner, it doesn't give you what I imagine you'd expect is the best space. This isn't an issue in purely-convex layouts.

Kung-vr commented 4 years ago

Though the 2nd-nearest method could also give strange results in that instance, because it's not an adjacent wall: [image]

The alternative would be to use the playspace rectangle, but we don't keep this updated with motion, so it would have to be a calculation from offset center taking into account size and orientation of the original playspace rectangle. It would give up some walkable space for users to do that though.

In your above example, the bigger danger is approaching that concave section from above: [image] Approaching the snap-turn activation threshold for the next quad like that, and being turned to face the further corner parallel to that quad, would cause dangerous issues. So that case would need to be handled.

feilen commented 4 years ago

Yeah, it's a little odd in that case too. For now I think the results will be best with an all-convex playspace, but I'd prefer to figure out how to support all of them moving forward. For the last thing you drew, I imagine the best scenario would be to wait until you pass the 'activation threshold' from the other side, something like this:

[image]

Does that make sense?

feilen commented 4 years ago

Cleaned up a bit and switched to using corners instead of nearest wall: https://github.com/feilen/OpenVR-AdvancedSettings/commit/f55b57c3b1079487a624fd3d94572b2d3927f652 (next commit is clang-formatting)

feilen commented 4 years ago

An idea: My original plan was to snap whichever angle was closest, always (so, if you came at the wall at a 45 degree angle, it would always rotate it 45 more degrees, not 135 if you were close to a corner) but it'd be possible to do:

This would mean that if you came at the wall anywhere in the middle, it would snap you to the most convenient angle, but if you came at it near a corner (such as when doing a long walk around the edges) it'd always snap away from the corner.

Edit: (expanding) the issue I had with the prior approach is that when you're walking around the edge of the space, you tend to walk very close to perpendicular to a wall, which means that any slight variance of angle can end up turning you towards the corner.
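Very roughly, the rule I'm describing could look like this (all names hypothetical; just a sketch of the decision, not the real code): normally snap to whichever along-wall direction needs the smaller turn, but inside a corner zone always pick the direction that leads away from the corner.

```cpp
#include <cmath>

// Pick which of the two wall-parallel yaws to snap to. Near a corner,
// force the direction that leads away from it; otherwise take whichever
// requires the smaller physical turn from the current HMD yaw.
double chooseSnapYaw( double alongWallYawA, double alongWallYawB,
                      double hmdYaw, bool nearCorner,
                      bool aLeadsAwayFromCorner )
{
    if ( nearCorner )
        return aLeadsAwayFromCorner ? alongWallYawA : alongWallYawB;
    constexpr double twoPi = 6.283185307179586;
    auto turnSize = [&]( double target ) {
        // std::remainder folds the difference into [-pi, pi].
        return std::fabs( std::remainder( target - hmdYaw, twoPi ) );
    };
    return turnSize( alongWallYawA ) <= turnSize( alongWallYawB )
               ? alongWallYawA
               : alongWallYawB;
}
```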

feilen commented 4 years ago

Tried that... it's way better! Pretty much always turns the way you expect it will.