xEvoGx closed this issue 7 years ago
Ah perfect...
https://www.youtube.com/watch?v=lHzCmfuJYa4
...and yea that could be done rather easily on our own, but then again if it's so simple, it's one of those "Features" that can look very good for VRTK. :smile:
One more post for those interested! I didn't even consider using Vignetting to very easily create the Tunneling based on velocity!
http://fusedvr.com/does-this-help-your-vr-sickness-tutorial/
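The velocity-driven vignette idea from that tutorial can be sketched roughly like this, assuming Unity's Post Processing Stack v2 Vignette effect is already set up in the scene. The field names and thresholds below are illustrative placeholders, not taken from the tutorial:

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Sketch: drive vignette intensity from the player's current speed.
// `currentSpeed` is assumed to be fed each frame by your locomotion code.
public class VelocityVignette : MonoBehaviour
{
    public PostProcessVolume volume;   // must contain a Vignette effect
    public float currentSpeed;         // set externally, in m/s
    public float minSpeed = 0.5f;      // speed where tunneling starts (placeholder)
    public float maxSpeed = 4f;        // speed where tunneling is strongest (placeholder)
    public float maxIntensity = 0.6f;  // how far the vignette closes in

    private Vignette vignette;

    void Start()
    {
        volume.profile.TryGetSettings(out vignette);
    }

    void LateUpdate()
    {
        // Map speed into [0, 1] and use it to scale the vignette.
        float t = Mathf.InverseLerp(minSpeed, maxSpeed, currentSpeed);
        vignette.intensity.value = Mathf.Lerp(0f, maxIntensity, t);
    }
}
```

Exposing `minSpeed`/`maxSpeed`/`maxIntensity` as inspector parameters is what makes this approach easy to tune per game.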
someone go tell @fuseman to come and contribute to the repo :)
+1 for tunneling options.
Another +1. Google Earth uses this as well, and it works great: https://vimeo.com/177549565.
The biggest question we're probably going to have to answer is: what method of implementation do we use?
From what I've seen there are several different methods of implementing tunnelling/vignetting:
Which leaves us with key questions: What are the tradeoffs of each one? Is there any reason one is better than the others? Are there performance tradeoffs that make one method a bad idea? Sadly, since most of the available information I can find comes either from research papers by people trying this for the first time as a concept, or from tutorials covering the first technique the author found, there isn't really an answer to what kind of performance issues each method has.
Tradeoffs and advantages for each I can think of so far:
+1 This was implemented by Google's internal Daydream team in a demo at GDC, so I think it's definitely catching on.
Has there been any more discussion on this? Tunneling is becoming very common in recent VR games that use any non-teleporting locomotion. I'm a fan of the script method that allows adjustable parameters, as it would help developers figure out what parameters work best to combat motion sickness while still retaining immersion in their games.
Edit: And I tried adding the linked VRTunneling shader script, but it only accounts for angular velocity. When I tried making it respond to velocity in any direction (using the rigidbody created by the VRTK BodyPhysics script), the motion-in-place system reports almost no velocity, which I think is because the rigidbody is constantly toggling isKinematic on and off. Any workarounds for this?
Edit2: Never mind. Since it doesn't need the rigidbody's velocity, you can just use `(player.transform.position - lastPosition).magnitude / Time.deltaTime` to get the speed.
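That workaround could look something like this as a small component: derive speed from the transform delta so it works regardless of the rigidbody's isKinematic state. `player` is assumed to be the play-area transform:

```csharp
using UnityEngine;

// Sketch: track player speed from the position delta per frame,
// bypassing the rigidbody entirely.
public class PlayerSpeedTracker : MonoBehaviour
{
    public Transform player;                 // the play-area / camera rig
    public float Speed { get; private set; } // m/s, readable by other scripts

    private Vector3 lastPosition;

    void Start()
    {
        lastPosition = player.position;
    }

    void LateUpdate()
    {
        // Distance moved this frame divided by frame time gives speed.
        Speed = (player.position - lastPosition).magnitude / Time.deltaTime;
        lastPosition = player.position;
    }
}
```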
Duplicate of https://github.com/thestonefox/VRTK/issues/222
So during the upcoming VRDC, Ubi's Olivier Palmieri is going to be talking about how they keep people from ralphing while playing their upcoming game Eagle Flight (which does appear vomit-inducing from gameplay vids) by utilizing 'blinders' when the player is near objects that would whizz by their peripheral vision, such as buildings. Effectively 'tunneling' (I'm assuming), also used in Adrift.
Anyway, any thoughts on implementing something like this? If not like the version in the linked video below, perhaps it could be similar to a geometry-based blink mechanic I've seen (no pun intended), using geometry (an iris?) that converges at a point around the center of the player's vision (with a parameter to set larger or smaller 'blinders'). If that proves a bit cumbersome, perhaps just an alpha texture on a plane in front of the camera?
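For the "alpha texture on a plane" fallback, a rough sketch might be: a quad parented just in front of the camera, faded in and out by the locomotion code. The material is assumed to use a transparent shader with a radial-alpha texture (opaque edges, clear center); all names below are placeholders:

```csharp
using UnityEngine;

// Sketch: a blinder quad pinned in front of the camera whose alpha
// is driven toward a target strength set by locomotion code.
public class BlinderQuad : MonoBehaviour
{
    public Camera headCamera;
    public Material blinderMaterial;  // transparent shader + iris-style texture
    public float fadeSpeed = 4f;      // alpha units per second

    private Renderer quadRenderer;
    private float targetAlpha;

    void Start()
    {
        // Build the quad and pin it just beyond the near clip plane.
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        Destroy(quad.GetComponent<Collider>());
        quad.transform.SetParent(headCamera.transform, false);
        quad.transform.localPosition =
            new Vector3(0f, 0f, headCamera.nearClipPlane + 0.01f);
        quadRenderer = quad.GetComponent<Renderer>();
        quadRenderer.material = blinderMaterial;
    }

    // 0 = fully open, 1 = blinders fully closed in.
    public void SetBlinderStrength(float strength)
    {
        targetAlpha = Mathf.Clamp01(strength);
    }

    void Update()
    {
        Color c = quadRenderer.material.color;
        c.a = Mathf.MoveTowards(c.a, targetAlpha, fadeSpeed * Time.deltaTime);
        quadRenderer.material.color = c;
    }
}
```

This avoids any post-processing dependency, at the cost of needing a suitable iris texture and a shader whose color alpha is actually respected.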
Here's Tunneling: https://www.youtube.com/watch?v=lKnM5gC-XpY
...and a sample blink I randomly found: https://www.youtube.com/watch?v=nDko7_iqXlQ
Thanks!