mattrdowney opened this issue 5 years ago
Late-night rambling:
Should be easy-ish with smart shader usage.
The hard part is generating a sphere with proper uv coordinates (note: it's actually using a scaled uv circle).
Then you need to offset the uvs (simple) and rotate them ( https://forum.unity.com/threads/rotation-of-texture-uvs-directly-from-a-shader.150482/ ) so that the overlay is centered at the correct texture coordinate with the correct rotation.
This assumes you have a high resolution texture atlas to sample into.
Sphere uv coordinates are pretty straightforward.
The angle from the forward vector gives the radius (normalize the [0, PI] range to [0, 1]). Atan2(y, x) gives the polar angle. Use the angle and radius to find the uv; the uv will ultimately be scaled into the texture atlas.
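A CPU-side sketch of that mapping, just to pin down the math (the function name and the [0, 1] uv convention are my assumptions; the real version would live in the shader):

```csharp
using UnityEngine;

public static class OverlayUv
{
    // Maps a unit direction (in overlay space, +Z = forward) to a uv coordinate
    // for an azimuthal projection centered on the forward vector.
    public static Vector2 DirectionToUv(Vector3 direction)
    {
        direction.Normalize();
        float radius = Mathf.Acos(Mathf.Clamp(direction.z, -1f, 1f)) / Mathf.PI; // angle from forward, [0, PI] -> [0, 1]
        float angle = Mathf.Atan2(direction.y, direction.x);                      // polar angle, (-PI, PI]
        // Polar -> Cartesian, then recenter into [0, 1] uv space; atlas scaling/offset happens after this.
        return new Vector2(0.5f + 0.5f * radius * Mathf.Cos(angle),
                           0.5f + 0.5f * radius * Mathf.Sin(angle));
    }
}
```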
You can also use a RenderTexture of any 2D game (square if you don't want stretching; the corners will be clipped because the projection is a circle).
Really, the RenderTexture version is better because you don't need atlasing and it should be plug-and-play with all existing 2D games.
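A minimal sketch of that plug-and-play wiring (component and field names are made up; resizing and cleanup are omitted):

```csharp
using UnityEngine;

// Renders an ordinary 2D game camera into a square RenderTexture,
// which the overlay sphere's material then samples from.
public class OverlayRenderTextureFeed : MonoBehaviour
{
    public Camera flatGameCamera;    // the existing 2D game's camera
    public Material overlayMaterial; // material on the overlay sphere mesh
    public int resolution = 1024;    // square to avoid stretching; the corners get clipped by the circle

    private RenderTexture renderTexture;

    private void Start()
    {
        renderTexture = new RenderTexture(resolution, resolution, 24);
        flatGameCamera.targetTexture = renderTexture;
        overlayMaterial.mainTexture = renderTexture;
    }
}
```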
The only part of the RenderTexture variant I'd research is whether or not looking around should affect the RenderTexture.
It seems useful but disorienting to have, although it may be livable.
The "RenderTexture variant" would be a standard feature for both types of overlay (although it is an optional camera feature).
The logic is that you have a constant guideline towards the camera focus, while the rest of space morphs radially.
There are issues with wraparound behind the player (the problems become apparent when head_turn_angle + field_of_view/2 = 180 degrees).
Rendering this space to black would be ideal, although I don't know how I would clip those pixels in the shader (it seems like a simple < PI check, but it might not be).
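Written CPU-side, the naive version of that check would look like the sketch below (names and the cutoff parameter are mine; whether this survives the actual shader projection math is the open question above):

```csharp
using UnityEngine;

public static class OverlayClip
{
    // direction: unit vector in overlay space (+Z = projection center).
    // Anything at or past the cutoff angle is treated as behind the player
    // and rendered black instead of sampling the overlay texture.
    public static Color ClipBehindPlayer(Vector3 direction, Color sampled, float cutoff = Mathf.PI)
    {
        float angleFromCenter = Mathf.Acos(Mathf.Clamp(direction.normalized.z, -1f, 1f)); // [0, PI]
        return angleFromCenter < cutoff ? sampled : Color.black;
    }
}
```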
At this point, I'm going to presume an azimuthal projection is the best. The guideline theory above would look something like https://37tx5035jacw32yb7m4b6qev-wpengine.netdna-ssl.com/wp-content/uploads/2015/01/Lambert_azimuthal_equal-area_projection_SW.png
The worst scenario is the player wants to view the South Pole but their character is at the North Pole. In this case, you focus on the point halfway between the two, 90 degrees from each (note: you still have a single discontinuity at the character's antipodal point). After you render, you must rotate the globe 90 degrees towards the character (or the other half of the angle) so that the viewer's gaze direction is honored.
The best case is when the viewer is gazing at the character. The halfway point is then trivially the character position / the view direction (they coincide); you render the world and rotate it 0 degrees (technically the full angle divided by two).
For the worst case scenario, you probably want to find a shader trick that can clip half of the pixels to make the discontinuity less jarring, but the algorithm seems like it would work.
This is remarkably similar to the accidental crosshair mechanic in early versions of Debris Noirs.
Any interpolation factor in [0, 1] can be used, not just 50%.
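A sketch of those focus/rotate steps under my reading (Slerp for the interpolation, then a correction by the remaining fraction of the angle; all names here are assumptions, and the direction of the correction is the part I'd verify first):

```csharp
using UnityEngine;

public static class OverlayFocus
{
    // characterDirection / gazeDirection: unit vectors from the sphere's center.
    // t = 0.5 is the "halfway" case; any factor in [0, 1] works.
    public static Vector3 FocusDirection(Vector3 characterDirection, Vector3 gazeDirection, float t = 0.5f)
    {
        return Vector3.Slerp(characterDirection, gazeDirection, t).normalized;
    }

    // Correction by the remaining part of the angle after rendering around the focus.
    // Written here as focus -> gaze; flip the arguments if it's the globe (not the view) being rotated.
    public static Quaternion CorrectionRotation(Vector3 focusDirection, Vector3 gazeDirection)
    {
        return Quaternion.FromToRotation(focusDirection, gazeDirection);
    }
}
```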
One issue with all but one variant: audio mismatching with expectation. E.g. if you hear a loud noise in your right ear and turn to face it, you expect it to be there.
I suppose audio would probably be relative to the viewer's gaze at all times, but you still want visuals to be exactly synchronized with audio expectations (e.g. this is relevant for objects within the 90-degree field of view).
I forgot the easiest way to deal with discontinuities (and help overall): blend to black gradually.
I don't know exactly what the metric is for "probability a pixel is there" (with consideration for shape stretching), but...
The antipodal point should be transparent or 100% black; the podal point should be unchanged. Intermediate points are blended (with the first 180 degrees being almost fully opaque).
The theory behind the blending might be some sort of spatial derivative (I think it's the circumference of the flattened circle divided by the circumference of the corresponding circle on the sphere).
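One possible reading of that ratio, as a sketch: a small circle at angular radius theta from the projection center has circumference 2·PI·sin(theta) on the sphere but 2·PI·theta once flattened, so the stretch factor is theta/sin(theta) and its reciprocal is a candidate opacity (1 at the center, 0 at the antipodal point). The exact curve would still need tuning to match "almost fully opaque over the first 180 degrees":

```csharp
using UnityEngine;

public static class OverlayFade
{
    // angularRadius: angle in radians from the projection center, in [0, PI].
    // sin(theta)/theta is the reciprocal of the circumference stretch factor:
    // 1 at the center, ~0.64 at 90 degrees, 0 at the antipodal point.
    public static float Opacity(float angularRadius)
    {
        if (angularRadius < 1e-5f)
        {
            return 1f; // limit of sin(x)/x as x -> 0
        }
        return Mathf.Clamp01(Mathf.Sin(angularRadius) / angularRadius);
    }
}
```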
While the transparency idea might work well with static cameras, it could become very complicated very fast, even with the "constant guideline towards the camera focus" theory.
Terminology: PlanetariaOverlay is a derivative 2D virtual reality, whereas the games I have developed so far are integral 2D virtual reality.
So I'll be starting this feature.
It is probably the most useful feature in the Planetaria engine (since it has uses outside of 2D virtual reality, including converting 2D videos to ~180 degree virtual reality videos; note: you would need to know, guess, or detect the camera's field of view to make this work well).
It should also be the solution to #18, #95, and #96, which makes sense because one is for a small-scale case and another is for a large-scale case (or local and global, if you prefer).
This function can also be seen as the inverse of flattening a sphere.
Another good point to make: this function can use position/rotation/scale.
Rotation matrix (for turning the image), uv position transformation (for indexing into a moving screen on a large texture atlas), and uv scale (for resizing the image or flipping it along an axis).
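A sketch of composing those three (the application order and the uv = 0.5 pivot are assumptions; in a real material this would likely be shader uniforms, or SetTextureOffset/SetTextureScale plus a rotation uniform):

```csharp
using UnityEngine;

public static class OverlayUvTransform
{
    // Applies scale (negative components flip an axis), then rotation about the
    // overlay center, then an offset that indexes into a cell of a texture atlas.
    public static Vector2 Apply(Vector2 uv, Vector2 scale, float rotationRadians, Vector2 atlasOffset)
    {
        Vector2 centered = Vector2.Scale(uv - new Vector2(0.5f, 0.5f), scale);
        float cos = Mathf.Cos(rotationRadians);
        float sin = Mathf.Sin(rotationRadians);
        Vector2 rotated = new Vector2(cos * centered.x - sin * centered.y,
                                      sin * centered.x + cos * centered.y);
        return rotated + new Vector2(0.5f, 0.5f) + atlasOffset;
    }
}
```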
The underlying sphere mesh object doesn't really need to be regenerated ever (luckily) in the context of Planetaria games (unless you want to go full hipster and make games with perspective rendering into Planetaria games).
For real-world applications, I might want to program a function that takes a camera's horizontal and vertical field of view and constructs a perspective Planetaria sphere mesh that renders that camera's information as if it were 3D.
This is what I used to call 180 degree virtual reality, but it can work for 360 degrees. The idea is taking an image and draping it over the planet like a cloth.
(Little did I know when I first imagined 180 degree virtual reality that the top hemisphere is not a perfectly conformal mapping, and that the bottom hemisphere can be approximated like the top one (with the bottom point being undefined).)
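For the field-of-view function above, the per-direction mapping I have in mind is roughly the sketch below (my assumption of a gnomonic/pinhole model; the mesh-construction step would run this over the sphere's vertex directions):

```csharp
using UnityEngine;

public static class PerspectiveDrape
{
    // Maps a unit direction (+Z = camera forward) to the uv it would have in a
    // perspective image with the given horizontal/vertical field of view.
    // Returns false for directions outside the camera's frustum.
    public static bool TryDirectionToUv(Vector3 direction, float horizontalFovDegrees, float verticalFovDegrees, out Vector2 uv)
    {
        uv = Vector2.zero;
        if (direction.z <= 0f)
        {
            return false; // behind the image plane
        }
        // Project onto the z = 1 image plane (gnomonic projection), then normalize by the half extents.
        float halfWidth = Mathf.Tan(0.5f * horizontalFovDegrees * Mathf.Deg2Rad);
        float halfHeight = Mathf.Tan(0.5f * verticalFovDegrees * Mathf.Deg2Rad);
        uv = new Vector2(0.5f + 0.5f * (direction.x / direction.z) / halfWidth,
                         0.5f + 0.5f * (direction.y / direction.z) / halfHeight);
        return uv.x >= 0f && uv.x <= 1f && uv.y >= 0f && uv.y <= 1f;
    }
}
```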
The algorithm has similarities to The World Is Flat (indie game).