mattrdowney / planetaria

A Unity framework for Euclidean 2-sphere games (e.g. 2D virtual reality games) [quasi-MIT license]

Planetaria User Interface #123

Open mattrdowney opened 5 years ago

mattrdowney commented 5 years ago

PlanetariaText, using:

- https://stackoverflow.com/questions/40529025/unity-c-sharp-get-text-width-font-character-width
- https://www.gamedev.net/articles/programming/engines-and-middleware/how-to-implement-custom-ui-meshes-in-unity-r5017/
- https://docs.unity3d.com/ScriptReference/UI.Text.html
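A minimal sketch of the text-width measurement the first link describes, using Unity's Font API (the PlanetariaText integration itself is not shown); PlanetariaText would need something like this before it can lay glyphs out along arcs:

```csharp
using UnityEngine;

public static class TextMeasurement
{
    // Returns the width in pixels of `text` rendered with `font` at `fontSize`.
    public static float GetTextWidth(Font font, string text, int fontSize)
    {
        // Make sure the glyphs are baked into the font texture first.
        font.RequestCharactersInTexture(text, fontSize, FontStyle.Normal);

        float width = 0;
        foreach (char character in text)
        {
            if (font.GetCharacterInfo(character, out CharacterInfo info, fontSize, FontStyle.Normal))
            {
                width += info.advance; // horizontal advance of this glyph
            }
        }
        return width;
    }
}
```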

mattrdowney commented 5 years ago

Use Screen Space - Camera with an internal rotator (consider a PlanetariaCanvas) that applies any head rotation to the entire user interface, so it stays responsive and covers the full 360 degrees.
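A minimal sketch of the "internal rotator" (component name hypothetical), assuming the interface container is a child of a canvas that follows the head:

```csharp
using UnityEngine;

// Attached to a container inside a Screen Space - Camera canvas. Since the
// canvas itself follows the head, zeroing the container's world rotation
// counter-applies any head rotation, so content stays put as you turn.
public class UserInterfaceRotator : MonoBehaviour
{
    private void LateUpdate()
    {
        // Setting Transform.rotation sets *world* rotation, so this cancels
        // whatever rotation the head-following parent canvas contributed.
        transform.rotation = Quaternion.identity;
    }
}
```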

mattrdowney commented 5 years ago

PlanetariaText should take the RectTransform and compute an inscribed quadrilateral according to the following rules (steps 4 and 7 reduce to great-circle intersections; see the sketch after this list):

1. Figure out whether abs(xMin) or abs(xMax) is greater. (Same for y.)
2. If abs(yMin) is greater, form an arc from the bottom three points of the rectangle; positions are computed by ScreenPointToWorld. Similarly for abs(yMax) and the top three points. (Note: this means the text is vertically aligned, and not necessarily horizontally aligned.)
3. Compute the negative normal at the corner furthest from the rect center.
4. Calculate the intersection of the great arc (corner, -corner_normal) and the opposite side's three-point arc.
5. Use the calculated intersection to construct an arc for the second edge so far.
6. Create an arc parallel to the first, starting at the intersection and ending on the first point of intersection with the closest of the four rect sides (according to center of mass).
7. Construct the fourth arc by computing the intersection of (last_intersection, last_intersection_normal) and the first arc.
8. Clip the first arc according to the final intersection.
9. Use the constructed rect to make a high-granularity interpolator rail (because I don't have spherical rectangles working).
10. Use the spherical rectangle to convert positions from the rect via interpolation.
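A minimal sketch of the math behind steps 4 and 7, assuming unit-sphere coordinates (helper names hypothetical). A great circle is the sphere's intersection with a plane through the origin, so it is determined by that plane's normal, and two distinct great circles always intersect at an antipodal point pair:

```csharp
using UnityEngine;

public static class SphericalMath
{
    // Returns the two intersection points of the great circles with plane
    // normals `normalA` and `normalB` (assumed non-parallel).
    public static Vector3[] GreatCircleIntersections(Vector3 normalA, Vector3 normalB)
    {
        // The intersection lies on both planes, hence along the cross product.
        Vector3 intersection = Vector3.Cross(normalA, normalB).normalized;
        return new Vector3[] { intersection, -intersection };
    }
}
```

The arc intersection then reduces to picking whichever of the two candidates lies within both arcs' extents.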

mattrdowney commented 5 years ago

It's worth noting I really dislike the current implementation of PlanetariaText on principle (because it is visibly a quad).

All things in Planetaria should be spherically-aligned or decent approximations.

mattrdowney commented 5 years ago

Ideally, the user interface is locked into world space (so head turns affect the display) but resets when the user blinks (a feature that can't exist without eye tracking). I.e. diegetic with reset on blink. (You can implement this with a digital black curtain blink; see the sketch below.)
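A minimal sketch of the "digital black curtain blink" (all names hypothetical): fade to black, re-center the world-locked interface on the current gaze, then fade back in. With real eye tracking, the fade would be replaced by the user's actual blink:

```csharp
using System.Collections;
using UnityEngine;

public class CurtainBlinkReset : MonoBehaviour
{
    public CanvasGroup blackCurtain;    // full-screen black overlay
    public Transform userInterfaceRoot; // the world-locked interface
    public Transform head;              // the main camera's transform
    public float fadeSeconds = 0.1f;

    public IEnumerator BlinkAndRecenter()
    {
        yield return Fade(0.0f, 1.0f); // close the curtain
        // While the screen is black, snap the interface back in front of the user.
        userInterfaceRoot.rotation = Quaternion.LookRotation(head.forward, Vector3.up);
        yield return Fade(1.0f, 0.0f); // open the curtain
    }

    private IEnumerator Fade(float start, float end)
    {
        for (float time = 0.0f; time < fadeSeconds; time += Time.deltaTime)
        {
            blackCurtain.alpha = Mathf.Lerp(start, end, time / fadeSeconds);
            yield return null;
        }
        blackCurtain.alpha = end;
    }
}
```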

Non-diegetic (stationary) user interfaces might work for spherical interfaces, but the rectangular version visibly drifts.

mattrdowney commented 5 years ago

A high quality resource on this subject: https://www.gdcvault.com/play/1023929/Integrating-2D-UI-with-VR

mattrdowney commented 5 years ago

There are plenty of chances to "off-load" the computation to the human brain.

E.g. when I was thinking about the user interface in world space that resets on blinking, I forgot I could make both eyes see different things. (This only works because it is non-diegetic; otherwise I would feel like it goes against 2D virtual reality on principle.) To make the user interface follow the player, you can do something similar to interlacing in the old days: update the world-space interface one eye at a time, so the brain fills in the gaps of how to connect A and B (see the sketch below).

This specifically came up in the context of "hey, wouldn't it suck if I became a virtual reality developer for pay and someone patented all of these neat stupid ideas (which take half a brain to both think up and implement / are completely obvious)". The strategy here might not work perfectly, but there's definitely a way to warp the text to make it intelligible to the brain (even with really fast head rotations relative to framerate).
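A minimal sketch of that interlacing, with an assumed setup (not Planetaria API): two copies of the interface sit on layers culled so each is visible to only one eye's camera (e.g. via Camera.stereoTargetEye plus culling masks), and only one copy is re-anchored to the head each frame. All names are hypothetical:

```csharp
using UnityEngine;

// Each eye's view of the interface updates on alternating frames,
// leaving the brain to bridge the half-framerate gap.
public class InterlacedInterface : MonoBehaviour
{
    public Transform leftEyeInterface;  // visible only to the left-eye camera
    public Transform rightEyeInterface; // visible only to the right-eye camera
    public Transform head;
    public float distance = 2.0f;

    private void LateUpdate()
    {
        Transform target = (Time.frameCount % 2 == 0) ? leftEyeInterface : rightEyeInterface;
        target.position = head.position + head.forward * distance;
        target.rotation = Quaternion.LookRotation(head.forward, Vector3.up);
    }
}
```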

mattrdowney commented 5 years ago

Notably, there's no reason to use screen space if everything is going to be projected onto a plane anyway. Using World Space is certainly the most versatile; you just have to position the screen at the correct coordinates, i.e. along the forward vector (and that's it, at least when there's no zoom).

I do have to verify that world-space canvases can still render over things in the foreground; otherwise I'll need to rethink this. But doing this gets rid of a bunch of issues with imprecise rotations (e.g. before, I was rotating elements around something that wasn't Vector3.zero, because you cannot render the user-interface plane at an infinitesimal pixel).

I'll figure out the interplay between zoom and ideal user-interface coordinates at some point. One problem (you can imagine) is a zoomed-in perspective of a huge world where the user interface would take up less than a degree. If you look behind the player: 1) should the user interface move with you? Probably yes. 2) Should the user interface scale up to more than a degree? Probably yes. (See the sketch below.)
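A minimal sketch of that behavior (names hypothetical), assuming a world-space canvas that is one unit tall at unit scale: the canvas follows the forward vector, and its scale is solved so it always subtends a fixed angular size, no matter how zoomed out the world is:

```csharp
using UnityEngine;

public class WorldSpaceInterfaceAnchor : MonoBehaviour
{
    public Transform head;            // the main camera's transform
    public float distance = 2.0f;     // meters in front of the camera
    public float angularSize = 30.0f; // degrees the canvas should subtend

    private void LateUpdate()
    {
        transform.position = head.position + head.forward * distance;
        transform.rotation = Quaternion.LookRotation(head.forward, Vector3.up);

        // A panel of world height h at distance d subtends 2 * atan(h / (2d)),
        // so solve for h to hold the angular size constant.
        float height = 2.0f * distance * Mathf.Tan(angularSize * 0.5f * Mathf.Deg2Rad);
        transform.localScale = Vector3.one * height;
    }
}
```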

mattrdowney commented 5 years ago

I feel like a useful user-interface feature would be the "walk into a 2D painting" concept (I'm pretty sure the other place I elaborate on this is Mastodon). For a user-interface panel, you see a preview that expands to its natural field of view (as a spherical rectangle) and after that blends from the full snapshot to a 360-degree photo version.

E.g. you can imagine a settings page preview that expands to surround the player. Then you just need an escape/back button that does the same thing in reverse.
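A minimal sketch of the expansion (names hypothetical); the blend from the flat snapshot to the 360-degree version is left out, and the spherical rectangle is approximated by a flat panel at unit distance:

```csharp
using System.Collections;
using UnityEngine;

public class PaintingExpansion : MonoBehaviour
{
    public float previewFieldOfView = 20.0f; // degrees while shown as a preview
    public float naturalFieldOfView = 90.0f; // the panel's natural extent
    public float expandSeconds = 1.0f;

    public IEnumerator Expand()
    {
        for (float time = 0.0f; time < expandSeconds; time += Time.deltaTime)
        {
            SetAngularSize(Mathf.Lerp(previewFieldOfView, naturalFieldOfView,
                                      time / expandSeconds));
            yield return null;
        }
        SetAngularSize(naturalFieldOfView);
        // Next: crossfade from the flat snapshot to the 360-degree photo
        // version (omitted here). Run the loop in reverse for the back button.
    }

    private void SetAngularSize(float degrees)
    {
        // A flat panel at unit distance subtends 2 * atan(scale / 2) degrees.
        transform.localScale = Vector3.one * 2.0f * Mathf.Tan(degrees * 0.5f * Mathf.Deg2Rad);
    }
}
```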

mattrdowney commented 5 years ago

It's worth mentioning that the aforementioned algorithm might be similar to the "Looking Glass Technique" by YouVisit (mentioned here: https://www.meetup.com/NYVR-Virtual-Reality-NYC/events/232648123/ ). It's been more than two years since I've seen it, so it's hard to know.

The technique would be useful for art games (not just menu systems).

mattrdowney commented 5 years ago

Continuing on the idea of "interlacing"/alternating the rendering for the user-interface between the two eyes:

I noticed that Debris Noirs used to cause a lot of motion sickness when the camera zoom was zero, but now that the zoom is -0.99 the motion sickness has basically disappeared. This might hint at how to reduce motion sickness. E.g. you could consider the eyes as two overlapping unit spheres. When you are rendering one eye as the primary eye (remember, you alternate every frame), you render it in its ideal form. When you are rendering the other eye, you take the old positions and rotations of the glyphs to be rendered and project the ideal forms. I am leaving "projection" ambiguous here because I don't know if it would be a 3D shadow, a quasi-2D translation, or something else entirely.

Another concept never (theoretically) renders either eye in its ideal form, but converges toward it based on some interpolation parameter (see the sketch below).
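A minimal sketch of that convergence idea (helper name hypothetical): each frame a glyph moves only a fraction of the way toward its ideal pose, so neither eye ever sees the ideal form exactly:

```csharp
using UnityEngine;

public static class ConvergingPose
{
    // `convergence` in (0, 1]: 1 snaps to the ideal pose; small values lag behind.
    public static void Converge(Transform glyph, Vector3 idealPosition,
                                Quaternion idealRotation, float convergence)
    {
        glyph.position = Vector3.Lerp(glyph.position, idealPosition, convergence);
        glyph.rotation = Quaternion.Slerp(glyph.rotation, idealRotation, convergence);
    }
}
```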

mattrdowney commented 5 years ago

Eureka (sort of).

The ideal form just renders to a normal unit sphere. The derivative information can be reimagined as the projection of the pixels from the sphere onto a plane containing the same information (this doesn't work perfectly, since human vision can exceed 180 degrees, but it's close). Note: this plane must be tangent to the sphere in the direction that particular eye is looking (its forward vector). This should be conceptual, not a real 2D image buffer, considering the plane would be infinite.

You can think of a glyph as a spherical rectangle with a position, rotation, and scale.

(For the following I use Space in the sense of World Space, Local Space, Object Space, Camera Space.)

The ideal eye is rendered normally. The other eye, for each object, 1) projects the object into Plane Space and 2) projects the Plane Space data onto the spherical rectangle's allotted area according to some best-shape-preservation function.

The best-shape-preservation function glosses over some difficult algorithm choices, but I think this is a useful starting point.

There may be something like Sphere1 -> Plane1 -> Plane2 -> Sphere2 projection.
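A minimal sketch of the sphere/plane conversions at the two ends of that chain, using the gnomonic projection (my choice; the comments above leave the projection ambiguous). Unit-sphere points map to the plane tangent at the eye's forward vector, and normalizing maps plane points back; only the forward hemisphere projects, which matches the ">180 degrees of vision" caveat above:

```csharp
using UnityEngine;

public static class TangentPlaneProjection
{
    // Project a unit-sphere point onto the plane tangent at `forward`
    // (valid when the point lies in the hemisphere around `forward`).
    public static Vector3 SphereToPlane(Vector3 spherePoint, Vector3 forward)
    {
        return spherePoint / Vector3.Dot(spherePoint, forward);
    }

    // Map a tangent-plane point back onto the unit sphere.
    public static Vector3 PlaneToSphere(Vector3 planePoint)
    {
        return planePoint.normalized;
    }
}
```

The Plane1 -> Plane2 step (and the best-shape-preservation function) remains the open design choice; this only pins down the conversions at each end.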

mattrdowney commented 5 years ago

It's worth mentioning that this could be a dead end (none of this is guaranteed to work).

I like the idea of asynchronous rendering between the two eyes, but that could lead to a whole host of other problems.

mattrdowney commented 5 years ago

It's also possible planes are not the ideal intermediate conversion format, or that you only need one such format that is focused at an inflection point between the eye position this frame and the eye position last frame.

mattrdowney commented 5 years ago

This actually raises an important question for me: "Would rendering the eyes with two screens at a half-frame discrepancy improve video quality? (Assuming the hardware were convenient.)"

Would that be like having double the frames per second from a practical perspective? Would it remove any underlying issues with perception, or cause new ones? (Assuming a low-persistence display.)

mattrdowney commented 5 years ago

Additional notes: a triangular grid of circle-y pixels is an interesting and potentially doable idea.

I used to call this a hexagonal grid until I realized triangle grids were better, at least from a manufacturing perspective.

The basic idea is that you want to pack red + green + blue pixels on a grid.

This is (essentially) circle packing, and you want to color the circles so that no two adjacent circles share a color. While this can be solved with the 3-Coloring Problem, if I remember correctly the 4-Coloring Problem is more convenient.

This means you basically decompose the grid into multiple grids, with at least one grid of red pixels, at least one grid of blue pixels, and at least one grid of green pixels.

This hardware doesn't need a concept of the red/green/blue existing at the same point; instead grids have offsets and colors.

One advantage to decomposing a grid into multiple grids is the possibility of doing rendering asynchronously. The way games work now, this is a disadvantage, but it is possible that it won't be in the future.

Even if games still have to render a screen all at once, they can get the precise location of each pixel, with the caveat that you would render it as red-green-blue and discard the two unused colors.

Similarly, the pixels can be rounder, avoid the square-grid screen-door effect, and have a slightly better packing density.
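A minimal sketch of the lattice and its coloring (names hypothetical): axial coordinates give a triangular lattice, the half-column offset per row packs the circles hexagonally (the densest circle packing), and (column - row) mod 3 guarantees no two adjacent circles share a color:

```csharp
using UnityEngine;

public struct Subpixel
{
    public Vector2 position;
    public int color; // 0 = red, 1 = green, 2 = blue

    public static Subpixel At(int column, int row, float spacing)
    {
        Subpixel subpixel;
        // Offset each row by half a column so circles pack hexagonally.
        subpixel.position = new Vector2((column + 0.5f * row) * spacing,
                                        row * spacing * Mathf.Sqrt(3.0f) * 0.5f);
        // All six lattice neighbors differ by a nonzero amount mod 3,
        // so this is a valid 3-coloring of the triangular lattice.
        subpixel.color = ((column - row) % 3 + 3) % 3;
        return subpixel;
    }
}
```

Each color value then indexes one of the three sparser per-color grids, which is the decomposition described above.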

mattrdowney commented 5 years ago

A simple algorithm choice would be:

Render both eyes at the same time. Create a spherical user interface but put the camera zoom for the user interface at about -0.5.

This would potentially create the same effect as the new Debris Noirs user interface.

You don't necessarily need a sphere, but it should surround the player. Another possibility is a cube with a -0.5 zoom.

mattrdowney commented 5 years ago

Hmm, taking the hypothetical new virtual reality display hardware a step further:

Splitting pixels into red/green/blue does provide some advantages when taking refraction of light from the lens into account. (Because you can have non-uniform surfaces based on how the light will refract.)

This is all overengineering (especially since lens information is unknown), but maybe a more practical idea will emerge out of it.