mattrdowney opened this issue 5 years ago
With respect to the second paragraph: "idea is that you use a normalized float4 to represent all values in cartesian space, and somehow translate that into a local xyz"
I think this would mostly work. The reason is that a vector on the positive ana/kata axis (the fourth dimension, (0, 0, 0, 1)) has six directions in which it can travel (left, right, up, down, forward, and back gradients). It does not solve rendering, but I think collisions and physics could be relatively straightforward.
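Roughly what I mean, as a sketch (the names are made up, nothing from the actual codebase): at a point p on the unit 3-sphere, the tangent space is every direction orthogonal to p, so at (0, 0, 0, 1) the six travel gradients are exactly ±x, ±y, ±z.

```csharp
using System;
using System.Numerics;

static class TangentSpace
{
    // Project a 4D direction onto the tangent space of the unit 3-sphere at p.
    // At p = (0, 0, 0, 1) this strips the w component, leaving the local
    // x/y/z axes (the six travel gradients: +-x, +-y, +-z).
    public static Vector4 ProjectOntoTangent(Vector4 p, Vector4 direction)
    {
        float radial = Vector4.Dot(direction, p); // component pointing off the sphere
        return direction - radial * p;            // what's left is tangent to the sphere
    }

    static void Main()
    {
        Vector4 p = new Vector4(0, 0, 0, 1);       // the positive ana axis
        Vector4 move = new Vector4(1, 0, 0, 0.5f); // arbitrary 4D input
        Console.WriteLine(ProjectOntoTangent(p, move)); // <1, 0, 0, 0> -- purely local x
    }
}
```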
Based on a comparison of 1D, 2D, and 3D Planetarias, colliders seem far more complicated.
- 1D Planetaria: circle world; any shape is a collection of arcs on circles.
- 2D Planetaria: sphere world; any shape is a collection of arcs on spheres (sketched below).
- 3D Planetaria: hypersphere world; any shape could be a collection of "hyper"arcs on hyperspheres.
That being said, the shapes could also be represented as lengths/areas/volumes (with the volume case being harder than the other two).
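To make the arc idea concrete, here is a sketch of how a 2D Planetaria collider primitive could be stored, assuming great-circle arcs between unit vectors (hypothetical names and representation, not the current implementation):

```csharp
using System;
using System.Numerics;

// Sketch of a 2D Planetaria collider primitive: a great-circle arc on the
// unit sphere between two unit vectors (hypothetical, not the current code).
readonly struct SphericalArc
{
    public readonly Vector3 Start, End;
    public SphericalArc(Vector3 start, Vector3 end) { Start = start; End = end; }

    // Angle subtended by the arc at the sphere's center, in radians.
    public float Length() =>
        MathF.Acos(Math.Clamp(Vector3.Dot(Start, End), -1f, 1f));

    // Point on the arc at parameter t in [0, 1] (spherical linear interpolation).
    public Vector3 Sample(float t)
    {
        float angle = Length();
        if (angle < 1e-6f) return Start;
        float sin = MathF.Sin(angle);
        return (MathF.Sin((1f - t) * angle) / sin) * Start
             + (MathF.Sin(t * angle) / sin) * End;
    }
}
```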
Assuming it is possible, there would certainly be interesting collider level-of-detail functions (and auto-generation based on 3D meshes).
For computing detail, you could always overestimate (circumscribe), always underestimate (inscribe), or minimize error.
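For a circle of radius r, replacing an arc of angle theta with one straight segment has closed-form radial error in both directions, which is what a level-of-detail function would budget against. A sketch:

```csharp
using System;

static class ArcLevelOfDetail
{
    // Radial error when an arc of angle theta (radians) on a circle of
    // radius r is replaced by one straight segment.

    // Inscribed chord: undershoots the arc by the sagitta.
    public static float InscribedError(float r, float theta) =>
        r * (1f - MathF.Cos(theta / 2f));

    // Circumscribed tangent segment: overshoots at its endpoints.
    public static float CircumscribedError(float r, float theta) =>
        r * (1f / MathF.Cos(theta / 2f) - 1f);

    // "Minimize error" would slide the segment radially somewhere between
    // these two extremes so the overshoot and undershoot balance out.
}
```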
It gets interesting when you think of the hypersphere in 4D Euclidean space, because you can only take a cross-section of the world at a time (since the represented data is four-dimensional).
I am guessing there has to be a semi-intuitive way to look around in a 4D world.
(Note, both 3D & 4D Planetarias are more thought experiment than anything else, without better motivation.)
Originally, I wanted to use a different rendering engine for 3D / 4D Planetarias, but I would be interested in rendering from the center of the world as well as a local view of the universe (from the surface of the sphere).
Planetarias can have interesting properties if you use the right graphics system.
Normally, viewing a mesh from the inside has no detail, but you can imagine adding a color function to any arc (one that can be interpolated along the length of the curve as well as into the region beneath it).
The benefit of thinking about it this way is that a collection of objects defines the color at every possible coordinate (e.g. (x, y, z)).
This is possible with .svg files in a 2D Planetaria already, although I don't have gradient colors yet.
It's not readily extensible to N-dimensional Planetarias and their hypersphere world (using 2D .svg files), but it's doable.
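For what it's worth, here is one way the gradient colors could work once they exist (hypothetical, since I don't have them yet): interpolate along the arc parameter, then again toward the interior.

```csharp
using System.Numerics;

static class ArcColor
{
    // Hypothetical color function attached to an arc: linearly interpolate
    // an RGBA gradient along the arc parameter t in [0, 1] (the "length"
    // axis), then fade toward black in the interior (the "below" axis).
    public static Vector4 Sample(Vector4 startColor, Vector4 endColor,
                                 float t, float depth)
    {
        Vector4 alongArc = Vector4.Lerp(startColor, endColor, t);
        Vector4 interior = new Vector4(0f, 0f, 0f, alongArc.W); // keep alpha
        return Vector4.Lerp(alongArc, interior, depth);
    }
}
```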
A vertex in a 2D Planetaria generally has two connected edges/arcs (left and right). Similarly, 3D Planetaria vertices generally have three connected edges/arcs (think of the corner of a cube). 4D Planetaria vertices should have four edges (if I am thinking correctly). Generalizing, for concave/convex hulls, you just need to connect N edges at each vertex for N dimensions (unless I'm making a logical fallacy and learned nothing from Flatland).

The only thing that intrigues me is the circle, which has two connected edges per vertex (I think), which makes me wonder whether I am terrible at visualizing how behaviors extend into higher dimensions (I probably wouldn't know it). I should probably state this in terms of the minimum edges needed to represent something of that dimension, because you can have vertices with extra connections (think of fanning a 100-gon), but a minimal 3D shape representation is something like a tetrahedron or a cube.

The first point doesn't count (it's zero-dimensional): adding a second point gives you a line (1D), adding a third point gives you a plane (2D), adding a fourth point gives you a volume (3D), adding a fifth point gives you a hypervolume (4D), etc.
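If I have the pattern right, this is just the simplex family:

```latex
% A minimal k-dimensional shape (a k-simplex) has k + 1 vertices, and each
% vertex connects to the other k, so k edges meet at every vertex:
%   k = 1: segment (2 vertices), k = 2: triangle (3 vertices),
%   k = 3: tetrahedron (4 vertices), k = 4: 5-cell (5 vertices).
E(k) = \binom{k + 1}{2} = \frac{k(k + 1)}{2} \quad \text{(total edges)}
```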
For the circle case, the minimal connection is a single other point, but you could have more arcs.
The reason this is relevant is that 1) you have more edges and 2) it makes compression algorithms harder (because they have to decide how to reduce error).
I think it's possible these hyperarcs have interesting properties that would allow them to approximate 2+ traditional arcs, but it's hard to know for sure.
Obviously a hyperarc going directly from A to B is not interesting, but you can consider how the concave/convex circle analogy might extend into 4+ dimensions.
One lucky factor that's not immediately apparent:
Games like Miegakure have to focus on 4D geometry and simulated 4D geometry (probably through interpolation).
2D Planetarias (3 dimensions as a normalized vector) use 2D geometry as their rendering primitive.
3D Planetarias (4 dimensions as a normalized vector) use 3D geometry as their rendering primitive.
So, while creating a 4D level editor is non-trivial, you don't need to do anything fancy to create 4D models for inbetweening.
I'm intimidated by 4D level editors (of course), but since you would already be creating a "4D renderer" the code should overlap (I think).
Moreover, this would be a 4D concept where you can yaw/pitch/roll/cross -- i.e. rotate the camera unrestrained by 3D cross-sections, which sounds interesting and mind-melting.
I am now curious how the look controls of a virtual reality head-mounted display would work in an unrestrained 4D game.
It seems like if you locked one axis into up/down to avoid 4D gimbal lock, you could have a single thumbstick axis (e.g. left/right only) that does the mind-melting rotation in 4D (sketched below).
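The reason a single axis might be enough: 4D rotations act in planes rather than around axes, so the extra thumbstick axis would drive a rotation in one plane that includes w (say, the zw-plane). A sketch (my construction, nothing in the repo):

```csharp
using System;
using System.Numerics;

static class Rotation4D
{
    // Rotate a 4D vector by `angle` radians in the zw-plane. 4D rotations
    // act in planes rather than around axes; this is the one extra rotation
    // a single thumbstick axis could drive while yaw/pitch stay 3D as usual.
    public static Vector4 RotateZW(Vector4 v, float angle)
    {
        float cos = MathF.Cos(angle), sin = MathF.Sin(angle);
        return new Vector4(
            v.X,
            v.Y,
            cos * v.Z - sin * v.W,
            sin * v.Z + cos * v.W);
    }
}
```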
Even if it would be difficult to make a curated level in this hypothetical 4D system, I imagine procedural generation wouldn't be too bad for people who enjoy procedural generation (I personally do not).
Arguably, all this extra work would be similar to what Miegakure is doing, but based on my intuition this would be better at avoiding discontinuities.
What I'm mostly curious about is whether there's an intuitive-ish way to turn a 6-degree-of-freedom head tracker into a 4D head tracker (with auxiliary controls that are sort of intuitive).
First-person cameras don't really use camera roll, but that doesn't mean you can just remap roll to the fourth dimension without causing motion sickness. At the same time, the 4D was going to cause motion sickness anyway, and camera roll as a 4D control would create an "Upside Down" (a la Stranger Things) of sorts.
Also, the concept of using 4D coordinate systems to minimize floating-point imprecision is more appealing to me now:
1) You can approximate cartesian space when the angular size is relatively small (I already knew this), which should work approximately for rendering (preprocessing meshes, e.g. skinned meshes, isn't worth it), and collisions might work precisely.
2) With the right math, you should be able to simulate cartesian math (perfectly?) so long as the space is mapped properly (see the sketch below). I think physics would be approximately as good as it would otherwise be.
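By "mapped properly" I mean something like the standard exponential/log maps on the unit 3-sphere (sketched below with my own names): Log takes a unit float4 near the observer into a local xyz, Exp goes back, and for small angles the two agree with flat cartesian space to first order.

```csharp
using System;
using System.Numerics;

static class SphereChart
{
    // Log map at the "north pole" (0, 0, 0, 1): unit float4 -> local xyz.
    // The result points along the surface toward p and its length is the
    // arc distance, so small neighborhoods behave like cartesian space.
    public static Vector3 Log(Vector4 p)
    {
        Vector3 xyz = new Vector3(p.X, p.Y, p.Z);
        float angle = MathF.Acos(Math.Clamp(p.W, -1f, 1f)); // arc distance from the pole
        float length = xyz.Length();
        return length < 1e-6f ? Vector3.Zero : xyz * (angle / length);
    }

    // Exp map: local xyz -> unit float4 (inverse of Log at the same pole).
    public static Vector4 Exp(Vector3 v)
    {
        float angle = v.Length();
        if (angle < 1e-6f) return new Vector4(0, 0, 0, 1);
        Vector3 dir = v / angle;
        float sin = MathF.Sin(angle);
        return new Vector4(sin * dir.X, sin * dir.Y, sin * dir.Z, MathF.Cos(angle));
    }
}
```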
I think this is the first time I considered expanding the technology of Planetaria to a 3D engine (because it does not appear to be a meaningful thought at first glance).
The basic idea is that you use a normalized float4 to represent all values in cartesian space, and somehow translate that into a local xyz format (if that is possible).
It probably has several logical inconsistencies, but if it could be done, you could have collision detection, physics, and rendering code that mostly ignores floating-point imprecision.