MattRix / Futile

A super simple Unity 2D framework
http://struct.ca/futile

Use Futile on a 3D surface #33

Open MattRix opened 11 years ago

MattRix commented 11 years ago

See if Futile can be used in 3D, or rendered onto a 3D surface as a texture... it might require doing raycasts with touches and turning them back into touch events, or something like that.

bizziboi commented 11 years ago

The render queue might mess things up with depth sorting if the scene has transparent objects. I have been running with a z-offset instead of render queues and it works well for me - my scene mixes 2D and 3D (albeit in an orthographic view). Rendering onto a texture of course shouldn't be a problem, provided RenderTexture is available.

MattRix commented 11 years ago

So you've been using Futile with z-offsets for each sprite or something along those lines?

bizziboi commented 11 years ago

Okay, this answer is going to be longer than I initially anticipated. The problem you want to solve here is not trivial. I'll go into a fair amount of detail, simply because I don't know your level of 3D knowledge - in no way do I mean to be condescending. I'm also not great at explaining things.

First, the short answer to your question: no, I don't give each sprite a Z. I work on the same premise as Futile - triangles within a batch are rendered in the order they appear in the batch, so order of addition to the batch determines priority. Of course that only works within a batch. At a batch split, you create a new layer, which by virtue of render queues you force to be rendered on top of the previous one. This works great - no sorting issues whatsoever. The problem arises if you want to mix with existing geometry at different depths in the scene and you want a Futile layer underneath or in between: because of the render queues, Futile will always be drawn on top, no matter what scene depth it is at.

So I gave each stage a Z value. By doing this I can also guarantee draw order: since the stages are in the transparent queue, they will be drawn in far-to-near order. So now I have control over the draw order of stages without any overhead in the mesh generation... all vertices are still at depth 0 within their respective meshes. Of course you still need to sort the layers within a stage. Since depth sorting will sort them by depth, offsetting layer one by a tiny bit will draw it on top of layer 0. So basically when I set the Z on a stage, I actually set the Z on every layer within the stage to stageZ + layerWithinStage * littleBit. The upside is that you can very easily change the sorting order on the fly.
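To make that concrete, here's a minimal sketch of the idea, assuming a stage whose layers are child transforms in draw order (the class and the `stageZ`/`layerSpacing` names are my own illustration, not Futile's actual API):

```csharp
using UnityEngine;

// Hypothetical sketch: position each layer of a stage so Unity's
// far-to-near transparent sorting matches the intended draw order.
public class StageDepthSorter : MonoBehaviour
{
    public float stageZ = 0f;          // depth of the whole stage in the scene
    public float layerSpacing = 0.01f; // the "littleBit" offset between layers

    // Assumes each child of this transform is one layer, in draw order.
    public void ApplyLayerDepths()
    {
        for (int i = 0; i < transform.childCount; i++)
        {
            Transform layer = transform.GetChild(i);
            Vector3 p = layer.localPosition;
            // Later layers get a smaller Z (closer to a default Unity camera
            // looking down +Z), so they are drawn later and end up on top.
            layer.localPosition = new Vector3(p.x, p.y, stageZ - i * layerSpacing);
        }
    }
}
```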

To sum it up: when mixing with existing geometry, using render queues makes things hard because Futile doesn't respect depth ordering. It will always render on top of existing transparent objects, unless those are all also shifted to queues in between Futile's queues.
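For completeness, shifting a scene object into a specific queue is doable from code, though the actual numbers would have to match whatever queues Futile assigns (the 3005 here is purely illustrative; 3000 is Unity's default transparent queue):

```csharp
using UnityEngine;

// Hypothetical workaround: force a transparent scene object to render
// between two Futile layers by picking a render queue between theirs.
public class QueueShifter : MonoBehaviour
{
    public int queueBetweenFutileLayers = 3005; // arbitrary example value

    void Start()
    {
        GetComponent<Renderer>().material.renderQueue = queueBetweenFutileLayers;
    }
}
```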

There are a few caveats here. This works very well with an orthographic camera: because objects at different depths have no different foreshortening, the vertices will line up across the different layers. You also need to set the camera's TransparencySortMode to Orthographic from code (it's not in the inspector), because otherwise Unity sorts by distance from the camera in 3D space instead of along the depth axis - so when an object moves away from the center it can get sorted differently, which can cause an object that is behind another to actually be drawn in front of it. The difference between layers can be small enough not to cause issues between stages, as long as the stages are reasonably offset.
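That sort-mode switch is a one-liner using Unity's `Camera.transparencySortMode` property:

```csharp
using UnityEngine;

// Sort transparent objects by their depth along the view axis rather than
// by straight-line distance to the camera (which mis-sorts off-center objects).
public class OrthoSortSetup : MonoBehaviour
{
    void Awake()
    {
        Camera.main.transparencySortMode = TransparencySortMode.Orthographic;
    }
}
```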

With a non-orthographic view, but looking along the axis, you can still use this. The tiny offset would not work well with pixel-perfect graphics, but with 2.5D you generally give up on that anyway, since you want foreshortening. If the distance between layers that you intend to show at the same depth is small enough, you won't notice it and sorting stays correct - as long as you use the Orthographic sort mode.

Also, you can cheat the system in various ways.

(Which, as a slight aside, brings me to your wonderment about whether RecalculateBounds is needed - in your normal use cases, no. You build the stage and don't grow it to the left or right, so the bounds are valid. However, if you start adding sprites at the edge, the bounds do change. Unity culls an object when the center of its bounds is farther away from the screen edge than its extents indicate, so if the mesh grows wider but you don't recalculate the bounds, and you scroll the mesh away from the screen center, at some point it will be culled even though some vertices should still be visible. If you don't want to incur the cost of RecalculateBounds, you can either keep track of max and min yourself and set the bounds (center = (max+min)/2, extents = (max-min)/2), or set the bounds to very large extents on x and y initially and just live with the fact that the mesh for that layer is never culled - which 9 times out of 10 is what you want anyhow.)
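Setting those bounds by hand might look like this - note that Unity's `Bounds` constructor takes a full size, i.e. twice the extents:

```csharp
using UnityEngine;

// Sketch: track min/max yourself while adding vertices, then set the mesh
// bounds manually instead of paying for mesh.RecalculateBounds().
public static class ManualBounds
{
    public static void Apply(Mesh mesh, Vector3 min, Vector3 max)
    {
        Vector3 center = (max + min) / 2f;
        Vector3 size   = max - min;        // size = 2 * extents
        mesh.bounds = new Bounds(center, size);
    }

    // The "never culled" alternative: huge bounds on x and y.
    public static void ApplyHuge(Mesh mesh)
    {
        mesh.bounds = new Bounds(Vector3.zero, new Vector3(100000f, 100000f, 1f));
    }
}
```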

Now this is where the post gets even longer than I anticipated. Projecting a Futile scene into a true 3D scene is non-trivial, unless you go with render-to-texture. I'll try to explain some of the reasons why.

In order to get proper sorting in a 3D scene, you definitely cannot use the render queues. You simply need to be sorted correctly on depth, so you'd have to use depth values. As I assume you know, the way the renderer works is that it first renders non-transparent objects from front to back with z-test and z-write on (so that pixels that are already obscured don't need to be drawn, and thus shaded, again), and then renders all transparent objects from back to front (so transparency blends correctly with the objects behind it), with z-testing on (so it doesn't overwrite solid objects) and depth writing off (to get the least glitching with transparent objects that cross each other in depth... dunno the proper term).

The z-test makes it harder to have a transparent object sitting directly on top of another plane: since it's at the same depth, you generally use a z-bias, so the GPU interprets the depth a tiny bit differently and will still render the pixel at that depth. So having a (one) transparent layer sitting on top of a non-transparent object is okay. But, for example, having it sit on a transparent object gets trickier. Since the transparent shader doesn't use z-writing but does use z-testing, you can't guarantee the render order; it all depends on the center of the bounds of the transparent object and the center of the bounds of your Futile object. Disabling z-testing won't work, as you'd render on top of solid geometry that is actually farther away from the camera; enabling z-writing on the transparent object won't work either, as it would produce rendering artifacts with other transparent objects.

Okay, say you only want it to work on solid objects.

Back to the multiple layers with real depth (since render queues make interacting with transparent objects in the scene nigh impossible): there is the issue that, because the layers are slightly offset, from an oblique angle you will actually see them separate. Not only that: because sorting is done based on the center of a mesh's bounds, a small distance might need to be bigger depending on the precision of the sorter in the render engine. Of course, as before, you can cheat the system by shifting the center towards the camera. Alternatively, you can displace the vertices away from the camera (each vertex along the ray from the camera through that vertex); after all, an object that is twice as far away but twice as large looks the same on screen. So if you properly displace each layer, the layers render on top of each other without visible separation and without sorting issues, since transparent rendering doesn't write depth - the last layer submitted is the last one rendered. (See the sketch below.)
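A sketch of that displace-along-the-view-ray trick, entirely my own illustration: scaling positions about the camera by a factor greater than 1 leaves a perspective camera's on-screen image unchanged, but moves the layer behind anything scaled by a smaller factor.

```csharp
using UnityEngine;

// Hypothetical sketch: push a layer's vertices away from a perspective
// camera along each vertex's view ray, enlarging it proportionally.
public static class LayerPusher
{
    public static void Push(Mesh mesh, Transform meshTransform, Camera cam, float factor)
    {
        Vector3 camPos = cam.transform.position;
        Vector3[] verts = mesh.vertices;
        for (int i = 0; i < verts.Length; i++)
        {
            Vector3 world = meshTransform.TransformPoint(verts[i]);
            // Move the vertex along the camera->vertex ray; factor > 1
            // pushes it farther away while keeping its screen position.
            world = camPos + (world - camPos) * factor;
            verts[i] = meshTransform.InverseTransformPoint(world);
        }
        mesh.vertices = verts;
    }
}
```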

Of course, there is another problem still. The whole idea only works for scenes that don't extend beyond their 'screen' - no scrolling stages, no objects sliding out of frame, no rotating or zooming beyond its boundaries - because the area being projected onto is definitely not rectangular, and you don't have a soft clipper, so those objects would float outside their projection area. So you'd have to resort to render-to-texture after all.

So can Futile mix with a 3D scene? Yes - when the Futile elements themselves are projected orthographically, no problem. Can Futile be projected onto a 3D surface with Unity Free? Yes - with careful use, knowledge of its limitations in that scenario, and some serious exploration, it can. Can Futile be projected onto a 3D surface with Unity Pro? Yes, of course - render-to-texture and the world is yours.

Hope this all made sense, and feel free to prove me wrong!

bizziboi commented 11 years ago

Argh, I should have said: it will work well as long as the Futile sprites are billboarded facing the camera - you can mix it in well, and it doesn't have to be orthographic.

MattRix commented 11 years ago

Great stuff, very thorough, thank you! This is basically how I assumed most of this stuff worked, but it was really good to have you confirm it. I'm gonna do some tests (hopefully this weekend) to switch the main Futile code to use that z-based approach; I think I can make it work now.

Oh, and yeah, for using Futile in a 3D scene, I think I'll basically recommend that people use RenderTextures to do it, and then I may add some simplified way for people to pass raycasted touch/mouse positions into the Futile touchManager.
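For reference, the raycast part of that could look something like the sketch below. This is only an illustration: the class, the UV-to-pixel mapping, and how you'd hand the result to Futile's touch handling are my own assumptions, and `RaycastHit.textureCoord` only works when the surface has a MeshCollider.

```csharp
using UnityEngine;

// Hypothetical sketch: convert a mouse click on a 3D surface (displaying a
// Futile RenderTexture) into a position in Futile's 2D pixel space.
public class ProjectedTouch : MonoBehaviour
{
    public Camera sceneCamera;  // the 3D camera looking at the surface
    public float futileWidth;   // pixel dimensions of the Futile RenderTexture
    public float futileHeight;

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray ray = sceneCamera.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        // textureCoord is only valid when the collider is a MeshCollider.
        if (Physics.Raycast(ray, out hit))
        {
            Vector2 uv = hit.textureCoord;
            // Map the UV on the surface back to Futile's pixel space.
            Vector2 futilePos = new Vector2(uv.x * futileWidth, uv.y * futileHeight);
            Debug.Log("Touch in Futile space: " + futilePos);
            // From here, futilePos would be fed into Futile's touch manager.
        }
    }
}
```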