In my opinion, it would be better to fix the issue at its heart: the internal representation of a texture should always respect OpenGL's lower-left origin, but when drawing or writing the image to a file (external representation) it should be flipped to the upper-left representation that is expected. This would make handling textures and FBOs more transparent.
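To illustrate, here's a minimal sketch of that flip-on-write step, assuming a tightly packed RGBA8 buffer rather than Cinder's actual Surface/ImageTarget types:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Flip a tightly packed RGBA8 buffer vertically, converting between
// OpenGL's lower-left origin and the upper-left origin that image
// file formats expect. Doing this once at the I/O boundary means the
// internal (lower-left) representation never has to change.
void flipVertical( std::vector<uint8_t> &pixels, int width, int height )
{
    const size_t rowBytes = static_cast<size_t>( width ) * 4;
    for( int y = 0; y < height / 2; ++y ) {
        uint8_t *top    = pixels.data() + y * rowBytes;
        uint8_t *bottom = pixels.data() + ( height - 1 - y ) * rowBytes;
        std::swap_ranges( top, top + rowBytes, bottom );
    }
}
```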
If you want, Andrew, I can attempt to make this fix in the dev branch (v0.8.6).
Just my 2 cents on this. Since we're about to introduce the world of DirectX into the Cinder arena, we should be mindful of the nuances of DirectX and OpenGL with respect to this. As with everything else in Cinder, I feel the spirit of "it should do the right thing" should be maintained. Unfortunately the opposing standards make that difficult.
Paul - I completely agree with what you're saying. I've had to wrestle with this a fair bit in production. My suggestion (Andrew has suggested this recently as well) is to introduce the concept of an origin and make it a queryable attribute from client code. Internal code can look at this and determine what the "right thing" should be, and it all works out. This should reduce the confusion somewhat. New users might still be caught off guard a bit by this, but I think it wouldn't take more than a one-sentence explanation to bring them up to speed.
That's a good idea, Hai. But what would the default origin be for DirectX targets? And for OpenGL? Would it always be "upper-left"? If so, wouldn't that make the origin setting kinda useless? And if the origins would be different depending on the underlying technology, wouldn't that be confusing? Lol, this is a tougher design decision than I thought it would be. Personally, I would prefer if Cinder always uses an "upper-left" origin when accessing textures and render targets through Cinder methods.
This stuff can be a bit of a rabbit hole.
DirectX's default orientation is upper left. Almost everything in DX is 'opposite' to OpenGL. That being said, DirectX is a bit friendlier about interoperating with OpenGL than the other way around.
From what I've discussed with Andrew, OpenGL would be Cinder's leading platform, so the defaults will align to whatever Cinder decides the default should be, and DirectX will follow that. The origin has a few purposes:
1) Users can specify where the (0, 0) point is.
2) Image loaders and writers will have a point of reference to work from and can determine whether they need to flip or not.
3) Users can query at runtime to see if an image needs inversion. There may be a 3rd-party library that creates textures a certain way, or some oddity in the behavior of the hardware that needs to be accounted for.
4) If the user chooses to, they can force the origin to be either upper-left or lower-left. There are cases where having the OpenGL lower-left origin can be useful, such as debugging render targets. Just one less thing to think about.
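To make that concrete, here is a rough sketch of what a queryable origin could look like; the enum and method names are hypothetical, not an existing Cinder API:

```cpp
// Hypothetical API sketch -- the names are illustrative, not Cinder's.
enum class Origin { LowerLeft, UpperLeft };

class TextureBase {
  public:
    // (1) and (4): the origin can be set explicitly, or left at the
    // renderer's default (lower-left for GL, upper-left for DX).
    void   setOrigin( Origin origin ) { mOrigin = origin; }
    // (3): client and internal code can query it at runtime.
    Origin getOrigin() const          { return mOrigin; }

  protected:
    Origin mOrigin = Origin::LowerLeft;
};

// (2): an image loader compares the file's origin (virtually always
// upper-left) against the texture's origin to decide whether a flip
// is needed, instead of the caller guessing.
bool needsFlipOnLoad( const TextureBase &tex )
{
    return tex.getOrigin() == Origin::LowerLeft;
}
```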
One of the existing problems is chasing the flipped origin when you render to an FBO and then use it as a texture for another FBO. For some novice users this is a headache, and it's not always obvious that this is the problem. The ability to lock an origin into place, with the underlying system knowing how to deal with things relative to that origin, would make things a bit easier.
I've also run into problems inside a shader when I need to know the direction of something and just have to guess until I get it right.
Two cents from me:
If you make the origin settable, you are going to end up having to write code everywhere that checks whether a flip is necessary or not, probably right down to the shader level, where all shaders will need to multiply texture coordinates by a texture matrix to allow for an offset and flip. Mesh reading code, texture reading code, shader code... will all be affected. And don't forget tangent spaces for normal maps (which coordinate system am I in again, and what's the transform?). If you choose a straightforward convention that everything must adhere to, it will be far easier to ensure that all code is correct from end to end. Jim Blinn used to say that computer graphics is all about achieving an even number of sign errors, but unfortunately, Y flips make it a lot worse than that, IMO.
In my experience, there's no advantage to bottom left in UI code. When projects adopt that for a UI, code can turn into a nightmare of (height - y) calculations, and trying to figure out which UI elements are contributing to height.
The conventions I prefer:
- Texture coordinates on meshes: bottom-left (0,0). This is the Maya convention (and pretty much everything else's).
- Textures stored upside down, so that a Photoshop file mapped in Maya looks right (matching the BL texture coordinates).
- Screen coordinates: top-left (0,0), to match all the UI toolkits in existence, so that layouts generated in tools can be trivially translated.
- The convenience drawing routine for a 2D mapped rectangle done with an offset/flip texture matrix so it appears upright (sketched below).
- FBO/PBO coordinates match textures, so they can be interchangeable in drawing code.
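A sketch of that offset/flip texture matrix (using glm, names illustrative only), mapping (u, v) to (u, 1 - v) so an upper-left-origin image appears upright on a rectangle with bottom-left texture coordinates:

```cpp
#include <glm/glm.hpp>

// Texture matrix that maps (u, v) -> (u, 1 - v). It is applied only
// inside the 2D convenience drawing routine, so mesh/3D code keeps
// the plain bottom-left convention untouched.
glm::mat3 makeFlipTexMatrix()
{
    glm::mat3 m( 1.0f );   // identity
    m[1][1] = -1.0f;       // negate V (column-major: [col][row])
    m[2][1] =  1.0f;       // then offset V by +1
    return m;
}

// In the vertex shader the lookup would be something like:
//   vec2 uv = ( uTexMatrix * vec3( aTexCoord, 1.0 ) ).xy;
```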
Nick is saying what I meant to say, only better worded :)
By the way, I found an easy workaround for the flipped-texture issue. It was especially confusing when rendering to an FBO and then later drawing the FBO's texture. By setting the FBO's textures to "flipped" at creation, drawing them is no longer confusing. See: https://github.com/paulhoux/Cinder/commit/16fd63bfc9d7d62dbee0c9ee7f7183d13ba80d8d
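If I remember the 0.8.x API right, the user-level version of that workaround looks roughly like this (treat the exact calls as approximate; gl::Texture::setFlipped() is what it relies on):

```cpp
#include "cinder/gl/Fbo.h"

// Roughly what the linked commit amounts to, expressed at user level
// (Cinder 0.8.x-era API, from memory -- treat as approximate).
void setupFbo( ci::gl::Fbo &fbo )
{
    fbo = ci::gl::Fbo( 640, 480 );

    // Mark the FBO's color texture as flipped once, at creation time,
    // so later gl::draw() calls orient it correctly with no per-draw
    // (height - y) juggling.
    fbo.getTexture().setFlipped( true );
}
```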
My solitary cent: it is annoying to draw 2D GL with Y flipped and 3D GL with it regular. I like following the norm of the renderer you are currently using; an abstraction above the renderer level (such as UI code) is where one should consistently use the same coordinate system.
Second cent: Hai's suggested solution could cover the case where one wanted to reuse the same code for, say, both GL and DX; there one could override the default origin and set them to be the same.
The circumstance under which I wouldn't argue about a flip is if the flip is implemented as a per-object texture matrix (whose default is identity). (Not only would I not argue, but I would like it ;) Then there are no heuristics, and there is no need to recurse up a pipeline to discover what needs to happen since the transformations would merely be carried forward via multiplication.
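A minimal sketch of that per-object texture matrix, assuming glm and hypothetical names; two flips compose back to identity, so the "even number of sign errors" falls out of the multiplication:

```cpp
#include <glm/glm.hpp>

// Hypothetical per-object state: defaults to identity, so objects
// that never need a correction carry no special-case code at all.
struct TexturedObject {
    glm::mat3 texMatrix = glm::mat3( 1.0f );
};

// V-flip as a matrix: (u, v) -> (u, 1 - v).
glm::mat3 flipV()
{
    glm::mat3 m( 1.0f );
    m[1][1] = -1.0f;
    m[2][1] =  1.0f;
    return m;
}

// A stage that consumes an upside-down source just multiplies a flip
// onto whatever matrix it was handed; the correction rides forward
// through the pipeline instead of being re-discovered at every step.
// Note that two flips in a row compose back to identity automatically.
void consumeFlippedSource( TexturedObject &obj )
{
    obj.texMatrix = flipV() * obj.texMatrix;
}
```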
I believe this has been addressed with Texture2d::Format::loadTopDown() in glNext branch, so closing this.
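For anyone landing here later, usage looks roughly like this (glNext / 0.9-era API as I understand it; the asset name is a placeholder):

```cpp
#include "cinder/app/App.h"
#include "cinder/gl/Texture.h"
#include "cinder/ImageIo.h"

using namespace ci;

// loadTopDown() tells the texture that the image data is stored
// top-down (upper-left origin), so it gets handled consistently
// when drawn.
void setupTexture( gl::Texture2dRef &tex )
{
    tex = gl::Texture2d::create(
        loadImage( app::loadAsset( "image.png" ) ),   // placeholder asset name
        gl::Texture2d::Format().loadTopDown() );
}
```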
Now, since Cinder's graphics layer is being rewritten for the next generation anyway, why not make FBOs finally render with the right orientation?
With DX support there is no point in being forced into OpenGL's craziness.
Connected issues:
Connected forum thread: https://forum.libcinder.org/topic/pixel-perfect-cinder