parasyte closed this issue 8 years ago
Yup, agreed. I think it should be simple enough to put in the migration guide: changing angle/scale/alpha on an ObjectEntity should invoke it on the entity's renderable. Plus, we could also change the constructor to apply it to the renderable, if someone sets those properties in Tiled.
:+1:
Might as well write this here. I was trying to think of how to implement this. Since the SpriteObject does not call the parent renderable's draw, and it does all the rotation stuff, it is able to wrap that around its call to `context.drawImage()`. The AnimationSheet simply invokes the SpriteObject draw and does not implement its own; it just updates the image to be drawn each time.
So if we want the context save, followed by the alpha, rotation & scaling, followed by the context restore, there needs to be an execution step between those:
```js
draw : function (context) {
    context.save();
    context.globalAlpha = 0.5;
    // draw logic to happen here
    context.restore();
}
```
Because of that, the sprite object can't just invoke `this.parent()`. One way might be to do the actual draw image call in another method on the sprite object, and pass that to the parent:
```js
drawImage : function (context, x, y) {
    context.drawImage(this.image, x, y, this.width, this.height);
},

draw : function (context) {
    this.parent(this.drawImage);
}
```
But I'm not sure if that might have other issues or drawbacks.
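One quick way to sanity-check the callback idea outside the engine is with a stub context that just records call order. This is a standalone sketch, not the melonJS code; `parentDraw`/`spriteDraw` are hypothetical stand-ins for the base renderable and sprite draw methods:

```js
// Stub context: records which methods were called, so we can verify the
// save -> draw -> restore ordering without a browser canvas.
var calls = [];
var context = {
    save: function () { calls.push("save"); },
    restore: function () { calls.push("restore"); },
    set globalAlpha(v) { calls.push("alpha=" + v); },
    drawImage: function () { calls.push("drawImage"); }
};

// Hypothetical base "renderable" draw: applies alpha, runs the injected
// callback in the middle, then restores the context state.
function parentDraw(ctx, exec) {
    ctx.save();
    ctx.globalAlpha = 0.5;
    if (typeof exec === "function") {
        exec(ctx);
    }
    ctx.restore();
}

// Hypothetical sprite: its actual image call is injected into the parent.
function spriteDraw(ctx) {
    parentDraw(ctx, function (c) {
        c.drawImage(/* image, x, y, w, h */);
    });
}

spriteDraw(context);
console.log(calls.join(" -> "));
// save -> alpha=0.5 -> drawImage -> restore
```

The point of the mock is only to show that the injected callback runs strictly between the state setup and the restore.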
My concern with it is: how does one subclass Renderable and draw? I imagine we could put the setup phase of the draw in a `setup` method, and the restore in a `teardown`:
```js
setup : function (context) {
    context.save();
    context.globalAlpha = 0.5;
},

teardown : function (context) {
    context.restore();
},

draw : function (context, exec) {
    this.setup(context);
    typeof exec === 'function' && exec(context);
    this.teardown(context);
    this.parent(context);
}
```
Then extend like so:
```js
var MyDrawer = me.Renderable.extend({
    init : function () {
        this.parent(new me.Vector2d(100, 100), 100, 100);
    },

    draw : function (context) {
        this.setup(context);
        context.fillStyle = '#f00';
        context.fillRect(this.pos.x, this.pos.y, this.width, this.height);
        this.teardown(context);
    }
});
```
Though the issue with that is you can't call the Rectangle's draw method.
I like the idea about:

```js
draw : function (context) {
    this.parent(this.drawImage);
}
```

I think it is a nice way to inject more code into the draw routine. From what it seems, it'd also be backwards compatible, in contrast to the `setup` and `teardown` concept?
I was thinking the setup/teardown would be needed if you want to subclass renderable, but maybe not.
```js
drawStuff : function (context) {
    context.fillStyle = '#f00';
    context.fillRect(0, 0, 100, 100);
},

draw : function (context) {
    this.parent(context, this.drawStuff);
}
```
Yeah, probably the only question that remains (which `setup` and `teardown` maybe would have solved) is when to execute the injected function: at the beginning or at the end of the parent `draw` call? Or maybe make it configurable?
The way I see to approach it: after the calls for alpha, scale and rotation, invoke the passed function, then either do a restore() or undo those values. Be sure to invoke `rect#draw` as well. The renderable does not really implement much as far as drawing is concerned right now; the only logic it will have is the logic I want to move into it, which is the alpha/angle/scale values.
just thinking out loud, but another way to tackle this would be through the texture object i was mentioning in another ticket or on the forum.
any object would render to a local texture (canvas), and the base renderable would then render that texture on the final canvas, applying any required transformation.
Hmm, interesting. How would the renderable know about that canvas? Would it be simply passed an id or something?
Would managing many texture canvases be an issue from a dom/memory perspective?
I might be misunderstanding, if so, maybe just dish out some quick pseudo code or somethin :)
i guess that every renderable object would hold a texture object that inheriting objects would render on?
but yes, I guess this would impact performance somehow, although it might help for composed sprites? (i'm thinking object container, or other complex composed objects like Spine)
This is where WebGL would be really useful, haha. With transformation matrices and such. The parent renderable having a public property would make implementation easy though.
Maybe it's a matter of having both ways to approach it, have the programmer decide what is best?
Actually the texture idea is really easy to use; draw already accepts a canvas context. So pass the context of the local texture instead of destination screen.
I think it might be a bit memory intense, and will add some CPU cycles to do the final drawImage to the screen context, too. But it is an interesting idea.
So far the callback idea for draw sounds the most interesting to me. Everything we can do to limit the total number of canvas calls will be useful. :)
Note that more complex transform was the intention behind the following branch : https://github.com/melonjs/melonJS/tree/matrix-based-transform
But it's still in early alpha stage :)
so with this new matrix2d class, we could :
what do you think guys ?
I'm confused, as usual when it comes to matrices.
Is this what you mean with point 2? objectA receives "Hey, do this matrix transformation of [0,10],[3,0]". The container would then check for such requests in the update?
For point 3, do you mean to use the setTransform based on the current matrix for drawing, vs the position translation that we currently do?
to be honest i'm not a guru myself in transformation matrix handling/manipulation, but you can just see it as an array of values specifying the translation, scaling, skewing and rotation information.
so if we add a new `transform` property (containing a specific transformation matrix) to each renderable, we can then apply it in the main loop just before drawing the renderable.
the only drawback however is performance: from initial tests in the corresponding (now removed) branch, it had some impact, but i did not take any shortcut like doing nothing if the matrix was the default identity one (which would then save a systematic `context.save()` + `transform.apply()` + `context.restore()`)
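To make that identity shortcut concrete, here is a minimal standalone sketch (plain JS, not the actual melonJS `Matrix2d` API; names are illustrative) of a mat2d holding the six canvas-style values, with the "do nothing when identity" optimization:

```js
// Minimal mat2d sketch: six values [a, b, c, d, e, f] matching
// context.transform(a, b, c, d, e, f); a point maps as
// x' = a*x + c*y + e, y' = b*x + d*y + f.
function Matrix2d() {
    this.val = [1, 0, 0, 1, 0, 0]; // default identity
}

Matrix2d.prototype.isIdentity = function () {
    var v = this.val;
    return v[0] === 1 && v[1] === 0 && v[2] === 0 &&
           v[3] === 1 && v[4] === 0 && v[5] === 0;
};

Matrix2d.prototype.translate = function (x, y) {
    this.val[4] += this.val[0] * x + this.val[2] * y;
    this.val[5] += this.val[1] * x + this.val[3] * y;
    return this;
};

// The shortcut discussed above: when the matrix is still the default
// identity, skip the save() + transform() + restore() calls entirely.
function drawWithTransform(context, matrix, draw) {
    if (matrix.isIdentity()) {
        draw(context);
        return;
    }
    context.save();
    context.transform.apply(context, matrix.val);
    draw(context);
    context.restore();
}
```

With this, renderables that never set a transform cost zero extra context calls per frame.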
that just makes me realise that we should add `setTransform` and `transform` functions to the `Matrix2d` object! that would allow doing `transform.transform(context)` instead of an ugly:
https://github.com/melonjs/melonJS/blob/1.0.0-dev/src/renderable/container.js#L683
small up on this one, as I would as well like to discuss/finalize the discussion we started on `me.Renderable`: adding `translate`, `rotate` and `scale` functions to `me.Renderable` to update the transform matrix. not sure if this would be faster though, but I think Jason was saying (if I understood correctly) that if we only apply the same transforms each time, in the same order, this could be optimized with the WebGL renderer.
To clarify, what I was saying about WebGL is that all of this matrix math can be put in the shader, as long as we are not exposing the functionality to apply arbitrary transformations. As we have it today, we only expose properties: `pos.x`, `pos.y` for translation, `angle` for rotation, and a `scale` method that simply sets a vector. Because setting these properties does not actually apply a transformation, the math is deferred to the `draw` method, where the order of operations is always static: `translate` is always first, followed by `scale`, and finally `rotate`: https://github.com/melonjs/melonJS/blob/b528d6056f4871d7ecad9e8152d4a88fb5ab762b/src/renderable/sprite.js#L284-L294
Now that it is established that the order of operations does not change, we can move this right into the shader! The methods are already called on the renderer, so the renderer just needs to add these numbers to the current batch and let the GPU handle the math.
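Because the order is always translate, then scale, then rotate, the three values collapse into a single matrix per renderable. Here is a standalone sketch of that fixed-order math (plain JS, illustrative names, not the melonJS API):

```js
// Collapse the fixed-order sequence
//   ctx.translate(tx, ty); ctx.scale(sx, sy); ctx.rotate(angle);
// into one canvas-style matrix [a, b, c, d, e, f].
function composeTSR(tx, ty, sx, sy, angle) {
    var cos = Math.cos(angle), sin = Math.sin(angle);
    return [
         sx * cos, sy * sin,  // a, b
        -sx * sin, sy * cos,  // c, d
         tx, ty               // e, f
    ];
}

// apply the matrix to a point: x' = a*x + c*y + e, y' = b*x + d*y + f
function applyToPoint(m, x, y) {
    return [m[0] * x + m[2] * y + m[4], m[1] * x + m[3] * y + m[5]];
}

// translate(10, 20) then scale(2): the point (1, 1) lands on (12, 22)
console.log(applyToPoint(composeTSR(10, 20, 2, 2, 0), 1, 1));
```

This is exactly the computation a vertex shader could perform from the raw `(x, y, scaleX, scaleY, angle)` values, precisely because the composition order never varies.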
As for why it's useful to have a matrix library in JavaScript? Only for arbitrary transformations. That only means applying transformations in differing orders. You might do this for example to animate a 3D model (or 2D Spine) where the entire model is treated like a tree, and you push/pop the transformation matrix on a stack while drawing each texture of the model.
That is hard to do in a shader because the shader doesn't necessarily know how the model is structured, or how or when to manage a transformation matrix stack. Shaders are usually low-level, and do not contain such knowledge or functionality. (But they can also be application specific for these kinds of things.) The usual shader is just a stream processor; taking whatever information it receives from the CPU, and does a bunch of computations on it.
One way around that limitation is by feeding full transformation matrices to the GPU for each item. It's a bit expensive, because a 2D matrix is composed of six 32-bit floats, while the values alone can simply be sent as five 16-bit ints.
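The size difference above is easy to quantify with typed arrays, since `byteLength` reports exactly what would be uploaded per item:

```js
// Upload cost per renderable: a full 2D affine matrix needs six 32-bit
// floats, while the raw values (x, y, scaleX, scaleY, angle) could be
// packed as five 16-bit ints (assuming a fixed-point encoding for
// angle and scale).
var matrixBytes = new Float32Array(6).byteLength; // 24 bytes
var packedBytes = new Int16Array(5).byteLength;   // 10 bytes
console.log(matrixBytes, packedBytes);
```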
Annnnnnnyway, that's what I can contribute to the conversation on the WebGL side.
considering that we did not really agree here on how to properly do it, I'm moving it to 4.0.0 as well (won't likely happen on 3.0.0)
hi guys,
I was looking back at this one and i really wanted to propose the following to address this ticket :
- `setTransform(scaleX, skewX, skewY, scaleY, posX, posY)`: a convenience function that mimics the standard function
- `transform(scaleX, skewX, skewY, scaleY, posX, posY)`: same as above, but adds to the already defined transformations instead of resetting them
- `scale(x, [y])` (from `me.Renderable`)
- `rotate(angle)`: replaces the `angle` property we had in `me.Renderable`
- `hasTransform()`: returns true if any transform was applied to the object
- `getTransform()`: used by renderers to apply the transform defined by a renderable

All related properties would then be stored in a mat2d object, allowing us to easily pass back the same object to the Canvas Renderer or WebGL Renderer `transform` function.
with that, any class inheriting from me.Renderable just has to focus on drawing whatever it needs, without having to call something else or a parent function. The main loop in the container object would then be in charge of applying any defined transforms before calling the object's draw function, so something like this:
```js
for (var i = this.children.length, obj; i--, (obj = this.children[i]);) {
    var isFloating = obj.floating;
    var hasTransform = obj.hasTransform() || isFloating;
    if ((obj.inViewport || isFloating) && obj.isRenderable) {
        if (hasTransform) {
            // translate to screen coordinates
            renderer.save();
            renderer.resetTransform();
            renderer.transform(obj.getTransform());
        }
        // draw the object
        obj.draw(renderer, rect);
        if (hasTransform) {
            renderer.restore();
        }
        this.drawCount++;
    }
}
```
to add to this, we do not absolutely need to use a mat2d object within me.Renderable, but at least we need the `getTransform()` function to return one, so that we can use the renderer `transform()` function (looks like the most straightforward way to me)
would be nice to also mimic CanvasRenderingContext2D's `currentTransform` for this one:
https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/currentTransform
and I think we had a ticket for it somewhere, but we could also align our matrix API with the SVG one: https://developer.mozilla.org/en-US/docs/Web/API/SVGMatrix
or at least add the `flipX/Y`, `skewX/Y`, etc. functions
got this baby started ! https://github.com/melonjs/melonJS/commit/eb7cf7319423376b37bf9f386781ea68da7b3abc
so far, working as previously; will keep trying to optimize our code this week :)
@parasyte @agmcleod
I managed to greatly simplify the draw function :) https://github.com/melonjs/melonJS/commit/c30e386a387bfd44e329d23f91e93998c3ce4320
it looks pretty cool and simple now ! https://github.com/melonjs/melonJS/blob/ticket-335/src/renderable/sprite.js#L281-L310
and so far, no regression that I could see compared to the previous implementation (including cumulating both a rotation from texture packer and a regular rotation/scaling)
i start to really like this ticket :P
next step I'd like to apply the transformation outside of the draw object, and move it to the main loop (like I was trying to explain it in the above pseudo-code)
so cool, this code :
```js
game.PlayScreen = me.ScreenObject.extend({
    /**
     * action to perform on state change
     */
    onResetEvent : function () {
        // create a new level container
        if (typeof this.levelContainer === "undefined") {
            this.levelContainer = new me.Container();
            this.levelContainer.name = "levelContainer";
        }
        // load our level
        me.levelDirector.loadLevel("map1", { container : this.levelContainer });
        // apply some random transform
        this.levelContainer.transform.translate(this.levelContainer.height / 2, this.levelContainer.width / 2);
        this.levelContainer.transform.rotate(0.1);
        this.levelContainer.transform.scale(1.2);
        this.levelContainer.transform.translate(-this.levelContainer.height / 2, -this.levelContainer.width / 2);
        // add it to the game world
        me.game.world.addChild(this.levelContainer);
        [...]
```
now does that :
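As a sanity check of the translate/rotate/scale/translate-back sequence shown above, here is a standalone plain-JS sketch (not the melonJS `Matrix2d` API; `cx`/`cy` are a hypothetical container center) verifying that the pivot point stays fixed under the composed transform:

```js
// Canvas-style matrices [a, b, c, d, e, f]; mul(m, n) composes them in the
// same order as chaining context calls (n is applied in m's coordinates).
function mul(m, n) {
    return [
        m[0] * n[0] + m[2] * n[1],
        m[1] * n[0] + m[3] * n[1],
        m[0] * n[2] + m[2] * n[3],
        m[1] * n[2] + m[3] * n[3],
        m[0] * n[4] + m[2] * n[5] + m[4],
        m[1] * n[4] + m[3] * n[5] + m[5]
    ];
}
function translate(x, y) { return [1, 0, 0, 1, x, y]; }
function rotate(a) {
    var c = Math.cos(a), s = Math.sin(a);
    return [c, s, -s, c, 0, 0];
}
function scale(s) { return [s, 0, 0, s, 0, 0]; }
function point(m, x, y) {
    return [m[0] * x + m[2] * y + m[4], m[1] * x + m[3] * y + m[5]];
}

// same sequence as in the snippet above: translate to the pivot,
// rotate and scale, then translate back
var cx = 240, cy = 320; // hypothetical container center
var m = translate(cx, cy);
m = mul(m, rotate(0.1));
m = mul(m, scale(1.2));
m = mul(m, translate(-cx, -cy));

// the pivot is a fixed point of the composed transform
var p = point(m, cx, cy);
```

Every other point rotates and scales around that pivot, which is why the level appears rotated around its middle rather than around the origin.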
@parasyte i'm pretty sure it was already discussed somewhere else, but when scaling on the level rendering, the result is way better with webgl:
with the canvas renderer, you can see those tiny lines between each tile, whereas there is nothing with webgl
So nice! ☺️ This is actually one of the primary reasons for loading maps into a container. The other reason is to have multiple maps loaded simultaneously. Good work!
... Now if only the background layers would render appropriately ...
holy moly! i did not even notice it :P
@parasyte and by any chance, do you remember that discussion about scaling and the result difference between canvas and webgl?
I think after that ticket I will also work on the alpha issue (where we also have a difference of behavior between the two renderers), to at least provide an option for the canvas one to act as the webgl one (and maybe make it the default option)
@parasyte applying the same transformation to the created pattern is gonna be very challenging.... https://developer.mozilla.org/en-US/docs/Web/API/CanvasPattern/setTransform
the only way would be to revamp the imageLayer object to render to an offscreen canvas, to which we can then "safely" apply a transformation. Looks a bit ugly to me, and completely defeats the initial purpose of using create/drawPattern. (#797)
@obiot This is the comment/ticket that describes some of the differences between WebGL and Canvas: https://github.com/melonjs/melonJS/issues/783#issuecomment-197662230
Looks like `CanvasPattern.setTransform` is only supported in Firefox. :cry: But at least we can do it on all browsers in WebGL! The pattern object is just a plain ol' texture, and we can transform textures as we please.
i think we already manage the pixel alignment properly when rendering; the issue here appears (only) when the canvas is rotated, which then produces some kind of aliasing on the edge of each tile with the canvas renderer, but not with the webgl one. See the first picture vs the second one.
Yeah, I saw it... WebGLRenderer doesn't exhibit the texture seam behavior because we use the `NEAREST` texture filter. I assume if you enabled antiAlias in WebGL, you would see the seams.
I don't remember seeing the pixel alignment fix, though? Maybe that changed something in Canvas enough to create the seams.
indeed yes, with antiAlias enabled, I can see the same behavior with webGL. But i just spent the last hour trying to round pixels wherever possible with a combination of `x | 0` or `~~(0.5 + x)`, and i could not get it right with the canvas renderer... i'm kind of lost on where it's missing
The typical solution for texture seams with anti-aliasing enabled is making the textures slightly larger (like adding a 1-texel border around the quad) so that there is some slight overlap.
changing the `drawTile` function as follows indeed fixes the issue with the Canvas renderer:
```js
// draw the tile
renderer.drawImage(
    this.image,
    offset.x, offset.y,
    this.tilewidth, this.tileheight,
    dx, dy,
    this.tilewidth + 1, this.tileheight + 1
);
```
but it looks more like a workaround (adding "padding") rather than a proper fix
i could not see any artefacts/side-effects when the tilemap is not rotated, though
You can also use a non-integer size like `+ 0.5`. That should also fix the seams, without introducing extra aliasing.
i tried with `0.5`, there is still a seam :P I guess I will commit that change then; what do you think?
I would say commit it, but only for `CanvasRenderer`.
i will sleep on it first, and do some google search tomorrow before doing anything :)
so I came up with this fix, trying to keep it as "generic" as possible. if you could give it a quick review :)
i still don't like it though, but after 2 days on it, I really have no clue how to make it better... except rendering the layer offscreen and then transforming it (which is not a good idea either)
we really need to push our webgl renderer forward at a point where it can replace the canvas one.
we need you Jason !!!!
I give it :+1: for review. It's not ideal, but definitely good enough when optional.
The WebGL compositor needs a slight redesign to work with mobile UAs. It will be a performance hit on Desktop, sadly. But I guess it's worth it for higher compatibility.
cool, thanks :)
so this branch is almost ready for merging then. API-wise it still (almost) provides backward compatibility, and the one thing left I want to add is an `autoTransform` attribute that will automatically translate the renderable based on its anchorPoint and apply any defined transform. Something like the following in the main container loop:
```js
for (all child in container) {
    [ ... ]
    if (child.autoTransform === true) {
        child.translate(anchorPoint);
        renderer.transform(child.transform);
    }
    child.draw(renderer);
    [ ... ]
}
```
the flag being ON by default in `me.Renderable`, but then turned back off by default (at least in the beginning) in all built-in objects, to avoid breaking further things.
so guys, I'm ready to merge this one. There is still some work to do, as some of the melonJS pre-defined objects still need updates to work with it (so the feature is forcibly disabled for those), but in its current state it does not bring any regression, and it works just fine for new objects created in a game (it works beautifully in my current game).
so, one month later, it's time to merge this baby. I'm doing this today as I'm back home until tomorrow (then I can elope again until next week!)
For discussion: I think the me.Renderable base class should be responsible for all drawing, including scaling, rotation, and opacity/alpha blending.
I'm indifferent about keeping backward compatibility by continuing to support these properties on ObjectEntity.