andre-caldas opened 3 years ago
Nice writeup, most of it is good information and discussion.
I would just suggest Node2D to be a (protected) subclass of Transform2D.
No, this won't happen. Transform2D instances are and will always be members of Node2D.
But they are called elements[0], elements[1] and elements[2]

Basis stores them transposed internally. So for `basis.x` and `basis[0]`, x is the x value of `elements[0]`, y is the x value of `elements[1]`, and z is the x value of `elements[2]`. Note that Transform2D does not do this: it stores columns. Also, Transform (3D) stores a Basis and an origin column, so internally speaking it stores 3 rows (from the Basis) and 1 column.

Also note that transforms and Basis are currently displayed transposed, which can be confusing. Fixed in 4.0.
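In other words, with this layout the exposed axis vectors have to be gathered one component per stored row. A tiny Python sketch of the idea (illustrative only, not engine code):

```python
# Illustrative sketch of Godot's transposed Basis storage (not engine code).
# `elements` holds the three ROWS of the matrix; the basis axes are COLUMNS.

elements = [
    [1.0, 4.0, 7.0],  # row 0
    [2.0, 5.0, 8.0],  # row 1
    [3.0, 6.0, 9.0],  # row 2
]

def get_axis(p_axis):
    # Reassemble a basis (column) vector from one component of each row,
    # mirroring what Basis::get_axis() has to do.
    return (elements[0][p_axis], elements[1][p_axis], elements[2][p_axis])

print(get_axis(0))  # the "x" axis: (1.0, 2.0, 3.0)
```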
That is, vectors are columns???
Yes, this is the standard, it's correct, and won't be changed. The vectors of a basis are columns in a matrix (mathematically speaking, not in terms of how Godot stores them in Basis).
Give the user direct access to `Transform3D::basis`

You can already do this: `transform.basis` is the `Basis`.
Dear @aaronfranke,
Maybe I should have written lots of "smaller sub-proposals". I do think that this proposal is big and vague... I hope it sheds some light on the "big picture". Discussing everything at once is not practical. So, I will make smaller proposals and link to this one here. IMO, here is a good "roadmap", though.
But since I need to make some comments, here comes another long post. Sorry! :-(
Please, read at least up to the bug I point out. :-)
No, this won't happen. Transform2D instances are and will always be members of Node2D.
Yes. I think it is a very good decision. :-)
I did suggest sub-classing because `Node2D` ends up re-implementing or "tunneling" methods to manipulate `Transform2D _mat`. `Node2D` has `rotate()`, `move_x()`, `move_y()`, `translate()`, `get_rotation()`, etc., and those are methods that essentially call the corresponding `Transform2D` method. It seems that we want to treat `Node2D` as a `Transform2D` (and by this, I mean a local world where you can draw using local coordinates that get transformed to the outer world coordinate system)... but we don't want it to be a sub-class.
I think either decision is good. My main concern about `Node2D` is that its integration with `Transform2D` is not good. Instead of exposing the `Transform2D` interface, it keeps track of `angle`, `pos`, `skew` and `_scale`, and it makes a huge effort to keep this redundant information in sync with `_mat`.
For example, translating a `Transform2D` is very easy: you just have to add two vectors. That is, two additions. But if you call `Node2D::translate()`, not only do you have to check that the redundancy is "clean" (`_xform_dirty`), but also the following computations are done:

- `pos` is updated.
- `set_rotation_scale_and_skew` is called, which recomputes sines and cosines and rebuilds the whole 2x2 part of `_mat`.
- `pos` is copied to `_mat.elements[2]`.

Rotation is, in principle, also very easy to compute. All you need, computationally, is to calculate one sine and one cosine, and multiply two 2x2 "matrices". In `Node2D`, however, when you call `Node2D::rotate()`, the following computations are done:

- `angle` is updated.
- `set_rotation_scale_and_skew` is called, which again recomputes sines and cosines and rebuilds the whole 2x2 part of `_mat`.
- `pos` is copied to `_mat.elements[2]`.

And if you make a translation followed by a rotation, you get it all done once for the translation and once for the rotation! :-( If you are just doing thousands of translations of dozens of `Node2D`s... wow!
This is what I mean when I say "integrate `Node2D` and `Transform2D`".
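The cost gap is easy to sketch. Below, a toy transform translated directly is one vector addition, while a wrapper-style path (hypothetical names mirroring the `pos`/`angle`/`_scale`/`skew` fields discussed above, not Godot's actual code) rebuilds the whole matrix:

```python
import math

# Toy 2D transform: columns x, y and origin (illustrative, not Godot code).
def translate_direct(t, offset):
    # Two float additions: just move the origin.
    (xa, ya, (ox, oy)) = t
    return (xa, ya, (ox + offset[0], oy + offset[1]))

def translate_via_resync(pos, angle, scale, skew, offset):
    # Wrapper-style path: update `pos`, then rebuild the whole matrix
    # from the redundant angle/scale/skew state (sines and cosines).
    pos = (pos[0] + offset[0], pos[1] + offset[1])
    x_axis = (math.cos(angle) * scale[0], math.sin(angle) * scale[0])
    y_axis = (-math.sin(angle + skew) * scale[1],
              math.cos(angle + skew) * scale[1])
    return (x_axis, y_axis, pos)

t = ((1.0, 0.0), (0.0, 1.0), (5.0, 5.0))
print(translate_direct(t, (1.0, 2.0)))  # origin becomes (6.0, 7.0)
print(translate_via_resync((5.0, 5.0), 0.0, (1.0, 1.0), 0.0, (1.0, 2.0)))
```

Both paths land on the same origin, but only one of them had to touch the trigonometric functions.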
But do you think the final result is correct? Then, do this:
var node = Node2D.new()
node.transform.x = Vector2(1,0)
node.transform.y = Vector2(1,1)
print("Node.transform.x: ", node.transform.x)
print("Node.transform.y: ", node.transform.y)
node.rotate(0)
print("Node.transform.x: ", node.transform.x)
print("Node.transform.y: ", node.transform.y)
And you will get...
Node.transform.x: (1, 0)
Node.transform.y: (1, 1)
Node.transform.x: (1, 0)
Node.transform.y: (-0, 1.414214)
You rotate those axes by zero, and look at what happens to `node.transform.y`!!! This bug is the result of not using `Transform2D` properly. This is what this is all about.
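A plausible sketch of the failure mode (not the exact engine code): the wrapper decomposes `_mat` into angle and scale, assuming an orthogonal, skew-free basis, and then rebuilds it. Since `y = (1, 1)` has length √2, the rebuilt `y` comes out as `(0, 1.414214)`:

```python
import math

# Sketch of the failure mode (not actual Godot code): decompose an
# arbitrary 2x2 basis into (angle, scale) assuming no skew, then rebuild.
x = (1.0, 0.0)
y = (1.0, 1.0)  # sheared basis: NOT representable as rotation + scale alone

angle = math.atan2(x[1], x[0])              # 0.0
scale = (math.hypot(*x), math.hypot(*y))    # (1.0, sqrt(2))

# rotate(0) triggers a rebuild from the redundant (angle, scale) state:
new_x = (math.cos(angle) * scale[0], math.sin(angle) * scale[0])
new_y = (-math.sin(angle) * scale[1], math.cos(angle) * scale[1])

print(new_x)  # (1.0, 0.0)
print(new_y)  # (-0.0, 1.4142...): the shear is silently destroyed
```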
That is, vectors are columns???
Yes, this is the standard, it's correct, and won't be changed. The vectors of a basis are columns in a matrix (mathematically speaking, not in terms of how Godot stores them in Basis).
I am not complaining about vectors being columns. I do think that when you use matrices you should, in general, think of vectors as columns. I only know of probabilists who like to use rows, in what they call a stochastic matrix. But they probably do so because probability vectors are in fact linear functionals that compute the weighted average of a given vector. This is called "duality". :-)

By the way, I didn't want to mention it... I do not want to make arguments from authority... but as a matter of fact, I am a professor of mathematics at a university.
What I am complaining about here is that if the vectors are columns, then `Basis`, as its name suggests, should not be concerned with the rows. What makes the `Basis` are the columns of the matrix. Nonetheless, `elements[n]` are what you all want to call rows!!!
Basis displays them transposed internally. So for basis.x and basis[0], x is the x value of elements[0], y is the x value of elements[1] and z is the x value of elements[2].
I am not sure what you mean by "displays". But as far as I understand, if `x = elements[0]`, `y = elements[1]`, etc., then `x`, `y` and `z` are NOT the basis!!! They are gradients... covectors... dual vectors!!! They are something whose meaning is much harder to grasp. I really doubt 1% of the users are interested in the rows. Someone dealing geometrically with Godot is probably interested in the columns.
Notice that `Basis(a, b, c)`, where `a`, `b` and `c` are vectors, does NOT construct a `Basis` where the vectors `a`, `b` and `c` are the basis vectors!!! Look at how `Transform` constructs `Basis` instances:
Transform::Transform(const Vector3 &p_x, const Vector3 &p_y, const Vector3 &p_z, const Vector3 &p_origin) :
		origin(p_origin) {
	basis.set_axis(0, p_x);
	basis.set_axis(1, p_y);
	basis.set_axis(2, p_z);
}
It would make more sense if it were something like this (I have not coded C++ in decades, sorry for any mistakes):

Transform::Transform(const Vector3 &p_x, const Vector3 &p_y, const Vector3 &p_z, const Vector3 &p_origin) :
		origin(p_origin), basis(p_x, p_y, p_z) {}
You don't need `Basis` to be a matrix!!! It just makes things lose their meaning.
In `Basis`, no one (yes, I know some people are) is interested in rows... look at the definition of `get_axis()`:
_FORCE_INLINE_ Vector3 get_axis(int p_axis) const {
	// get actual basis axis (elements is transposed for performance)
	return Vector3(elements[0][p_axis], elements[1][p_axis], elements[2][p_axis]);
}
I wonder where the "performance gain" that the comment mentions is. It would make more sense to do:

_FORCE_INLINE_ const Vector3 &get_axis(int p_axis) const {
	return elements[p_axis];
}
But for some reason, people insist that for `elements[n][m]`, `n` is the row and `m` is the column. As if it were written in stone. Then, you are stuck with row vectors!!! :-(
Note that Transform2D does not do this, it stores columns. Also, Transform (3D) stores a Basis and an origin column, so it stores 3 rows (from Basis) and 1 column internally speaking.
Yes. `Transform2D` does not do this. It seems there was a design decision that chose vectors, and not "dual vectors" (or covectors), as the main variables.
But if you call Node2D::translate(), not only you have to check the redundancy is "clean" (_xform_dirty), but also the following computations are done:
Yes, this is a very good example and a very strong argument in favor of using Transform2D directly instead of all these wrappers.
I am not sure what you mean by "displays".
I swear I wrote "stores", but apparently I accidentally wrote "displays" instead. I meant "stores". Post edited.
I really doubt 1% of the users are interested in the rows.
Yes, indeed people mainly use columns. This is why columns are the exposed behavior instead of rows. I think the reason rows are stored internally is for computational performance optimization reasons, not for math reasons.
Very good post. Except for the Node2D to be a subclass of Transform2D, I agree with most of what has been said.
As someone who recently had to rewrite the core types from C++ in Kotlin for our Godot-Kotlin module, I was also very confused by the way vectors are stored in Basis/Transform, as sometimes they are rows and sometimes columns.
Most users won't ever need to go check that part of the code so if doing this way is more efficient, I think it takes priority.
But it becomes an issue when exposed constructors like `Basis(a, b, c)` exist, because it is really confusing: you would expect them to be the 3 basis vectors. I made that mistake several times before understanding that it was transposed internally.
I have made a specific proposal for `Node2D`: #2742.
A few years back, I tried to improve the basis/quat, but there are still inconsistencies and under-documented/undocumented gotchas.
In Godot, vectors are column, I see nothing problematic about it. One can work in the dual space of row vectors, but one ask the same questions there as well. In any case, in all physics literature (I'm a physicist), vectors are column by "default" (and states are kets). Ancient OpenGL had row vectors which was extremely weird to me.
The way data is stored internally (row vs column major) is messy. I tried changing it, but didn't happen.
The naming of Basis is slightly deceiving. It's an RS matrix (scale, then rotate), which is of course different from an SR matrix (rotate, then scale), since these don't commute in general. The name Basis would work fine for both, of course, since we can interpret both `(RS)*e_i` and `(SR)*e_i` (which give the columns; the `e_i` are unit vectors along the ith direction) as the basis vectors of the frame associated with that transform. But Basis is strictly an RS matrix as in Blender etc., because when you rotate a "tall" model 90 degrees, you're not supposed to get a "fat" model. You can of course construct an SR matrix by hand, but things will break because code internally assumes it is RS (and in some instances just R) for speed.
Similarly, a Transform is strictly TRS (and not a general affine transformation). For this reason, basis and transforms don't compose the way you wrote, since the product (R2.S2)(R1.S1) cannot be decomposed as R.S in general (or if you like, Basis doesn't form a group). The way transforms are composed in Godot is through parenting.
As for the representation of these operations: at some point, I suggested internally representing a Basis as a quaternion plus 3 numbers for the scale part (which seems to be the case in Unity), in order to preserve orthogonality in the presence of numerical errors, which becomes relevant as many rotations are applied sequentially. But this was also shot down, because then obtaining the transformed basis vectors B*e_i becomes an expensive operation.
Similarly, a Transform is strictly TRS (and not a general affine transformation).
Isn't it, though? Shearing is supported if one edits the matrix directly, and works fine (even though it looks weird):
at some point, I suggested to internally represent a Basis as a quaternion and store 3 numbers for the scale part

I don't like this idea. If we want to ensure orthogonality, we can always call `orthonormalized()` (maybe there should be a non-normalized version?). But I don't see why we need to; it works great already as-is, just allowing any arbitrary Basis.
@tagcup2: [...] in order to preserve orthogonality in the presence of numerical errors, which becomes relevant as many rotations are applied sequentially.
This is a very important piece of information!!!
@aaronfranke: [...] it works great already as-is with just allowing any arbitrary Basis.
I totally agree with that. Not everyone wants to keep things "right"! :-)
Sorry for the joke! I think you should be allowed to pick your own basis! Just like I wrote in the beginning of this issue: pick a point and a pair of vectors (I am still in 2D).

Forbidding non-orthogonal bases is just like removing the shear tool from Gimp! :-(
@tagcup2: In Godot, vectors are column, I see nothing problematic about it.
When I started this conversation on "rows" and "columns", I was not advocating for rows. I just think it is kind of pointless to discuss how it looks when you write the table on a piece of paper.
Inside the computer, there are no rows and columns!
What you have is a list of three vectors. Not rows, not columns: vectors! So, you have those three vectors... and you want to store a basis. That is, you want to store three vectors `(a,b,c)`, `(x,y,z)`, `(p,q,r)`. Then, you opt to store them like this: `(a,x,p)`, `(b,y,q)` and `(c,z,r)`!!! Unless you have a very good reason to do so... it is very weird!

Inside the computer, there are no rows or columns; they are just in our heads.
If you check `basis.cpp`, you can see that the original basis vectors have to be reconstructed all the time. All because of this weird choice of storing the bras instead of storing the kets!!! In a computer, whether you store bras or kets is irrelevant: one 3D bra is a sequence of three real numbers, and one 3D ket is a sequence of three real numbers. There are no rows or columns.
Look at this code:
void Basis::orthonormalize() {
	// Gram-Schmidt Process
	Vector3 x = get_axis(0);
	Vector3 y = get_axis(1);
	Vector3 z = get_axis(2);
	x.normalize();
	y = (y - x * (x.dot(y)));
	y.normalize();
	z = (z - x * (x.dot(z)) - y * (y.dot(z)));
	z.normalize();
	set_axis(0, x);
	set_axis(1, y);
	set_axis(2, z);
}
If you store the basis vectors `(a,b,c)`, `(x,y,z)`, `(p,q,r)` as themselves, instead of storing `(a,x,p)`, `(b,y,q)`, `(c,z,r)`, the code would be:
void Basis::orthonormalize() {
	// Gram-Schmidt Process
	x.normalize();
	y = (y - x * (x.dot(y)));
	y.normalize();
	z = (z - x * (x.dot(z)) - y * (y.dot(z)));
	z.normalize();
}
So, you are right... vectors being columns is not bad. It is in fact a very good thing! But it is all in your head. All you have to do is think of a column whenever you see a `Vector3`.

But splitting the vectors into pieces and having to reassemble them every time is a problem, in my opinion.
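The same point as a runnable sketch: with the basis vectors stored directly, Gram-Schmidt operates on them with no gather/scatter step (plain Python mirroring the simplified `orthonormalize()` above, not engine code):

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sub(a, b):
    return [ai - bi for ai, bi in zip(a, b)]

def scaled(a, s):
    return [ai * s for ai in a]

def normalized(a):
    n = math.sqrt(dot(a, a))
    return [ai / n for ai in a]

def orthonormalize(basis):
    # Gram-Schmidt directly on the stored basis vectors: no
    # get_axis()/set_axis() reassembly needed.
    x, y, z = basis
    x = normalized(x)
    y = normalized(sub(y, scaled(x, dot(x, y))))
    z = normalized(sub(sub(z, scaled(x, dot(x, z))), scaled(y, dot(y, z))))
    return [x, y, z]

b = orthonormalize([[2.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.5, 0.25, 3.0]])
print(b)  # pairwise dot products ~0, lengths ~1
```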
But anyway... having read what you said about "keeping orthogonal", I think you are very right! (pun intended!)

So... if you do a million rotations, you want the original angle between the two vectors to be preserved! And more than that... if you do a (2 PI / 1,000,000) rotation a million times, you want to end up pointing in the same direction you were pointing when you started!!!
I will talk 2D, because it is much easier. Let's imagine a `Basis2D` object. So, if we represent a `Basis2D` with two angles and two sizes...

- `x_length` and `y_length`: sizes of `x` and `y`.
- `x_to_e1`: angle from `(1,0)` to `x` (counterclockwise).
- `y_to_orthogonal_to_x`: angle from `x` to `y`, minus PI/2. (please, think of a good name)

Then, we would have no propagation of errors, because we do not need to calculate sines or cosines, and we don't have to multiply many tiny little numbers. And it would be very easy to interpolate as well.
Some examples:
void Basis::orthonormalize() {
	x_length = 1;
	y_length = 1;
	y_to_orthogonal_to_x = 0;
}

bool Basis::is_orthogonal() const {
	return (0 == y_to_orthogonal_to_x);
}

bool Basis::is_orthonormal() const {
	return (0 == y_to_orthogonal_to_x) && Math::is_equal_approx(x_length, 1, UNIT_EPSILON) && Math::is_equal_approx(y_length, 1, UNIT_EPSILON);
}

void Basis::rotate(real_t p_phi) {
	x_to_e1 += p_phi;
}
Shall I make a `Basis2D` proposal? It could be a model for a future `Basis3D` class. Of course, rotation in 3D is a bit more complicated.
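A hedged Python sketch of this representation (the names are the hypothetical ones from the list above; this is an illustration, not proposed engine code). Rotation is a single addition, so even a million tiny rotations preserve orthogonality exactly:

```python
import math

class Basis2D:
    """Toy angle/length representation of a 2D basis (illustrative only)."""

    def __init__(self, x_length=1.0, y_length=1.0,
                 x_to_e1=0.0, y_to_orthogonal_to_x=0.0):
        self.x_length = x_length
        self.y_length = y_length
        self.x_to_e1 = x_to_e1
        self.y_to_orthogonal_to_x = y_to_orthogonal_to_x

    def rotate(self, phi):
        # Rotation never touches lengths or the x/y relative angle:
        # no drift in orthogonality, ever.
        self.x_to_e1 += phi

    def is_orthogonal(self):
        return self.y_to_orthogonal_to_x == 0.0

    def x_axis(self):
        # Column vectors are reconstructed on demand.
        a = self.x_to_e1
        return (self.x_length * math.cos(a), self.x_length * math.sin(a))

b = Basis2D()
n = 1_000_000
for _ in range(n):
    b.rotate(2 * math.pi / n)
print(b.is_orthogonal())  # True: orthogonality preserved exactly
print(b.x_axis())         # ~(1, 0), up to additive rounding in the angle
```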
@andre-caldas If a `Basis2D` type existed, it should be just like the `x` and `y` vectors in `Transform2D` (like the 3D `Basis`). None of this length-and-angle business is necessary or desired; storing the basis vectors directly is what's wanted. When doing operations such as composing transforms or moving objects, matrix multiplication is exactly what is wanted.
@aaronfranke: None of this length and angle business is necessary or desired, storing the basis vectors directly is what's wanted. When doing operations such as composing transforms or moving objects, matrix multiplication is exactly what is wanted.
I want to start with a disclaimer. When I teach linear algebra, I always say to my students:

So, I hated it when @tagcup2 convinced me that angles might be a good thing. :-)

I am not advocating for angles! I do prefer to keep only coordinates and matrix multiplication. However, I'd like to make a little thought experiment:

This is what we do with computers, though. :-)

It is not a bad thing if it causes no harm. And in order to evaluate the problems this may or may not cause, we need to experiment. I made a little experiment:
var t_template = Transform2D()
var t
var n = 100 * 1000
var angle = 2 * PI / n
print("Rotating 2 PI...")
t = t_template
t = t.rotated(2 * PI)
print("T: ", t)
print("Rotating 2 PI in pieces... ", n, " pieces.")
t = t_template
for i in range(n):
	t = t.rotated(angle)
print("T: ", t)
And the result was:
Rotating 2 PI...
T: ((1, 0), (-0, 1), (0, 0))
Rotating 2 PI in pieces... 100000 pieces.
T: ((1.000197, -0.000002), (0.000002, 1.000197), (0, 0))
Of course, I had to make 100,000 rotations. But actually, the error shows up more with some angles than with others (because sine and cosine are not linear). Check this continuation of the experiment:
print("Rotating a little in pieces... ", n / 100, " pieces.")
t = t_template
for j in range(n/100):
	t = t.rotated(angle)
print("Rotating back, at once.")
t = t.rotated(-angle * n/100)
print("T: ", t)
This is "only" 1,000 rotations. The result:

Rotating a little in pieces... 1000 pieces.
Rotating back, at once.
T: ((1.000002, 0), (-0, 1.000002), (0, 0))

It is important to notice, however, that we do not lose orthogonality. Well, the "minus sign" near the 0 tells me that we might have, a little.
Keeping the angle and doing `angle += angle_increment` does not propagate errors that much. Constructing the vectors "x_axis" and "y_axis" on demand is like using a ruler instead of a stick! You measure a long distance from the starting point, not a small distance from the last point. :-)
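The ruler-versus-stick difference can be checked numerically. This Python sketch mirrors the GDScript experiment above, simulating single-precision floats (which is what Godot 3 uses by default): compounding 100000 tiny rotation matrices drifts away from a pure rotation, while keeping the total angle and building the matrix once does not:

```python
import math
import struct

def f32(x):
    # Round to IEEE single precision, like Godot 3's default real_t.
    return struct.unpack('f', struct.pack('f', x))[0]

def rot(theta):
    c, s = f32(math.cos(theta)), f32(math.sin(theta))
    return [[c, -s], [s, c]]

def matmul(a, b):
    # 2x2 product with float32 rounding after every operation.
    return [[f32(f32(a[i][0] * b[0][j]) + f32(a[i][1] * b[1][j]))
             for j in range(2)] for i in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

n = 100_000
step = rot(2 * math.pi / n)

# "Stick": compound 100000 tiny rotation matrices, one step at a time.
m = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    m = matmul(step, m)

# "Ruler": keep only the total angle and build the matrix once.
r = rot(f32(2 * math.pi))

print(abs(det(m) - 1.0))  # drifts visibly away from determinant 1
print(abs(det(r) - 1.0))  # stays at single-precision round-off
```

The drift in the compounded version matches the scale creep (1.000197 per axis) seen in the GDScript output above.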
Again, I am not advocating for angles! I hate angles myself.
In any case, I will make a proposal for `Basis2D`. I'd like to invite you to help. We can create two versions of `Basis2D` and get many people to test both versions.
Describe the project you are working on
I am just learning Godot. I want to use Godot to produce animations like in manim... in such a way that I can really play the animation. Like a game! :-)
Describe the problem or limitation you are having in your project
When manipulating `Node2D` objects, I feel that `Node2D`, `Transform2D` and the non-existent `Basis2D` are not integrated with one another. `Transform2D` has many errors (like godotengine/godot#48712), and `Node2D` does a lot that should instead be done by `Transform2D`. `Node2D` also keeps track of a lot of redundant information, like `angle`, `scale`, `position` and `skew`.

Many methods in `Transform2D` are incorrect in the sense that they make assumptions that are not always true. Because of this, the documentation is very imprecise.

Describe the feature / enhancement and how it helps to overcome the problem or limitation
`Transform2D`, `Basis2D` (which does not exist yet) and `Node2D` should be refactored to integrate them better. Each one must have its own role, and those roles should not overlap. Implemented methods would have a precise definition/documentation describing their geometric and arithmetic meaning.

The GDScript API would also change to make it cleaner. The API would be kept easy and simple for non-advanced users, but also powerful (through integration with `Transform2D` and `Basis2D`) for advanced users.

Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
I will describe it after answering all the mandatory questions. :-)
If this enhancement will not be used often, can it be worked around with a few lines of script?
It will be used very often. Of course, anyone can live without it, though.
Is there a reason why this should be core and not an add-on in the asset library?
Because it is a refactoring of some of the most basic native types: `Node2D` and `Transform2D`.

Very long description of the proposal follows...
This is a long report on my own impressions of `Basis`, `Transform`/`Transform2D`, `Vector2`, `Node2D`, etc. I have known Godot for just a few weeks, and I have just learned how to make an app that draws a polygon. So, I am sorry if I got many things wrong! :-)

If this proposal gets interest, maybe it should become a wiki, because it is just a sketch, it is too big, and it needs collaboration to reach consensus and be finished.
Mathematical classes, like `Transform2D` and `Transform3D`, need to be very well defined and mathematically correct. It seems that there is a lot of cargo-cult practice involved in the use of these mathematical tools. I think that precise definitions and consistent behaviour can prevent this from happening. Why do I do this and not the other way? Because I have tested it, and it gave me the result I was expecting. The user tends to adjust parameters in some sort of trial-and-error approach. I catch myself doing this a lot! :-)

I was very surprised when I realized that `Node2D`, instead of simply using `Transform2D`, uses `pos` (position), `angle`, `_scale`, `skew` and a `Transform2D _mat` variable! This is a lot of redundant information to deal with.

I will talk about `Transform2D`, since the 3D case is analogous. The first thing that needs to be stated clearly is the fact that `Transform2D` represents an affine transformation of the plane.

How to interpret it / motivation
This section is not the proposal. Most of what is here is already implemented. It is just an introduction to describe what this is all about; in particular, what the roles of `Basis` and `Transform` are in Godot. It is a long introduction... feel free to skip it!

Imagine we start with an empty canvas. Or an empty sheet of paper, if you prefer. In abstract language, we have a plane. Let's call it P. I want to instruct you what to draw. We shall use a system of coordinates. So, we agree to pick an arbitrary point and call it `origin`. Then, we choose two axes passing through this `origin`, and call them the `x-axis` and the `y-axis`. Each of those axes has a scale. Now, with two real numbers (a,b), we can specify a point in the plane. We can specify many points and "draw" on this plane.

With a coordinate system, we can now use pairs of numbers (`Vector2D`) to refer to points in this plane P. If you like math, what we have is a function

T : R^2 -> P

that gives us a point in P for each pair of real numbers (x,y). This is what `Transform2D` is when seen from inside. You give a `Transform2D` object a pair of coordinates, and it tells you "where" the point should be drawn on the canvas.

Now that we have coordinates, I can instruct you to draw a triangle:

(0,0), (1,0), (0,1)
But then, I want this same triangle, but rotated 90 degrees counterclockwise about the `origin`. I can recalculate the points: (0,0), (0,1) and (-1,0).

But imagine that we are not talking about three points. We are talking about a very complex structure made of many points. So, instead of rotating the points and giving you a new array of points, I could simply instruct you to rotate the axes!
If we rotate the axes, the `Transform2D` as seen "from inside" does not change. But the outside, how it is presented to the canvas, does change. By the way, we did not address the problem of how to specify either the `origin` or the axes `x` and `y`. Even if we don't know yet how to specify this information, it is clear that we can move our drawing around just by changing the value of `origin`. We can scale our drawing (about the `origin`) by "stretching" the axes `x` and `y`.

But how do we specify `origin`, the `x`-axis and the `y`-axis? Well, if the canvas already has a coordinate system, it is easy.

Even if the canvas already has a coordinate system, it is still worth specifying a new one: a coordinate system inside the global coordinate system. A local coordinate system allows us to design objects in a "standard" coordinate system and then stick them to a canvas through a `Transform2D` object. Ultimately, we need to transform those coordinates into coordinates that can be used by the hardware that will present the data to us. This is the reason why `Transform2D` has its name. If we specify the point `origin` and the vectors `e1` and `e2` using global coordinates, as shown in the drawing below, we have a formula that takes coordinates (a,b) in the local system and converts them to coordinates in the canvas coordinate system:

T(v) = this->origin + v.x * this->e1 + v.y * this->e2

This is called an affine transform. The part `v.x * this->e1 + v.y * this->e2` is the "linear part" of the transform, while `this->origin` is sometimes called "origin", sometimes called "translation". From the local perspective, it makes sense to call it "origin". And from the outer perspective, it makes sense to call it "translation", because it is the amount you have to translate from the global origin.

Now, if we want to transform our drawing, instead of changing the point coordinates, we can change the `origin` and the axes. If we translate the `origin` point to some other place, it is like translating the whole drawing. If we stretch some axis, it is just like stretching the drawing. If we rotate the axes about the `origin`, it is just like doing the same with the drawing. Also, you might want to do a rotation about some other point, different from the `origin`.

This is exactly what happens when we have nested `Node2D`s. You specify the node `position` and two vectors `x` and `y` that determine the axes. And to specify those, you use the `Node2D`'s parent coordinate system. So, `Node2D` should have its coordinate system specified by means of an instance of `Transform2D`. It could be called `Node2D::transform` or `Node2D::coordinate_system`.

If `A` is a `Node2D`, `B` a `Node2D` nested in `A`, and `C` a `Node2D` inside of `B`, then, in order to convert a `Vector2D v` representing coordinates in `C` to coordinates in `A`, all we have to do is calculate

B.transform.apply(C.transform.apply(v))

Instead of calculating two transformations each time (imagine we have millions of vectors v), it would be better to compose once and apply the composition:

T = B.transform * C.transform
T.apply(v)
Usually the Godot programmer does not need to calculate these compositions. The Godot engine will automatically do this for nested `Node2D`s.

The basic operations a user might be interested in doing with a `Node2D` are `scaling`, `rotation` and `translation`.

It is also important to notice that access to the linear part of `Transform2D` is important. When we deal with "points", we use the full `Transform2D`. But when we deal with things that are relative, like "distance vectors" or "speed", we use only the linear part. It is just like the conversion of temperatures: 0°C corresponds to 32°F, but a difference of 0°C corresponds to a difference of 0°F.

Some considerations
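A minimal sketch of that distinction (toy code; `basis_xform` borrows the name Godot uses for the linear-part-only transform):

```python
# Sketch: points use the full affine transform, relative quantities only
# the linear part (toy code, not the actual Godot API).

origin = (10.0, 20.0)
e1 = (0.0, 1.0)   # 90-degree rotated basis
e2 = (-1.0, 0.0)

def xform(v):
    # Full affine map: linear part plus origin.
    return (origin[0] + v[0] * e1[0] + v[1] * e2[0],
            origin[1] + v[0] * e1[1] + v[1] * e2[1])

def basis_xform(v):
    # Linear part only: right for velocities and distance vectors.
    return (v[0] * e1[0] + v[1] * e2[0], v[0] * e1[1] + v[1] * e2[1])

p, q = (1.0, 0.0), (2.0, 0.0)
print(xform(p))                              # (10.0, 21.0): a point
print(basis_xform((q[0] - p[0], q[1] - p[1])))  # (0.0, 1.0): no origin offset
```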
About the `Vector2D` (and `Vector3D`) class.

`Vector2` should be called `Vector2D`. I know it "sucks"! But this is the time to make the change!

About the `Basis2D` (and `Basis3D`) class.

There should be a `Basis2D` native object type. It represents an invertible linear transform (a 2x2 matrix with non-null determinant).

The ultimate usefulness of a `Basis2D` object is to be applied to a `Vector2D` object:

B.apply(v) = v.x * B.x + v.y * B.y

In terms of matrices, `Basis2D(x, y)` is equivalent to the matrix whose columns are `x` and `y`.

This class could have static methods to provide common transforms. For example, `Basis2D::rotation(theta)` should return a `Basis2D` instance corresponding to the matrix

[[cos(theta), -sin(theta)],
 [sin(theta),  cos(theta)]]

This class would have unary methods:
`invert()`, `transpose()`, `rotate()`, `scale()`, etc. It could also have "const" methods that return a copy: `inverse() const`, `transposed() const`, `rotated() const`, etc. Also, `determinant() const`.

The method `rotate()` must be defined/documented with care. Some might understand two different things about `T.rotated(angle).apply(v)`:

1. "Rotate" `v` and then apply `T` to the result.
2. Apply `T` to `v` and then "rotate" the result.

IMHO, if `T` already represents the resultant transformation up to "now" and you want to further rotate things afterwards, then the natural choice is option 2. Therefore, `Basis2D R = T.rotated(angle)` should be equivalent to applying `T` first and then rotating the result. Or, if we define `operator*` as composition: `Basis2D R = Basis2D::rotation(angle) * T`. The definition of `operator*` should be something like this: for two `Basis2D` instances, `F` and `G`, `F * G` shall result in a `Basis2D` such that `(F * G).apply(v)` gives the same result as `F.apply(G.apply(v))`. :-)

The `operator*` method corresponds to matrix multiplication. But, although I have written "matrix this, matrix that...", I do not think that matrices should be emphasized. In particular, I do not think method names and variable names should refer to "rows" and "columns". Nor should they refer to "right multiplication" or "left multiplication". If we choose to represent `Basis2D` by putting `Basis.e1` and `Basis.e2` in rows instead of columns, we get exactly the same `Basis2D` "concept", but the roles of "rows/columns" and "left/right" become swapped:

Documentation can use matrices to illustrate how `Basis2D` works. But `Basis2D` should be independent of "matrices". It is a linear transform, and therefore independent of your choice of thinking of vectors as "rows" or "columns". Especially when dealing with `Transform2D` and `Transform3D`, here is an example of how it can be confusing: #1336.
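Under the column convention, this composition law is exactly 2x2 matrix multiplication, which a short sketch can verify (using the proposal's hypothetical `apply`/composition notation, not an existing API):

```python
# Toy Basis2D as a pair of column vectors (proposal notation, not Godot API).
def apply(basis, v):
    (x, y) = basis
    return (v[0] * x[0] + v[1] * y[0], v[0] * x[1] + v[1] * y[1])

def compose(f, g):
    # The columns of F*G are F applied to the columns of G:
    # this IS matrix multiplication.
    (gx, gy) = g
    return (apply(f, gx), apply(f, gy))

F = ((0.0, 1.0), (-1.0, 0.0))   # 90-degree rotation
G = ((2.0, 0.0), (0.0, 3.0))    # scale by (2, 3)
v = (1.0, 1.0)

# (F * G).apply(v) == F.apply(G.apply(v))
print(apply(compose(F, G), v))  # (-3.0, 2.0)
print(apply(F, apply(G, v)))    # (-3.0, 2.0)
```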
Internally, `Basis2D` can consist of just two `Vector2D` variables: `ex` and `ey`. We could have aliases (getters/setters?) so people could refer to `ex` by other names like `x`, `e1` or `i`; and to `ey` by `y`, `e2` or `j`.
About the `Transform2D` (and `Transform3D`) structure.

`Transform2D` should be formed by two variables: one `Basis2D linear_part` and one `Vector2D origin`. There could be aliases for `origin`: `position` and `translation`. The variable `linear_part` could also be called `axis`. Also, there could be aliases for `linear_part.x` and `linear_part.y`: just `x` and `y`.
Operations with `Transform2D` are a bit more complicated and can be very poorly defined if we are not careful. For example, if `t` is a `Transform2D` instance, what do we mean by `var r = t.translated(Vector2D(1,1))`?

1. `r.origin = t.origin + Vector2D(1,1)`?
2. `r.origin = t.origin + t.linear_part.apply(Vector2D(1,1))`?

From the point of view of the "outside" of `t`, that is, from the point of view of someone manipulating `t`, item 1 makes more sense. Remember that `t.origin` is specified in terms of the outer coordinate system; that is, the canvas coordinate system in the analogy from the beginning of this document. However, from the point of view of `t` itself, that "translation by `v`" would make more sense if `v` is a vector specified in `t`'s own coordinates. In this case, item 2 would make sense. So, what should we do? Have one `local_translation` and one `translation` (or some other name indicating it is not local)? Have only the local `translation`? (If someone wants a non-local translation, s/he can simply do `t.origin += v`.) Or have no translation method at all, and have people change `t.origin` manually?

The local version might be a little harmful, because `t.linear_part.apply(v)` is recalculated on every call without the caller being aware of it. Maybe the caller should calculate it and always use the non-local version. For example, if `v` is the "speed", then `a * v` would be the translated amount after `a` units of time. Then, instead of calling `t.locally_translated(a*v)` every millisecond and unconsciously computing `t.linear_part.apply(a*v)` every time, you could simply use `t.translated(a*w)`, where `w = t.linear_part.apply(v)`. I don't know anything about FPUs and GPUs, so I don't know if this would be a non-problem with the help of hardware.

The last option would not be a bad one if the "dot syntax" was not so useful:
t.translated(v).rotated(theta).translated(v)...
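The two candidate meanings of `translated()` differ whenever the linear part is not the identity, which a sketch makes concrete (using the proposal's hypothetical `linear_part`/`apply` names):

```python
# Toy affine transform (proposal's hypothetical naming, not Godot API).
def apply_linear(basis, v):
    (x, y) = basis
    return (v[0] * x[0] + v[1] * y[0], v[0] * x[1] + v[1] * y[1])

basis = ((0.0, 1.0), (-1.0, 0.0))   # 90-degree rotation
origin = (10.0, 10.0)
v = (1.0, 1.0)

# Meaning 1: non-local translation, v given in the OUTER coordinate system.
non_local = (origin[0] + v[0], origin[1] + v[1])

# Meaning 2: local translation, v given in t's OWN coordinates.
w = apply_linear(basis, v)
local = (origin[0] + w[0], origin[1] + w[1])

print(non_local)  # (11.0, 11.0)
print(local)      # (9.0, 11.0): same v, different result
```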
By the way, what do we mean by `var r = t.rotated(theta)`?

When dealing with linear transforms, a rotation is necessarily a rotation about the origin. Otherwise, the result is not a linear transform. But in the context of `Transform2D` (affine transforms), a "rotation" can be about any point. There is one very natural point someone might be interested in rotating an object about: the local origin. The local origin can be interpreted as the "position" of the object. If someone says "rotate the object", s/he probably means "rotate the object without changing its position". This is the easiest to accomplish: simply rotate `ex` and `ey`.

It is also very easy to rotate the object about the non-local origin. I don't think rotation about the non-local origin should be an implemented method. From the point of view of the object being rotated, it is just an "arbitrary point" of the plane. But it is easy to execute: all you have to do is rotate `origin` as well as `ex` and `ey`.

One might as well be interested in rotating the `Transform2D` about some arbitrary point `p`. The first step is to simply rotate locally. That is, first we rotate `ex` and `ey`. Then, we just need to rotate `origin` about the point `p`. The easiest case is when `p` is given in non-local coordinates, because so is `origin`:

1. Translate so that `p` goes to (0,0): `origin -= p`.
2. Rotate `origin`.
3. Translate back: `origin += p`.
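The three steps above, sketched in plain Python (illustrative names only, not Godot API):

```python
import math

def rot(theta, v):
    """Rotate a 2D vector by theta radians, counter-clockwise."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def rotated_about(ex, ey, origin, p, theta):
    """Rotate the transform (ex, ey, origin) about point p,
    with p and origin both in non-local coordinates."""
    # Rotate the basis vectors: a rotation about the local origin.
    ex, ey = rot(theta, ex), rot(theta, ey)
    # Rotate origin about p: send p to (0,0), rotate, send it back.
    o = (origin[0] - p[0], origin[1] - p[1])
    o = rot(theta, o)
    origin = (o[0] + p[0], o[1] + p[1])
    return ex, ey, origin

# Quarter turn about p = (1, 0), starting from the identity transform:
# the origin (0, 0) ends up at (1, -1), up to floating-point error.
ex, ey, origin = rotated_about((1, 0), (0, 1), (0, 0), (1, 0), math.pi / 2)
```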
Scaling is completely analogous to rotation! Scaling is not well defined if we do not provide a little more information. For example, we can choose a point to "scale about": everything is stretched, but this point is kept fixed. To stretch locally and not move the object position, one just needs to scale the axes `ex` and `ey`.

To do a non-local scaling about an arbitrary point `p` (in non-local coordinates), the recipe is the same: scale `ex` and `ey`, translate `origin` by `-p`, scale it, and translate it back by `p`.

Actually, we could even generalize this and apply any linear transform (`Basis2D`) about an arbitrary point `p` (in non-local coordinates). But I do not think we should have such a method.
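For the record, a plain-Python sketch of that generalized "apply a linear map about `p`" operation (illustrative only; the "`Basis2D`" here is just a 2x2 list of rows):

```python
def mat_apply(m, v):
    """Apply a 2x2 matrix (list of rows) to a 2D vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def transformed_about(ex, ey, origin, p, m):
    """Apply the linear map m "about" point p (non-local coordinates):
    the basis vectors are transformed directly, while origin is
    transformed relative to p, so the point p itself stays fixed."""
    ex, ey = mat_apply(m, ex), mat_apply(m, ey)
    o = (origin[0] - p[0], origin[1] - p[1])
    o = mat_apply(m, o)
    origin = (o[0] + p[0], o[1] + p[1])
    return ex, ey, origin

# Scaling is just the special case of a diagonal linear map.
# Scale by 2 about p = (1, 1): the identity transform sitting at (0, 0)
# moves to (-1, -1), while p itself stays put.
scale2 = [[2, 0], [0, 2]]
ex, ey, origin = transformed_about((1, 0), (0, 1), (0, 0), (1, 1), scale2)
print(ex, ey, origin)  # (2, 0) (0, 2) (-1, -1)
```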
**About `Node2D` (and `Node3D`) structure.**

Unfortunately, `Node2D`'s current implementation does not make use of `Transform2D`. A `Transform2D _mat` variable is "kept in sync" with other variables `pos`, `angle`, `_scale` and `skew`. Instead, everything should be done in terms of `Transform2D`.

I would just suggest `Node2D` to be a (protected) subclass of `Transform2D`. If GDScript does not allow multiple inheritance, instead of `ClassDB::bind_method` for every `Transform2D` operation, we should simply implement a cast (for usage in GDScript). We could, of course, use `ClassDB::bind_method` for very simple and common use cases: `scale` and `rotate`. But more complex operations should be done directly through `Transform2D`.

**Current source code status.**
This is a very difficult topic to write about. I do want to criticize the code and suggest improvements. However, I do not want to make any developer feel uncomfortable in any manner! So, I'd like to start this section by apologizing! :-)
**`Vector2D` and `Vector3D`**

Rename `Vector2` and `Vector3` to `Vector2D` and `Vector3D`.

**`Basis2D`**

There is no `Basis2D` yet. It should be implemented exactly as `Basis3D`.

**`Basis3D`: basis.h and basis.cpp**

`Basis` is very well written.

**Matrices... rows and columns...**
The original (and proposed) class name suggests that `Basis3D` is not just a matrix. Its data can be represented by a matrix, and its operations can be translated into matrix language. In general, I do not see any reason to do this. A matrix is a table of numbers over which many operations are defined. The meaning of the table, as well as the meaning of those operations, depends on what the matrix is being used to represent.

`Basis3D` is not just a matrix. As the name suggests, it represents some basis for the 3-dimensional vector space we are working in. Those vectors could be called `ex`, `ey` and `ez`, for example. But they are called `elements[0]`, `elements[1]` and `elements[2]`. So far, so good!

`Basis3D` also represents a linear transform, in a very simple way. This linear transform converts local coordinates to canonical coordinates: `(x, y, z) ↦ x*ex + y*ey + z*ez`. There is no "left"/"right", no "row"/"column" in this.
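To make this concrete, here is the conversion in plain Python (illustrative only, not Godot code; the names `ex`, `ey`, `ez` follow the text):

```python
# Local coordinates (x, y, z) with respect to the basis {ex, ey, ez}
# map to canonical coordinates as x*ex + y*ey + z*ez.
def to_canonical(ex, ey, ez, local):
    x, y, z = local
    return tuple(x * a + y * b + z * c for a, b, c in zip(ex, ey, ez))

# A basis that doubles x and swaps the y and z axes:
ex, ey, ez = (2, 0, 0), (0, 0, 1), (0, 1, 0)
canonical = to_canonical(ex, ey, ez, (1, 2, 3))
print(canonical)  # (2, 3, 2)
```

No rows or columns appear anywhere: only the three basis vectors.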
Matrices, however, when you "draw" them as a table, do have "rows" and "columns". Vectors represented as matrices can be in the shape of "row matrices" or "column matrices". If we use "column matrices", then conversion from local `(x,y,z)` coordinates can be calculated as `M * (x,y,z)ᵗ`, where `M` is the matrix whose columns are `ex`, `ey` and `ez`, with `(x,y,z)`, `ex`, `ey` and `ez` all represented by "columns". If, however, we choose "row matrices", we get `(x,y,z) * M'`, where `M'` is the matrix whose rows are `ex`, `ey` and `ez`.

We can think of matrices as a box that translates from "local" to "global" coordinates. When we use "columns", the "local side" of the matrix is the "right side". If you operate on the matrix from the "global point of view", you operate on the "left". If you want to operate "locally", you operate on the "right".
But this is when you choose to represent vectors as "columns". It happens that people usually like to consider vectors as "rows". But at the same time, they like to treat the matrix as if "from the right" means "local" and "from the left" means "global". This is not consistent!
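The "column" convention can be checked numerically in plain Python (again illustrative, not Godot code): build the matrix with `ex`, `ey` and `ez` as columns, and both the coordinate conversion and the column lengths fall out.

```python
import math

def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a column vector."""
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

ex, ey, ez = (2, 0, 0), (0, 0, 1), (0, 1, 0)

# Column convention: the basis vectors are the COLUMNS, so row i of
# the matrix collects the i-th component of each basis vector.
M = [[ex[0], ey[0], ez[0]],
     [ex[1], ey[1], ez[1]],
     [ex[2], ey[2], ez[2]]]

canonical = mat_vec(M, (1, 2, 3))
print(canonical)  # (2, 3, 2), the same as 1*ex + 2*ey + 3*ez

# The lengths of the columns are the lengths of the basis vectors,
# which is exactly what a "get scale" helper ends up computing.
lengths = [math.sqrt(sum(c * c for c in col)) for col in zip(*M)]
print(lengths)  # [2.0, 1.0, 1.0]
```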
Currently, the `Basis` constructor gets 3 vectors passed to it. I was supposing they were the "basis vectors", as the class name suggests. They are assigned to `elements[n]` and treated like "rows" of the matrix, because the matrix has entries `elements[n][m]`, and people like to say that `n` is the "row" and `m` is the "column". So, vectors are rows, right?

The code, however, treats "local" things as done "from the right" and "global" things as done "from the left". That is, vectors are columns??? This is a little problematic, and this fact can be checked at the `Basis::get_scale_abs()` definition:

```cpp
Vector3 Basis::get_scale_abs() const {
	return Vector3(
			Vector3(elements[0][0], elements[1][0], elements[2][0]).length(),
			Vector3(elements[0][1], elements[1][1], elements[2][1]).length(),
			Vector3(elements[0][2], elements[1][2], elements[2][2]).length());
}
```

What is being calculated here is the length of the columns of the matrix!!! With the notation of `ex`, `ey` and `ez`, the code would be:

```cpp
Vector3 Basis::get_scale_abs() const {
	return Vector3(ex.length(), ey.length(), ez.length());
}
```

Another example is the comment at the beginning of `Transform2D`'s constructor, which has to explain how the stored elements relate to the matrix "on paper".

I hope this convinces anyone of the harm that those unneeded "matrices", "rows", "columns", "lefts" and "rights" may cause. Since `Basis3D` and `Basis2D` are not part of a multidimensional matrix library, my suggestion is to avoid double indexes and simply use `ex`, `ey` and `ez`. I am not saying, of course, that the matrix row is not important! The "rows", that is, vectors like `Vector3D(ex.x, ey.x, ez.x)`, have important meaning, especially when you are looking "from the inside to the outside". But those too ought to be called by a name that is meaningful in its context, not "row"! For example, the first "row" is the gradient (in local coordinates) of the x "global coordinate". So, in this context, instead of calling it "row", it could be called "x-gradient".

**Roadmap.**
- Rename `Basis` to `Basis3D`.
- Check the entries (`MATH_CHECKS`) during construction. (IMO)
- Use `ex`, `ey` and `ez`, just like `Transform2D` uses `x` and `y`.

**transform.h and transform.cpp**

- Rename `Transform` to `Transform3D`.
- `inverse()` has to invert always. See: godotengine/godot#48712.
- Give the user direct access to `Transform3D::basis`. (Instead of implementing `scale_basis()`, for example.)

**transform_2d.h and transform_2d.cpp**

- `Transform2D` has to be implemented (almost) exactly like `Transform3D`.

**node_2d.h and node_2d.cpp**

- `Node2D` is basically a `CanvasItem` with coordinates. It is a `Transform2D` that can be put in a canvas. Maybe `Node2D` should subclass both: `CanvasItem` and `Transform2D`. Everything you might want to do with a `Transform2D`, you want to do with a `Node2D`: rotate, translate, scale, etc.
- Implement everything in terms of `Transform2D`.
- Remove `angle`, `_scale`, `skew`, etc.
- Implement the `get_transform()` cast as explained above (somewhere).
- Keep `rotate`, `translate` and `scale` functions for the ease of use, but eliminate the rest. If the user wants more complex manipulations, s/he should call `get_transform()`.
- `Node3D`: Do by analogy! :-)