As it stands, examples of the rotation transform don't work properly because the API normalises the canvas origin to (0, 0).
The rotation operation still assumes the origin is (250, 250) (with the default canvas dimensions of 500 × 500), so the returned shapes often fly off the canvas.
We should fix it.
Moreover, I find the current code hard to follow. I suggest refactoring it to convert from cartesian coordinates to polar coordinates, apply the rotation, and then convert back to cartesian coordinates; that would be simpler to understand. Alternatively, adding some comments might also help.
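For illustration, here is a minimal sketch of the suggested cartesian → polar → cartesian approach, written in Python. The function name `rotate_point` and its parameters are hypothetical, not the project's actual API; the key idea is translating so the desired rotation center becomes the origin before converting to polar form:

```python
import math

def rotate_point(x, y, angle, cx=0.0, cy=0.0):
    """Rotate (x, y) by `angle` radians around the center (cx, cy)."""
    # Translate so the rotation center becomes the origin.
    dx, dy = x - cx, y - cy
    # Cartesian -> polar.
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    # Rotate in polar form, then convert back and undo the translation.
    theta += angle
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))

# Rotating (350, 250) by 90 degrees about the canvas center (250, 250)
# should land on (250, 350).
print(rotate_point(350, 250, math.pi / 2, cx=250, cy=250))
```

With `cx`/`cy` defaulting to (0, 0), the same function also matches the API's normalised origin, which would fix the fly-off-canvas behaviour described above.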