raffaeler opened this issue 4 years ago
So, when you go from zero to one active pointers and move the pointer, only
tx = x - initial_x
ty = y - initial_y
change; and when you go from one to zero or two active pointers, only the translation needs to be composed into the accumulated matrix.
When you have two active pointers, imagine a line connecting the two points at the moment you enter the two-active-pointers state. Let's consider the origin (ox, oy) of the gesture: given the two points p1 = (x1, y1) and p2 = (x2, y2), it is their midpoint
const ox = (x1 + x2) / 2;
const oy = (y1 + y2) / 2;
const initial_radians = Math.atan2(y1 - y2, x1 - x2);
const initial_distance = calcDistance(x1, y1, x2, y2);
function calcDistance(x1, y1, x2, y2) {
const dx = x1 - x2;
const dy = y1 - y2;
return Math.sqrt(dx * dx + dy * dy);
}
Then ox, oy, initial_radians and initial_distance are constant as long as the number of active pointers doesn't change
When either moves, you have
const tx = (x1 + x2) / 2 - ox;
const ty = (y1 + y2) / 2 - oy;
const scale = calcDistance(x1, y1, x2, y2) / initial_distance;
const radians = Math.atan2(y1 - y2, x1 - x2) - initial_radians;
And when you go from two to either one or zero active pointers, you can accumulate all transforms into a single matrix.
Oh, my browser hadn't updated with the latest comment when I replied. setNativeProps might help you a bit there, or some combination of reanimated and gesture-handler, but going completely tailor-made native will certainly be the way to resolve it optimally.
And in case this helps someone now or in the future: to calculate the accumulatedMatrix, convert the primitive transforms into matrix representation and multiply them together. E.g. if you have a chain with more than one matrix AB..., then just use multiply_matrices(A: Matrix, B: Matrix): Matrix or something similar to turn two matrices into one, until you only have one left, e.g.
const accumulatedMatrix = [A, B, C].reduce(multiply_matrices)
The order in which you do the compositions / multiplications when you have more than two doesn't matter, e.g. ((AB)C) = (A(BC)), i.e. it's associative => reduceLeft = reduceRight, and thus it's straightforward to compute individual compositions in parallel. But the order of e.g. a translate T and a rotate R matters, TR != RT, i.e. it's noncommutative (the order of operations matters).
In this specific case I guess it makes most sense to use the api provided by react-native itself, i.e.
createIdentityMatrix: function() https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L20-L22
createTranslate2d: function(x, y) https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L84-L88
createScale: function(factor) https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L101-L105
createRotateZ: function(radians) https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L156-L160
multiplyInto: function(out, a, b) https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L170-L223
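To make the accumulation concrete, here is a minimal sketch (the helper name and the import from React Native's internal path are my own choices, and the multiplication order should be double-checked against your rendering convention) that folds the per-gesture values from above into an accumulated 4x4 matrix using those functions:

import MatrixMath from 'react-native/Libraries/Utilities/MatrixMath';

// Fold the finished gesture (pan tx/ty, scale, rotation in radians, about origin ox/oy)
// into the previously accumulated matrix.
function accumulateGesture(
  prev: number[],
  tx: number, ty: number,
  scale: number,
  radians: number,
  ox: number, oy: number,
): number[] {
  const out = MatrixMath.createIdentityMatrix();
  const steps = [
    MatrixMath.createTranslate2d(ox + tx, oy + ty), // move the origin back, plus the pan
    MatrixMath.createRotateZ(radians),
    MatrixMath.createScale(scale),
    MatrixMath.createTranslate2d(-ox, -oy),         // move the gesture origin to (0, 0)
    prev,                                           // everything accumulated so far
  ];
  for (const m of steps) {
    MatrixMath.multiplyInto(out, out, m); // compose with the next step
  }
  return out;
}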
@wcandillon :) Now you can understand my huge surprise when I discovered that react-native-gesture-handler and reanimated do not support matrices. I remember the Foley, van Dam book as one of the most important I ever read in my life. Graphics is all about matrix calculations.
As @msand nicely summarized, the important thing is preserving the order. You start from the identity as your state. During the gesture, you just build the B matrix with all the transformations coming from rotation, scale and translate, where rotation and scale each also involve two translations representing, respectively, the center of rotation and of scale. When the gesture finishes, you just multiply the state with B and obtain the new state, while B is reset, of course, to the identity. Luckily for us, as @msand wrote, you can keep state and B separate:
transform: [
{ matrix: [1.5, 0, 0, 0, 0, 1.5, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1] },
{ matrix: [.5, 0, 0, 0, 0, .5, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1] }
]
This means you never have to multiply matrices together, but just keep the first as the previous state and the second one as the current ongoing gesture.
I am making slow progress, both because I am also working on another project and because I started with react-native only last week; since I need to work with TypeScript, I have some additional issues. For example, I spent some time understanding how to map MatrixMath.js into my TypeScript project, but now it works very nicely :)
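For anyone else wiring MatrixMath.js into TypeScript, a minimal hand-written declaration along these lines is one option; the module path, the default export and the types are assumptions based on the functions linked above, not an official typing:

declare module 'react-native/Libraries/Utilities/MatrixMath' {
  type Matrix = number[]; // flat 16-element 4x4 matrix

  const MatrixMath: {
    createIdentityMatrix(): Matrix;
    createTranslate2d(x: number, y: number): Matrix;
    createScale(factor: number): Matrix;
    createRotateZ(radians: number): Matrix;
    multiplyInto(out: Matrix, a: Matrix, b: Matrix): void;
  };

  export default MatrixMath;
}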
Will keep you updated!
Actually, if you use the midpoint of the two initial points when two pointers become active, you can use the same origin for both scale and rotation: as it's on the line connecting the points, it'll rotate correctly, and because it's in the middle, it'll scale the distance between the points correctly / evenly.
Actually, the other way around: because it's on the line, scale is correct; because it's in the middle, rotation is correct.
Managed to even confuse myself now ;) Either way, the midpoint should work as the origin: when you place two pointers on a surface, the point in the middle should stay in the middle, even if you move the two points around.
A slightly more efficient (by half) approach is possible with a large number of serial matrix multiplications, doing O(2^((log2 N) - 1)) instead of O(N - 1) operations, thanks to associativity: halve the amount of remaining work at each high-level step, i.e. take every pair of remaining compositions (modulo an odd last one, which is left unchanged) and reduce each pair to a single element.
In this specific case, it's also possible to use the fact that affine 2d transforms only require six numbers, and that e.g. the origin offset and translation are additive, to reduce the number of atomic floating point unit operations that are needed to compute the accumulated matrix. Write out the actual algebraic expression for the computation, refactor out any reused parts, simplify if possible, and do the math without a single branch / jump operation, to maximise performance.
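As an illustration of the six-number idea (only a sketch; the [a, b, c, d, tx, ty] layout and the column-vector convention are my assumptions), composing two such affine transforms takes just a handful of multiply-adds:

// A 2D affine transform stored as six numbers, interpreted as the matrix
// [[a, c, tx], [b, d, ty], [0, 0, 1]] acting on column vectors.
type Affine2D = [number, number, number, number, number, number];

// Compose two affine transforms: apply `first`, then `second`.
function composeAffine(second: Affine2D, first: Affine2D): Affine2D {
  const [a1, b1, c1, d1, tx1, ty1] = second;
  const [a2, b2, c2, d2, tx2, ty2] = first;
  return [
    a1 * a2 + c1 * b2,
    b1 * a2 + d1 * b2,
    a1 * c2 + c1 * d2,
    b1 * c2 + d1 * d2,
    a1 * tx2 + c1 * ty2 + tx1,
    b1 * tx2 + c1 * ty2 * 0 + d1 * ty2 + ty1 - d1 * ty2 + d1 * ty2,
  ];
}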
@msand Agree, I was also thinking of a dedicated library reducing the number of sums/muls.
Do you have any idea on how to preserve the smoothing provided by Animated.event() when working with matrices?
I'd assume it's just the flow of gesture events that makes it smooth, unless you're combining it with Animated.{decay, timing, spring, Easing} or with a ScrollView, in which case it's the decelerationRate / decay coefficient: https://reactnative.dev/docs/scrollview#decelerationrate https://reactnative.dev/docs/animated#decay
analytical spring model based on damped harmonic oscillation https://reactnative.dev/docs/animated#spring
Easing + timing interpolation https://reactnative.dev/docs/easing https://reactnative.dev/docs/animated#timing
I'm certainly too overworked to think clearly: the structure of computing a binary tree of pairs only decreases the time required if there's more than one processor (modulo communication / sync overhead); emulating the parallel computation serially requires more operations than just doing the reduction straight.
After some testing, I am thinking that the best strategy (before going native) is:
I've built an example using setState() which works well (not super clean, just as an experiment):
cond(eq(state, State.END), [
  call(
    [pinch.x, pinch.y, origin.x, origin.y, scale],
    ([pinchX, pinchY, originX, originY, scale]) => {
      setTransform([
        ...transform,
        { translateX: pinchX },
        { translateY: pinchY },
        { translateX: originX },
        { translateY: originY },
        { scale },
        { translateX: -originX },
        { translateY: -originY }
      ]);
    }
  )
])
// ...
<Animated.Image
  style={[
    styles.image,
    {
      transform: [
        ...transform,
        ...translate(pinch),
        ...transformOrigin(origin, {
          scale
        })
      ]
    }
  ]}
  source={require("./assets/zurich.jpg")}
/>
Here, potential optimizations would be to use an accumulated matrix instead of recalculating the matrix every time, and maybe to use setNativeProps(). While this would work quite well in practice (right?), I'm drawn to the challenge of doing this only on the UI thread.
The React Native MatrixMath points to this pseudo-algorithm: https://www.w3.org/TR/css-transforms-1/#decomposing-a-2d-matrix. While reanimated doesn't work with matrices, it would work with the decomposed form, so we could build the functions in reanimated to calculate the matrix and decompose it (might be lots of work). I'm also wondering if there are some shortcuts we could take, since we are trying to do this for a specific transformation; we are not necessarily trying to solve the general case.
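For the transforms discussed in this thread (uniform scale + rotation + translation, no skew) the decomposition collapses to something quite small. This is only a sketch, and the flat layout with the translation at indices 12/13 is an assumption taken from the MatrixMath code linked above:

// Decompose a flat 16-element matrix of the restricted form used here into primitives.
function decompose2d(m: number[]) {
  const a = m[0];
  const b = m[1];
  return {
    translateX: m[12],
    translateY: m[13],
    scale: Math.hypot(a, b),   // valid because both axes are scaled by the same factor
    radians: Math.atan2(b, a), // rotation angle
  };
}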
I think the easiest might be to fork both reanimated and react-native-gesture-handler and either add matrix support or make a quick proof of concept of a tailor-made API for this specific use case. I'm too busy with work atm to put much effort into it right now, but it would be a fun thing to explore, and I might do it to relax from work at some point.
An alternative API would be to add something declarative to reanimated for flattenOffset / accumulating transform matrices; the same thing applies there: take the list of current transforms, compose them, swap with the current accumulated one, and set the transforms / animated values to identity. It's probably much easier to implement that in native logic than to implement the matrix multiplication and decomposition logic using the reanimated syntax as is.
"might do it to relax from work at some point." π€£
I agree, happy that we are on the same page. This is a great use case for the upcoming improvements in reanimated for instance. And for now, the tailor-made solution is definitely a fun puzzle (that might not be that hard actually).
Thank you for your support and I will keep you posted. These things are hard to leave at rest ;-)
Yeah, it can be quite stimulating to think about 🙂 Nice to change the focus of attention for a while, making something relatively well-specified, and a useful pattern easier to achieve, with a reasonably short time to finish. Btw, it seems reanimated might support matrices? https://github.com/software-mansion/react-native-reanimated/pull/110#issuecomment-426084810
@raffaeler I suspect it's best not to do any extra animation in the gestures, and to get to the final resting / rendered output asap. But when you search for / want to show a location, some kind of fly-to algorithm probably makes sense: https://github.com/mapbox/mapbox-gl-js/search?q=flyto&unscoped_q=flyto
// This method implements an "optimal path" animation, as detailed in:
//
// Van Wijk, Jarke J.; Nuij, Wim A. A. "Smooth and efficient zooming and panning." INFOVIS
// '03. pp. 15–22. <https://www.win.tue.nl/~vanwijk/zoompan.pdf#page=5>.
* @param {number} [options.curve=1.42] The zooming "curve" that will occur along the
* flight path. A high value maximizes zooming for an exaggerated animation, while a low
* value minimizes zooming for an effect closer to {@link Map#easeTo}. 1.42 is the average
* value selected by participants in the user study discussed in
* [van Wijk (2003)](https://www.win.tue.nl/~vanwijk/zoompan.pdf). A value of
* `Math.pow(6, 0.25)` would be equivalent to the root mean squared average velocity. A
* value of 1 would produce a circular motion.
I came up with another way to think about the transformation: essentially it's enough to define how much the initial coordinate system has been rotated, scaled and then translated. Because the two dimensions aren't scaled independently, and no skew is applied, it's enough to define how a single unit vector has been transformed, e.g. the unit vector from the origin (0, 0) to 1 unit along the x axis, (1, 0).
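A sketch of that idea (the helper name is mine): sample where the transform sends the origin and the unit x vector, and read the primitives off the difference.

type Point = { x: number; y: number };

// Recover translation, uniform scale and rotation from how a transform maps
// the origin and the unit x vector (assumes no skew and equal scaling on both axes).
function fromUnitVector(apply: (p: Point) => Point) {
  const o = apply({ x: 0, y: 0 }); // image of the origin = translation part
  const u = apply({ x: 1, y: 0 }); // image of the unit x vector
  const dx = u.x - o.x;
  const dy = u.y - o.y;
  return {
    translateX: o.x,
    translateY: o.y,
    scale: Math.hypot(dx, dy),
    radians: Math.atan2(dy, dx),
  };
}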
I thought about it, but the problem with reanimated is that you can't read the values at the end of the gesture. They are totally opaque (the __value._val can be read only in debug mode). So, once you finish the first gesture, that's the end of it.
@msand @wcandillon
I finally did it ... the code is still a bit dirty, but at least the behavior is correct and the perf with a simple Svg is absolutely good ... I have to see what happens with complex drawings.
I ended up keeping the transforms separate. BTW, beforeCenter and afterCenter have equal values but opposite signs. On gesture release, I calculate the matrixState and reset the other values to their defaults.
This way, the gestures can be accumulated.
{ translateX: this.state.beforeCenter.x },
{ translateY: this.state.beforeCenter.y },
{ rotate: this.state.rotate },
{ scale: this.state.scale },
{ translateX: this.state.afterCenter.x },
{ translateY: this.state.afterCenter.y },
{ translateX: this.state.pan.x },
{ translateY: this.state.pan.y },
{ matrix: this.state.matrixState },
I have to keep those values twice: once as Animated values and once as raw numbers, because I need to read the values to calculate the matrix at the end of the gesture:
interface GestureViewState {
  beforeCenter: Animated.ValueXY;
  rotate: Animated.Value;
  scale: Animated.Value;
  afterCenter: Animated.ValueXY;
  pan: Animated.ValueXY;
  valueBeforeCenter: IPoint;
  valueRotate: number;
  valueScale: number;
  valueAfterCenter: IPoint;
  valuePan: IPoint;
  isPanOnly: boolean;
  initialPoint?: IPoint;
  initialPinch?: IPinch;
  matrixState: number[];
}
Finally, I had to use a ref (that is squiggling in red in the editor, damn) to retrieve the view size using measure, otherwise everything is slightly offset. I don't know if there is a better way to get the size of the current client area, but relying on the window size is definitely wrong.
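Concretely, the ref + measure approach I mean looks roughly like this (a sketch; the function and callback names are made up, and measure is asynchronous, so the size arrives in a callback):

import { RefObject } from 'react';
import { findNodeHandle, UIManager, View } from 'react-native';

// Measure the rendered size and absolute position of a view, so the pinch center
// can be expressed in view-local coordinates instead of window coordinates.
function measureClientArea(
  viewRef: RefObject<View>,
  onMeasured: (width: number, height: number, pageX: number, pageY: number) => void,
) {
  const node = findNodeHandle(viewRef.current);
  if (node != null) {
    UIManager.measure(node, (_x, _y, width, height, pageX, pageY) => {
      onMeasured(width, height, pageX, pageY);
    });
  }
}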
This test is still raw, as I haven't enforced any constraints yet and still have to decide:
@raffaeler I tried a similar approach and it works well indeed. Now I am trying to stay on the UI thread. I'm struggling with the matrix calculation. Considering the following transformation:
transform: ([
{ translateX: px },
{ translateY: py },
{ translateX: ox },
{ translateY: oy },
{ scale: s },
{ translateX: -ox },
{ translateY: -oy }
])
I'm expecting the matrix transformation via processTransform() to be:
Result | | |
---|---|---|---
s | 0 | 0 | 0
0 | s | 0 | 0
0 | 0 | 1 | 0
(px + ox) * s - ox | (py + oy) * s - oy | 0 | 1
The correct result seems to be (which also intuitively makes sense but I'm not able to get there via the matrix multiplication):
Result | | |
---|---|---|---
s | 0 | 0 | 0
0 | s | 0 | 0
0 | 0 | 1 | 0
px + ox - ox * s | py + oy - oy * s | 0 | 1
But this is not the result I'm getting. I am probably making a trivial mistake here right?
After that, decomposition gives us: translate(px + ox - ox * s, py + oy - oy * s) and scale(s). Which you can input as
[
{ translate: [px + ox - ox * s, py + oy - oy * s] },
{ scale: s }
]
What confuses me about the last part is that the order of translate and scale matters, but it is not specified by the decomposition algorithm.
There are a few things there:
My (still not optimized) version of the transformations are:
var temp = MatrixMath.createIdentityMatrix();
var a1 = MatrixMath.createTranslate2d(this.state.valueBeforeCenter.x, this.state.valueBeforeCenter.y);
var a2 = MatrixMath.createRotateZ(this.state.valueRotate);
var a3 = MatrixMath.createScale(this.state.valueScale);
var a4 = MatrixMath.createTranslate2d(
this.state.valueAfterCenter.x + state.dx,
this.state.valueAfterCenter.y + state.dy);
MatrixMath.multiplyInto(temp, temp, a1);
MatrixMath.multiplyInto(temp, temp, a2);
MatrixMath.multiplyInto(temp, temp, a3);
MatrixMath.multiplyInto(temp, temp, a4);
MatrixMath.multiplyInto(temp, temp, this.state.matrixState);
where:
- valueBeforeCenter is the positive transform (which includes o and p in your case)
- valueRotate and valueScale are of course the rotation in radians and the scaling
- valueAfterCenter is exactly the same as valueBeforeCenter but with the opposite sign
- dx and dy are the final amount of the pan operation
- matrixState is the previous state of my matrix, which will be overwritten by temp.

HTH
@raffaeler the transform I wrote down corresponds to the exact gesture/animation I am trying to achieve. I am surprised to see that the order of the transformations in your example is reversed compared with mine. 🤔 Other than that, everything else looks identical.
Regardless, my goal is to go back to translateOffset and scaleOffset values when releasing the gesture. So first, I wanted to calculate the matrix by hand to make sure that I have some sort of grip on what is going on and work my way back. However, I am not able to get to the same result by hand; is there any chance you could point me to the mistake I'm making when multiplying the matrices manually?
The order is important; you have to think in reverse, because conceptually you move the axis origin, not your drawing.
The result you posted has the numbers in the last row, but they should be in the last column instead. Try putting the symbols here (with parentheses as well) and look at the result.
Yeah, at least I'm used to having the translations in the last column as well, referring to a constant unit vector in a direction orthogonal to the other two or three. The decomposition seems to make sense otherwise, e.g.
([
{ translateX: px },
{ translateY: py },
{ translateX: ox },
{ translateY: oy },
{ scale: s },
{ translateX: -ox },
{ translateY: -oy }
])
= Px Py Ox Oy S Ox^-1 Oy^-1
= P O S O^-1
= T S O^-1
[ 1 0 (px + ox) ]   [ s 0 0 ]   [ 1 0 -ox ]   [ 1 0 (px + ox) ]   [ s 0 (-ox * s) ]
[ 0 1 (py + oy) ] * [ 0 s 0 ] * [ 0 1 -oy ] = [ 0 1 (py + oy) ] * [ 0 s (-oy * s) ]
[ 0 0 1         ]   [ 0 0 1 ]   [ 0 0  1  ]   [ 0 0 1         ]   [ 0 0 1         ]

  [ s 0 (px + ox - ox * s) ]
= [ 0 s (py + oy - oy * s) ]
  [ 0 0 1                  ]
Think of it this way: if you first apply translation and then scaling, i.e. ST, then the offset gets scaled by that amount; if you first scale and then translate, i.e. TS, then you scale the space and only then offset. The only difference is a scaling of the translation, in the case of a single pair of these two primitives.
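As a concrete example, with s = 2 and a translation of 10 along x: TS gives x' = 2x + 10 (scale the space, then offset), while ST gives x' = 2(x + 10) = 2x + 20, i.e. the offset itself gets scaled by s.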
In this specific case, to scale about some pinch center point, you first need to move that point to the origin, i.e. -ox and -oy = O^-1, then scale about that origin, and then move that origin such that it is in the position where it was on screen before the initial translation, by adding the offsets ox and oy = O
And with regards to what to do when the number of active pointers changes: consider it equivalent to the gesture completely ending and a completely new one starting. Accumulate the state, and set the diff / delta to identity.
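In pseudo-TypeScript the handler could look something like this (all names are illustrative, not a specific gesture-handler API; accumulateGesture is the sketch from earlier in the thread):

interface GestureState {
  accumulated: number[];    // matrix of all finished gestures
  tx: number; ty: number;   // current gesture pan
  scale: number;
  radians: number;
  ox: number; oy: number;   // current gesture origin
}

// Treat any change in the number of active pointers as "gesture ended, new one starts".
function onActivePointersChanged(pointers: { x: number; y: number }[], state: GestureState) {
  // 1. Fold the finished gesture into the accumulated matrix.
  state.accumulated = accumulateGesture(
    state.accumulated, state.tx, state.ty, state.scale, state.radians, state.ox, state.oy,
  );
  // 2. Reset the per-gesture delta to identity.
  state.tx = 0; state.ty = 0; state.scale = 1; state.radians = 0;
  // 3. Re-capture the origin from the remaining pointers (and, with two pointers,
  //    also the initial distance and angle as described at the top of the thread).
  if (pointers.length >= 2) {
    state.ox = (pointers[0].x + pointers[1].x) / 2;
    state.oy = (pointers[0].y + pointers[1].y) / 2;
  } else if (pointers.length === 1) {
    state.ox = pointers[0].x;
    state.oy = pointers[0].y;
  }
}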
@msand 💯 and the proof of concept using setState/setNativeProps for the accumulated matrix works well. Now I'm trying to build something that doesn't involve the JS thread.
The transformation is:
{
transform: [
// accumulated transformation
...translate(offset),
{ scale: scaleOffset },
// transformation done by the gesture
...translate(tr),
{ scale }
]
}
The first time you move the gesture, everything works beautifully: focus, translation, scaling. Now I'm trying to set offset to the correct value when the gesture ends.
cond(eq(state, State.END), [
// store offset
vec.set(offset, vec.add(offset, /* ...? */)),
set(scaleOffset, multiply(scaleOffset, scale)),
// reset values
set(scale, 1),
vec.set(tr, 0),
])
I have a few questions based on your comment.
- The O^-1 notation: what does it mean?
- The processTransform implementation. Even though this is clearly what happens based on the results given by this function.
- translate and scale: how do you know in which order you would need to apply these transformations?

O^-1 is just ascii notation for exponentiation "^" of the matrix O using the scalar / real number "-1" as the exponent, and corresponds to the inverse element / inverted matrix, such that O^1 * O^-1 = O^(1-1) = O^0 = I, where I is the identity matrix.
As matrix multiplication is associative, it doesn't matter in which order you do it. I did TSO = TM = F, where M is the intermediate matrix I wrote out and F is the final one, but you can also do TSO = NO = F, and it'll give the same result, ((AB)C) === (A(BC)).
It corresponds to a single matrix, and the matrix F gets multiplied with the vectors v such that the resulting vector is v' = Fv, so you can probably assume from this that the translation is applied independently from the scaling / the scaling has already been accounted for:
[ s 0 (px + ox - ox * s) ] [ x ]   [ s 0 tx ] [ x ]   [ x' ]   [ s*x + tx ]
[ 0 s (py + oy - oy * s) ] [ y ] = [ 0 s ty ] [ y ] = [ y' ] = [ s*y + ty ]
[ 0 0 1                  ] [ 1 ]   [ 0 0 1  ] [ 1 ]   [ 1  ]   [ 1        ]
To clarify, that it's associative only means you don't have to write out parentheses to make an unambiguous statement/expression/equation in the language of matrix multiplications. This is true for multiplication in the division algebras as well, except the octonions, i.e. for the reals, complex numbers, and quaternions. The real and complex algebras commute, but quaternions lose that property similarly to matrices, and octonions aren't even associative; sedenions aren't even a division algebra, so one can't even talk about an inverse / negative exponent. Quaternions are a good fit for 3d transforms, i.e. rotation, translation, scaling (and usable for 2d as well).
Also, it's fully possible that processTransform produces column-major 1d array representations of the matrices; in that case the way you wrote it out makes sense, just transposed: https://en.wikipedia.org/wiki/Row-_and_column-major_order
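For reference, the flat 16-element layout used by MatrixMath (and by the function posted further down) keeps the translation at indices 12 and 13; whether you read the array as column-major with column vectors or row-major with row vectors is, roughly speaking, exactly the transposition being discussed.

// Flat 4x4 layout, four entries at a time:
//   [ 0.. 3]  first column / row
//   [ 4.. 7]  second column / row
//   [ 8..11]  third column / row
//   [12..15]  tx, ty, tz, 1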
And btw, if you don't treat a change in the number of active pointers as the end/start of a gesture, you get the issue I still haven't fixed in https://iws.nu/ http://infinitewhiteboard.com/ Try pinching and then releasing one of the fingers (either the first or the second one you put on the screen), and move around, and it'll feel awkward for sure ;)
Thank you @msand ❤️
Yes, I noticed that with gesture-handler you definitely have to check the number of active pointers. I'm getting close with the tailor-made solution: https://www.dropbox.com/s/hdlb2mefk988dc5/t1.mp4?dl=0 The math is still not 100% correct, as there are a lot of moving pieces and I need to check my code (this is why the accumulated matrix is so nice for such a scenario).
And transposing an expression switches the order of operations, i.e. property 3 https://en.wikipedia.org/wiki/Transpose#Properties
@wcandillon @msand This is my optimized function in typescript to create a matrix that includes all the possible transformations done at once:
// Creates a matrix equivalent to the multiplication of the following matrices:
// - translating the axis to the center of the scale/rotation
// - rotate is in radians
// - scale (multiplier, therefore 1 does not scale)
// - translating back the axis to the origin (opposite sign of the initial translation)
// - final translation (pan occurring when dragging both fingers while rotating and/or scaling)
// This matrix needs to be multiplied by the previous state when accumulating gestures over time.
// For example:
// var temp = this.createRotateScaleMatrix(this.state.valueRotate, this.state.valueScale,
// this.state.valueAfterCenter, { x: state.dx, y: state.dy});
// MatrixMath.multiplyInto(temp, temp, this.state.matrixState);
//
// Equivalent code, computed using the MatrixMath support available in React Native:
// var temp = MatrixMath.createTranslate2d(center.x, center.y);
// var a2 = MatrixMath.createRotateZ(this.state.valueRotate);
// var a3 = MatrixMath.createScale(this.state.valueScale);
// var a4 = MatrixMath.createTranslate2d(-center.x + state.dx, -center.y + state.dy);
// MatrixMath.multiplyInto(temp, temp, a2);
// MatrixMath.multiplyInto(temp, temp, a3);
// MatrixMath.multiplyInto(temp, temp, a4);
// MatrixMath.multiplyInto(temp, temp, this.state.matrixState);
createRotateScaleMatrix(rotate: number, scale: number, center: IPoint, finalPan: IPoint) : number[] {
var beforeX = -center.x;
var beforeY = -center.y;
var afterX = center.x;
var afterY = center.y;
var cost = Math.cos(rotate);
var sint = Math.sin(rotate);
// The matrix goes by column, as expected by React Native
var temp : number[] = new Array(16);
temp[0] = cost*scale;
temp[1] = sint*scale;
temp[2] = 0;
temp[3] = 0;
temp[4] = -sint*scale;
temp[5] = cost*scale;
temp[6] = 0;
temp[7] = 0;
temp[8] = 0;
temp[9] = 0;
temp[10] = 1;
temp[11] = 0;
temp[12] = beforeX + cost*scale*(afterX+finalPan.x) - sint*scale*(afterY+finalPan.y);
temp[13] = beforeY + sint*scale*(afterX+finalPan.x) + cost*scale*(afterY+finalPan.y);
temp[14] = 0;
temp[15] = 1;
return temp;
}
Nice 👏 Meanwhile I've built a tailor-made solution that doesn't run JS calls when the gesture ends: https://gist.github.com/wcandillon/6d1367528771ecd5257f5de655387c10 It seems to be working pretty well, I'd love to have your feedback on it.
I didn't test it, but it looks very neat :) BTW, you said that react-native-gesture-handler allows single matrix parameters to be Animated Values, right? So I could migrate my sample to use react-native-gesture-handler as well.
Do you guys know how to obtain the client size of a View/control without using findNodeHandle and then measure? Is there a better way?
@raffaeler As far as I investigated, it is not possible. It looks like there is a PR opened for it but it wasn't merged. However, they are currently working hard on the next version of reanimated and the example you have built is a great use-case to motivate support of matrices in the next version.
These limitations (both in React Native and in Reanimated) really surprise me, because these features were always available in all the other UI technologies I have ever used. I am currently using findNodeHandle to find the exact coordinates of the center of the pinch. Neither window nor screen can be used, because they are offset (depending on the device, of course). Thank you anyway.
@raffaeler As far as I know, they are working on an exciting new version. And indeed this seems to be the standard approach in other systems (Flutter for instance).
In my example, you can see how I adjust for the origin of the pinch; the default origin (when you're zooming from the middle of the view) is simply add(CENTER, offset).
const defaultOrigin = vec.add(CENTER, offset);
const adjustedFocal = vec.sub(focal, defaultOrigin);
For simpler transformations, I do find the transform API from React Native much simpler and elegant than other commonly found transform APIs (just a matter of taste I guess).
Well, we will see... in certain cases you have to deal with the basic primitives for various reasons. The different coordinate system in the Svg may require it in certain cases.
BTW, another thing to support is device rotation. This implies multiplying the state matrix in order to invert the coordinates. I am working on it.
The matrix for the device orientation implies to:
How can I distinguish the direction of the rotation in React Native? @msand any idea?
Thanks to your tremendous support I was able to get this example out today: https://t.co/QPdJrqmZua?amp=1
@raffaeler I'd love your feedback on this to make sure I didn't overlook anything.
Thank you guys ❤️
Cool, but I would have underlined the power of OSS and the collaboration that emerged from this thread. When I talk at conferences, I often underline this awesome fact, because I want more people to participate in communities and to share code solutions. And as a community leader, I often push my local community on this.
I always do, but this is not the final content I am working on. I wanted to get this video out as an intermediate step, therefore this hasn't come up yet.
@raffaeler I am now thinking that we could provide a utility function that corresponds to the original request you have made. Something that would look like:
const {translation, scale, rotate} = getAccumulatedTransform([
...translate(pinch),
...transformOrigin(origin, { scale })
]);
//...
vec.set(translationOffset, translation)
vec.set(scaleOffset, scale)
vec.set(rotateOffset, rotate)
What getAccumulatedTransform() does is the matrix multiplication/decomposition, but done in Reanimated. That way, I wouldn't have to do this calculation manually like I did in the video.
What do you think?
@wcandillon If I understand your proposal correctly, the accumulated transform should include the "translate back" and the optional additional pan (translate) at the end of the three transforms you already mentioned. The code I posted in a comment above already computes it, and it looks to me that the advantage of moving it to Java (or ObjC) is tiny.
I am not familiar with the underlying engine of React Native, but, for example, V8 compiles JavaScript to native assembly. For a simple algorithm based on floating point, this compilation is pretty efficient, while other types of algorithms may suffer a lot.
The loss of performance during animations in standard react-native is mostly due to the transitions (posting JSON messages) to the native side and back, rather than to a bunch of sums and muls.
Going back to my initial question, the point was how to specify a matrix, not how to calculate it (sorry if I was not clear). It would be sufficient, IMO, that reanimated supported a type like ValueNumberArray allowing the entire matrix to be passed to the java native engine to be used as a transform.
Please let me know if I was not clear enough
Since the computed matrix is used to represent the state of the previous gestures, it needs to be modified (i.e. communicated to Java) only at the end of a gesture. When a new gesture begins, the matrix is constant, while the new parameters deriving from the current gesture are separately bound to reanimated (as it already works now).
Another useful thing in reanimated would be the ability to read the Values. This is needed because at the end of a gesture you need to compute the new matrix starting from the single reanimated Values representing translations, rotations and scaling. The same problem occurs in plain react-native right now; in fact, in my example I keep and update both the "Value" and plain numbers: the first are used in the JSX code, while the numbers are used to compute the matrix at the end of the gesture.
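For plain react-native Animated values there is at least one documented way to keep a readable copy in sync, addListener; a minimal sketch of the "keep both the Value and a plain number" idea:

import { Animated } from 'react-native';

// Keep a raw, synchronously readable shadow of an Animated.Value via its listener,
// so the matrix can be computed from plain numbers at the end of the gesture.
const scale = new Animated.Value(1);
let scaleRaw = 1;
const subscription = scale.addListener(({ value }) => {
  scaleRaw = value;
});
// ...when tearing down:
// scale.removeListener(subscription);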
I now have two new problems.
1. I start panning the view (either a picture or an Svg) to the right. Now I can make other transformations and it works. But if I click on a portion of the screen that was outside the initial view, the gestures are totally ignored. This is true even when the initial drawing (Svg) was larger than the view and, during the second gesture, I click on the portion of the Svg that initially was not visible. @msand how can I continue to have gestures as if the map were "infinite" (aka Google Maps like)?
2. When rotating the device, the transformations are already made by the react-native engine. If I sequentially rotate the view, translate it, then rotate the device, any further gesture has the incorrect center point. @wcandillon do you see this problem in your code?
Thank you
For 1., I always have, as a child of the gesture handler, a view that is never moving (absoluteFill).
Thank you @wcandillon
For 1., I always have, as a child of the gesture handler, a view that is never moving (absoluteFill).
I still am not that familiar with react-native, but I understood what you mean, sounds perfect.
For 2., it would be an issue in my use case as well. I would create a vector for the screen dimensions, listen to the dimension change, and set the new values from the JS thread with .setValue().
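A rough sketch of that (the value names are illustrative), listening for dimension changes and pushing the new values in with .setValue():

import { Dimensions } from 'react-native';
import Animated from 'react-native-reanimated';

// Animated values holding the current window size, updated on rotation / resize.
const screenSize = {
  width: new Animated.Value(Dimensions.get('window').width),
  height: new Animated.Value(Dimensions.get('window').height),
};

Dimensions.addEventListener('change', ({ window }) => {
  screenSize.width.setValue(window.width);
  screenSize.height.setValue(window.height);
});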
I am currently listening to DeviceEventEmitter.addListener('namedOrientationDidChange', ... to receive the orientation changes.
Then I printed to the console the view measured using UIManager.measure, as well as the window and screen sizes retrieved using Dimensions. Apparently none of them is of help.
Initially I thought the culprit was the header bar on the screen, but it is always present and the same size in both portrait and landscape mode.
Since the device already repositions the origin to the upper left, there is no need to do it manually. But there is still a small offset that I don't understand.
Question
I have to draw a custom map supporting translations, rotations and zooming. The transform of every gesture is always additive with respect to the previous gestures. From a math perspective this simply means multiplying by the new transform and proceeding to the next.
The transform style in React Native apparently does not support matrices (and this is very surprising to me). How can I work with matrices in react-native-svg in order to accumulate the transformations?
Thank you!