Closed rreusser closed 8 years ago
I like the idea of getting rid of two callbacks and replacing them with one, containing all meaningful information. That would allow for #1 in a graceful way. Also that would allow for handling interactions in single place.
Gotta think through the convention of properties though. Mb use `dz` for zoom (like delta-zoom)? I don't imagine cases where real `dz`, `dsx`, `dsy`, `dsz` are used.
The use-case in my head as I was going through this was a Google Earth sort of interface. That means you basically need all the information about the interaction (rotation angle, change in rotation, translation, distance between touches, etc.), but it seems to make some sense to decouple what you actually do with that information. For example, the pinch gesture zooms, unless the fingers are horizontal and dragged vertically. Then they tilt. On desktop, you check for keys down and perhaps interpret a drag as a tilt. That's why the mouse wheel actually seemed like more of a pan to me. Interpreting that as a zoom seems like a separate step.
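That decoupling could be sketched roughly like this (the event shape, `touchAngle` field, and thresholds here are all assumptions for illustration, not this package's actual API):

```javascript
// Sketch: interpret a normalized gesture event as 'zoom', 'tilt', or 'pan'.
// The event fields (type, touchAngle, dx, dy, shiftKey) are hypothetical.
function interpretGesture (ev) {
  if (ev.type === 'pinch') {
    // Two roughly horizontal fingers dragged vertically → tilt, else zoom:
    var horizontal = Math.abs(ev.touchAngle) < 0.2;
    var vertical = Math.abs(ev.dy) > Math.abs(ev.dx);
    return horizontal && vertical ? 'tilt' : 'zoom';
  }
  if (ev.type === 'drag') {
    // On desktop, a modifier key could turn a drag into a tilt:
    return ev.shiftKey ? 'tilt' : 'pan';
  }
  // A wheel event arrives as a pan-like delta but gets read as zoom:
  if (ev.type === 'wheel') return 'zoom';
  return 'none';
}
```

The point being that the library only normalizes and reports; the mapping from gesture to camera motion stays in userland.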
Yeah, honestly, I wasn't immediately able to figure out what `deltaZ` is. My best guess was maybe some exotic input device they didn't want to rule out? See: https://w3c.github.io/uievents/#interface-wheelevent
The modules you've listed, and this module too, are outstanding! I've just always remained 20% unsatisfied that they mostly prescribe a particular form of interaction-view coupling.
Mb something like this?

```js
{
  // mouse, touch, keyboard
  type,
  // center coords
  x, y,
  // drag deltas
  dx, dy,
  // wheel delta (as far as the spec leaves that to implementors)
  dz,
  // rotation and its delta (is there a more natural name than theta?)
  theta, dtheta,
  // original event
  event: Event
}
```
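In userland, a handler consuming that single event might apply the deltas to a simple view state, something like this (a sketch; the mapping from `dz` to a zoom factor, and the zoom-about-cursor math, are assumptions):

```javascript
// Sketch: fold one unified event into a 2D view {x, y, scale}.
// Field names follow the proposed event shape above.
function applyEvent (view, e) {
  // Pan by the drag deltas:
  view.x += e.dx;
  view.y += e.dy;
  // Treat the wheel delta as an exponential zoom about (e.x, e.y);
  // the 0.001 sensitivity is arbitrary:
  var factor = Math.exp(-e.dz * 0.001);
  view.scale *= factor;
  view.x = e.x + (view.x - e.x) * factor;
  view.y = e.y + (view.y - e.y) * factor;
  return view;
}
```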
Potential drawback is for users with trackballs/two wheels, where x scroll is ignored. Oh, I see your point. We need scale deltas for x and y, that is not only wheel. Though mb still make it an opinionated solution? Are there cases where a horizontal pinch/x-wheel is realistic?
:+1:
I like it! I can also think of:
```js
// where the interaction began (start of drag/pinch/pan/etc):
x0, y0,
// distance between fingers in a pinch gesture (apple maps, for example,
// only tilts if your fingers are *close together* -- which I think is weird,
// but I guess should be possible):
dist,
// initial distance between fingers in the pinch gesture:
dist0,
// relative scale change as a result of the pinch gesture:
zoom
```
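With those fields, the relative zoom is just the ratio of current to initial finger distance. A minimal sketch of deriving them from two touch points (the function name and shape are hypothetical):

```javascript
// Sketch: derive pinch state from two touch points.
// dist0 would be captured once, when the gesture starts.
function pinchState (t0, t1, dist0) {
  var dx = t1.x - t0.x;
  var dy = t1.y - t0.y;
  var dist = Math.sqrt(dx * dx + dy * dy);
  return {
    x: (t0.x + t1.x) / 2,  // gesture center
    y: (t0.y + t1.y) / 2,
    dist: dist,
    dist0: dist0,
    zoom: dist / dist0     // relative scale change since gesture start
  };
}
```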
Maybe storing the initial values should be accomplished in userland when the interaction starts, but it's a common enough pattern that it seemed maybe includable. But I have no strong preference.
As for `theta`, maybe just `rotation` and `drotation`? That's not un-weird either.
Long story short, this was just an idea I thought I'd bring up since you were working hard on this. It's one place where I've never found a module that solves the difficult part in a general enough way that I don't often have to back up and do it over myself, shabbily. Feel free to use or discard any of the thoughts!
Ok, I have some use-cases coming, I will try a second take on it. Appreciate your feedback!
Cool!
Gosh your example looks great… so smooth… 😄
Ok, since this package is called pan-zoom, not pan-zoom-rotate, and rotating is mostly for touch-enabled devices, I think rotation is better left to the touch-pinch package; it is pretty trivial to implement with that. In @2.0.0 I reduced the API significantly: less code, fewer calculations, more opinionated UX. Thanks for the incentive!
Looks outstanding! Yeah, it didn't feel great to cram touch handling into the same code, so maybe the answer is indeed a separate, analogous package for that.
Demo looks great! Is it possible or reasonable to add rotation into the mix? One thought I had was to normalize the output of all events (touch, pan, zoom, pinch) along with information about the sort of interaction that caused it. I didn't have time to write it nicely so my code is kinda junky, but here's the info I passed back: https://github.com/rreusser/interaction-events/blob/master/index.js#L247-L259
For example, with annotations:
So for example, mouse wheel returns a transformation as a pan, but with information that it was a mouse wheel event so that you can interpret it as a zoom. Performing the transformation itself is then left to the user. For transforming the viewport, I used something like this matrix. Inertia would also then be a matter of decaying the transformation values exponentially towards the identity transformation.
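That inertia idea amounts to exponentially decaying the per-frame transform deltas toward the identity transform. A sketch, assuming per-frame deltas in the shape proposed earlier (the field names and decay rate are illustrative, not this package's API):

```javascript
// Sketch: decay per-frame transform deltas toward the identity
// transform (zero pan, zero rotation, unit zoom) each animation frame.
function decay (delta, rate) {
  delta.dx *= rate;
  delta.dy *= rate;
  delta.dtheta *= rate;
  // Zoom is multiplicative, so it decays toward 1 rather than 0:
  delta.zoom = 1 + (delta.zoom - 1) * rate;
  return delta;
}
```

Applying the decayed delta each frame until it is negligibly close to identity gives the coasting effect.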
Anyway, just thoughts. Your demo looks great (love the inertia!). My example was the best I could come up with, so I thought I'd share maybe what came of it in my head, at least 😄