w3c / pointerevents

Pointer Events
https://w3c.github.io/pointerevents/

Pointermove should not require a hit-test by default for touch #8

Closed: RByers closed this 8 years ago

RByers commented 9 years ago

From https://lists.w3.org/Archives/Public/public-pointer-events/2015JanMar/0041.html:

One of the main concerns I cited with our decision not to implement is that the uncaptured-by-default model for touch input places a performance burden on the engine. Since the dominant input APIs on touch devices (Touch Events, Android and iOS native APIs) don’t have this property, we are unwilling to adopt an API with this disadvantage, no matter how small. In addition to the performance concerns, we believe an implicit-capture model encourages a style of UX we feel is most appropriate for direct-manipulation input devices - where, by default, the user is manipulating a specific object, not touching a layer of glass above all objects.

I feel the simplest way to address these concerns would be to make the following two changes to the API. First, allow an implementation to implicitly capture touch (and perhaps pen) input to the target node on pointerdown (note that the spec already technically permits the browser to choose to capture implicitly, since IE does this for button elements, for example). Developers can still use the explicit capture APIs to recapture elsewhere (as is often necessary to achieve the desired UX), or return to uncaptured events. Second, add an optional parameter (false if unspecified) to setPointerCapture which indicates whether pointerover/out/enter/leave events are desired during capture. When false, an implementation can avoid hit-testing for every move during capture.
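
To make the second change concrete, here is a rough sketch of what the opt-in could look like from a developer's point of view; the extra argument is only the proposal above, not a shipped API, and the parameter name, the `slider` element, and `updateSliderPosition` are placeholders:

```js
// Hypothetical shape of the proposed opt-in (not a shipped API).
slider.addEventListener('pointerdown', (e) => {
  // Explicitly capture, but decline over/out/enter/leave during capture,
  // so the engine can skip the per-move hit-test for this drag.
  slider.setPointerCapture(e.pointerId, /* wantOverOut = */ false);
});

slider.addEventListener('pointermove', (e) => {
  // While captured with boundary events declined, each move is delivered
  // straight to the slider without a hit-test.
  updateSliderPosition(e.clientX);
});
```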

Of course these are breaking changes, and it’s likely they could be a significant compatibility problem for existing websites. I’m interested in implementing this model behind a flag and collecting data about the compatibility implications. Perhaps it’s possible that Chrome could ship this with acceptable compat pain and move the web to using explicit APIs to indicate developer intent. Then other PE implementations could update without much trouble. I’m also willing to explore other ideas for mitigating the compatibility risk within our requirement that developers will be unlikely to accidentally incur per-move hit-test costs without explicitly opting into them.

scottgonzalez commented 9 years ago

This would only apply while a pointing device is down, correct? In the case of touch, you don't get move events unless there's a down action, so this would cover the existing Touch Events API scenarios. However, when using a mouse, or hovering with a stylus, you need hit tests to determine enters and leaves.

patrickhlauke commented 9 years ago

I think it should also apply to mouse (when a mouse button is pressed) and stylus (when a button is pressed/it touches the screen), as otherwise you'd end up with automagic pointer capture/different pointermove behavior based on input device, which would make writing input-agnostic code more challenging...

scottgonzalez commented 9 years ago

Yes, it would apply to all pointing devices. I was just clarifying that this is only the case when the device is "down." When the device is "up" we will default to a hit test on every move so that hover works correctly.

RByers commented 9 years ago

Right, sorry, I should have clarified - what I really care about is touch dragging (since that's where direct-manipulation style UIs are most natural, and it's also the case that tends to matter most for performance). I agree it would make the most sense for stylus and mouse to be consistent with touch, but if necessary for compatibility I could be convinced to give that up (and just encourage developers to be explicit about whether they want capture or not, to avoid confusion).

scottgonzalez commented 9 years ago

It should definitely be consistent. I think a more accurate title would be something like "pointerdown should apply an implicit capture."

RByers commented 9 years ago

Unfortunately there's a little more to a solution than an implicit capture (even when captured, hit tests are required to send the pointerover/pointerout events).

patrickhlauke commented 9 years ago

So are these potentially two separate issues? An implicit capture, and then a change to pointerover/pointerout when captured (implicitly or explicitly)?

scottgonzalez commented 9 years ago

I recall discussions about wanting the over and out events during capture so that you can implement a button that captures but changes state when the pointer moves out of it. This is generally how standard buttons work today.
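
A minimal sketch of that button pattern, assuming over/out events keep firing on the capturing element while capture is held (which is precisely the behavior whose hit-test cost is being debated); the `.my-button` element name is a placeholder:

```js
const button = document.querySelector('.my-button');

button.addEventListener('pointerdown', (e) => {
  button.setPointerCapture(e.pointerId);   // further events retarget to the button
  button.classList.add('pressed');
});

// These two listeners are the part that depends on over/out being delivered
// during capture: they track whether the pointer is currently over the
// button so it can visually un-press and re-press.
button.addEventListener('pointerout', () => button.classList.remove('pressed'));
button.addEventListener('pointerover', (e) => {
  if (button.hasPointerCapture(e.pointerId)) button.classList.add('pressed');
});

button.addEventListener('pointerup', (e) => {
  const releasedOverButton = button.classList.contains('pressed');
  button.classList.remove('pressed');
  if (releasedOverButton) {
    // perform the button's action only when released while still over it
  }
});
```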

scottgonzalez commented 9 years ago

So I guess the implicit capture needs to be defined differently than the explicit capture.

RByers commented 9 years ago

Yes, that's the direction I'd like to try (or maybe instead of treating implicit capture differently, setPointerCapture lets you specify whether you want over/out and implicit capture never asks for over/out). But this might be too breaking, and there are other possible solutions (I believe Jacob has some ideas of his own). So we probably want to keep this issue described in terms of the outcome we want to achieve, not how to achieve it (because we won't know that until we've done extensive prototyping and compat testing).

RByers commented 9 years ago

If we were to make all pointer events implicitly captured, what would we do about the compatibility mouse events? Today mouse events are (almost) always delivered to the same node as the pointer events. I think it would be problematic to break that. But it would probably be a huge breaking change for the web if mouse events suddenly became implicitly captured.

Perhaps the only pragmatic way out of this mess is to say whether or not an input device implicitly captures is a property of that device, perhaps exposed explicitly on InputDevice. That doesn't necessarily seem terrible to me; when the difference really matters, developers can always explicitly indicate their intent with the capture APIs. I also don't think it's entirely unreasonable to say that direct manipulation input devices should be implicitly captured, while indirect ones may not be.
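
The InputDevice property floated here is only a suggestion, but the fallback of being explicit is already available: a page can branch on the event's `pointerType` and capture only for direct-manipulation devices. A rough sketch, with `target` and `beginDrag` as placeholder names:

```js
target.addEventListener('pointerdown', (e) => {
  // Opt direct-manipulation devices into capture explicitly; leave the
  // mouse uncaptured so its hover and boundary behavior is unchanged.
  if (e.pointerType === 'touch' || e.pointerType === 'pen') {
    target.setPointerCapture(e.pointerId);
  }
  beginDrag(e);
});
// Capture, when taken, is released automatically after pointerup/pointercancel.
```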

patrickhlauke commented 9 years ago

Perhaps the only pragmatic way out of this mess is to say whether or not an input device implicitly captures is a property of that device

this seems appropriate to me as well. touch could have implicit capture, while pen (because of hovering stylus issues) and mouse would require explicit capture?

RByers commented 9 years ago

touch could have implicit capture, while pen (because of hovering stylus issues) and mouse would require explicit capture?

touch and mouse, yes. I'm less sure about pen. Technically it's a direct-manipulation input device. In Chromium we're expecting to have two very different types of pen support. On Android pen will continue to be 'touch-like' (dragging scrolls and fires touch events), while on Windows it'll be 'mouse-like' (dragging selects text and fires mouse events). Perhaps the capture behavior should be coupled to which type of compatibility events are generated?

patrickhlauke commented 9 years ago

maybe philosophical, but is a stylus still a direct manipulation input when it's hovering (which still fires certain events, on supported devices)? because at that stage, your movements in the air are indirectly moving a separate cursor drawn on the screen...

patrickhlauke commented 9 years ago

perhaps it should only implicitly capture once it makes contact with the surface, and require explicit capture otherwise? or is that getting too granular/magic?

dfleck commented 9 years ago

And what about an opaque tablet? Not all “tablets” are on-screen.

RByers commented 9 years ago

maybe philosophical, but is a stylus still a direct manipulation input when it's hovering (which still fires certain events, on supported devices)? because at that stage, your movements in the air are indirectly moving a separate cursor drawn on the screen... perhaps it should only implicitly capture once it makes contact with the surface, and require explicit capture otherwise? or is that getting too granular/magic?

Oh yeah I've always expected "implicit capture" to take effect on contact only. Personally I wish we had separate events for hover-move and drag-move, but it seems fine to me for hover-move to be never-captured pointermove (as spec'd today) and drag-move to be implicitly-captured (but explicitly re/un-capturable) pointermove.

Big picture: there's nothing to capture to in hover scenarios. I think that's orthogonal to the specific type of input device (eg. a hover-capable touchscreen would behave the same way).
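
There is no separate event type for the two kinds of move today, but a handler can approximate that split by checking the `buttons` bitmask, which is 0 for a hovering pointer and non-zero while in contact (or while a button is held). A small sketch; `canvas` and the two helper functions are placeholder names:

```js
canvas.addEventListener('pointermove', (e) => {
  if (e.buttons === 0) {
    // hover-move: no contact, so hit-testing and hover feedback still apply
    showHoverFeedback(e);
  } else {
    // drag-move: in contact, the case that implicit capture would make
    // cheap by skipping the per-move hit-test
    dragObject(e);
  }
});
```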

RByers commented 9 years ago

And what about an opaque tablet? Not all “tablets” are on-screen.

@dfleck, right those are definitely indirect manipulation.

I think the only possible justification for the stylus drag behavior (scrolling or text selection) is consistency with platform conventions. On Windows dragging an on-screen stylus doesn't scroll but on Android it does. So we should not try to over-specify this in the PE spec - it's a property of the underlying platform.

So perhaps we can just say that the platform conventions determine whether a pen is 'mouse-like' or 'touch-like', and then define the behaviors of those two models separately.

jacobrossi commented 9 years ago

I'm just spitballing, but I think it's probably possible (i.e., compatible) to have implicit capture for pen (in contact) while still having the varied platform behavior for default actions (Android: scrolling, Windows: selection). Or put another way, it's probably no more incompatible than making touch suddenly have implicit capture (which I'm still worried about, honestly).

RByers commented 8 years ago

@mustaqahmed just had a good suggestion. If we do implicit pointer capture in some cases, we should define it such that when an explicit setPointerCapture occurs during a pointerdown listener (the common capture case), there ends up being only a single gotpointercapture event (not a gotpointercapture for the implicit capture followed by a lostpointercapture and a second gotpointercapture for the explicit one).
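
In code terms, the common pattern is the one below; the suggestion is that even when the pointer was already implicitly captured to the same element, the explicit call inside the pointerdown listener should result in exactly one gotpointercapture, with no intervening lostpointercapture. `handle` and `startDrag` are placeholder names:

```js
handle.addEventListener('pointerdown', (e) => {
  // If implicit capture already targeted `handle`, this explicit request
  // should coalesce with it rather than re-capture.
  handle.setPointerCapture(e.pointerId);
});

handle.addEventListener('gotpointercapture', (e) => {
  // Should fire exactly once per pointerdown under the proposed rule.
  startDrag(e.pointerId);
});
```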

scottgonzalez commented 8 years ago

I hadn't thought through that scenario, but the suggestion makes sense and is definitely what I would have expected as a user of the API.

teddink commented 8 years ago

Makes perfect sense to me - having multiple events would be confusing and could cause other issues for interop if the implementations diverge for a time while we wait for ship vehicles and ship dates.

RByers commented 8 years ago

For reference, this automatic capturing behavior is defined for iOS here; in particular:

Note: A touch object is associated with its hit-test view for its lifetime, even if the touch later moves outside the view.

Android is more complex and less well documented. The best overview I've been able to find is here. Basically, when the first finger goes down, views can register their interest in the touch, including the ability to intercept future events (for that finger or additional fingers). Then movement events are sent only to the intercepting view, or to views which explicitly registered interest (i.e. hit-testing is typically only done on the first down). The most interesting parts of this logic are implemented in ViewGroup.dispatchTouchEvent. ScrollView takes advantage of intercepting so that, in the common case of scrolling, events are dispatched directly to that view.

RByers commented 8 years ago

Here's a summary of the argument and data I presented on this at the implementation hackathon:

We (Chrome team) feel apps being able to deliver reliable 60fps JS-driven dragging on mobile devices is essential for the web to effectively compete with native mobile platforms. In such scenarios there's 16ms per frame to get work done, and we generally aim to leave at least 2/3rds of that for developer-written JS. That gives the engine a budget of 6ms per frame. Chrome Android data from the field (see below) indicates a median hit-test time of 0.5ms, which would be a substantial 8% of this 6ms budget. Worse, the 95th percentile is 6ms and the 99th percentile hit-test time is 20ms - so in many scenarios the hit-test time alone would make it impossible to meet this budget.

Therefore we feel it's critical that developers aren't subject to this cost unless they explicitly opt into it by requesting a feature that requires it. It's possible that we could reduce this time dramatically by a complete re-write of our hit-test system, but that would be a major undertaking - probably delaying our ability to ship pointer events by at least a year. Even then we'd be unlikely to see such a huge improvement that we'd be comfortable imposing this penalty on the web when Android and iOS don't have such a design.

[Graph: Chrome Android hit-test times distribution]

RByers commented 8 years ago

Also we discussed at the hackathon that the best way to proceed on this was probably to create a spec branch that includes the changes we want for this (implicit capture for touch and the "capture circumvents hit-testing" in #61). We have some PRs all ready, but it'll be easiest to discuss / tweak if we just keep these changes in a branch in this repo for now (with the same review process as for landing spec changes in master). @NavidZ / @patrickhlauke OK with you?

RByers commented 8 years ago

Split the discussion of changing mouse behavior out to #125 - if we take a long-term approach to that (as opposed to this v2-blocking bug) then maybe it's only a little insane?

patrickhlauke commented 8 years ago

happy to have it done in a branch, yes...would help to be able to see the proposed stuff in context

RByers commented 8 years ago

Created the reduce-hit-tests branch and wrote an initial PR for it: #129. Once this PR lands, I'll update the main README.md to mention the branch and include a link to that version as well.

RByers commented 8 years ago

Landed, and added a note to the README