Closed: RByers closed this issue 8 years ago.
This would only apply while a pointing device is down, correct? In the case of touch, you don't get move events unless there's a down action, so this would cover the existing Touch Events API scenarios. However, when using a mouse, or hovering with a stylus, you need hit tests to determine enters and leaves.
I think it should also apply to mouse (when a mouse button is pressed) and stylus (when a button is pressed/it touches the screen), as otherwise you'd end up with automagic pointer capture/different pointermove behavior based on input device, which would make writing input-agnostic code more challenging...
Yes, it would apply to all pointing devices. I was just clarifying that this is only the case when the device is "down." When the device is "up" we will default to a hit test on every move so that hover works correctly.
Right, sorry I should have clarified - what I really care about is touch dragging (since that's where direct manipulation style UIs are most natural, plus is the case that tends to matter most for performance). I agree it would make the most sense for stylus and mouse to be consistent with touch but if necessary for compatibility I could be convinced to give that up (and just encourage developers to be explicit on whether they want capture or not to avoid confusion).
It should definitely be consistent. I think a more accurate title would be something like "pointerdown should apply an implicit capture."
Unfortunately there's a little more to a solution than an implicit capture (even when captured, hit tests are required to send the pointerover/pointerout events).
So are these potentially two separate issues? An implicit capture, and then a change to pointerover/pointerout when captured (implicitly or explicitly)?
I recall discussions about wanting the over and out events during capture so that you can implement a button that captures, but changes state when you move out of it. This is generally how standard buttons work today.
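The capturing-button pattern described above might look something like the sketch below. The element id and class name are illustrative only, and note that the over/out handlers only work during capture if boundary events keep firing while captured, which is exactly the behavior under discussion.

```javascript
// Sketch of a button that captures the pointer on pointerdown but still
// toggles its pressed state when the pointer moves off it, like native
// buttons do. Names ("btn", "pressed") are illustrative assumptions.
function wireCapturingButton(btn) {
  let pressed = false;

  btn.addEventListener('pointerdown', (e) => {
    btn.setPointerCapture(e.pointerId); // explicit capture
    pressed = true;
    btn.classList.add('pressed');
  });

  // Only meaningful if pointerover/pointerout keep firing during capture,
  // which is the behavior this thread is debating.
  btn.addEventListener('pointerout', () => {
    if (pressed) btn.classList.remove('pressed');
  });
  btn.addEventListener('pointerover', () => {
    if (pressed) btn.classList.add('pressed');
  });

  btn.addEventListener('pointerup', (e) => {
    btn.releasePointerCapture(e.pointerId);
    pressed = false;
    btn.classList.remove('pressed');
  });
}

// Wire up only when a DOM is actually present.
if (typeof document !== 'undefined') {
  const btn = document.getElementById('btn');
  if (btn) wireCapturingButton(btn);
}
```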
So I guess the implicit capture needs to be defined differently than the explicit capture.
Yes, that's the direction I'd like to try (or maybe instead of treating implicit capture differently, setPointerCapture lets you specify whether you want over/out and implicit capture never asks for over/out). But this might be too breaking, and there are other possible solutions (I believe Jacob has some ideas of his own). So we probably want to keep this issue described in terms of the outcome we want to achieve, not how to achieve it (because we won't know that until we've done extensive prototyping and compat testing).
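To make the idea floated above concrete, here is a purely hypothetical model, NOT part of any spec: explicit capture could opt in to boundary (over/out) events, while implicit capture never asks for them. The function name and option name are invented for illustration.

```javascript
// Hypothetical sketch only -- this API does not exist. It models the
// proposal as a tiny state object: 'implicit' capture (from pointerdown)
// never requests over/out events; 'explicit' capture (setPointerCapture)
// gets them only if the caller asked.
function startCapture(kind, opts = {}) {
  return {
    kind, // 'implicit' or 'explicit'
    boundaryEvents: kind === 'explicit' && Boolean(opts.boundaryEvents),
  };
}
```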
If we were to make all pointer events implicitly captured, what would we do about the compatibility mouse events? Today mouse events are (almost) always delivered to the same node as the pointer events. I think it would be problematic to break that. But it would probably be a huge breaking change for the web if mouse events suddenly became implicitly captured.
Perhaps the only pragmatic way out of this mess is to say that whether or not an input device implicitly captures is a property of that device, perhaps exposed explicitly on InputDevice. That doesn't necessarily seem terrible to me; when the difference really matters, developers can always explicitly indicate their intent with the capture APIs. I also don't think it's entirely unreasonable to say that direct-manipulation input devices should be implicitly captured, while indirect ones may not be.
this seems appropriate to me as well. touch could have implicit capture, while pen (because of hovering stylus issues) and mouse would require explicit capture?
touch and mouse, yes. I'm less sure about pen. Technically it's a direct-manipulation input device. In Chromium we're expecting to have two very different types of pen support. On Android pen will continue to be 'touch-like' (dragging scrolls and fires touch events), while on Windows it'll be 'mouse-like' (dragging selects text and fires mouse events). Perhaps the capture behavior should be coupled to which type of compatibility events are generated?
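The rule being discussed could be sketched as a small decision function. The helper name and the penModel parameter are assumptions for illustration, not anything specified.

```javascript
// Illustrative-only sketch: whether a device implicitly captures on
// pointerdown depends on its pointerType, and for pen possibly on which
// compatibility-event model the platform uses ('touch-like' as on
// Android, 'mouse-like' as on Windows).
function implicitlyCaptures(pointerType, penModel = 'mouse-like') {
  switch (pointerType) {
    case 'touch':
      return true;                      // direct manipulation: capture
    case 'mouse':
      return false;                     // indirect: explicit capture only
    case 'pen':
      return penModel === 'touch-like'; // couple capture to compat events
    default:
      return false;
  }
}
```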
maybe philosophical, but is a stylus still a direct manipulation input when it's hovering (which still fires certain events, on supported devices)? because at that stage, your movements in the air are indirectly moving a separate cursor drawn on the screen...
perhaps it should only implicitly capture once it makes contact with the surface, and require explicit capture otherwise? or is that getting too granular/magic?
And what about an opaque tablet? Not all “tablets” are on-screen.
Oh yeah I've always expected "implicit capture" to take effect on contact only. Personally I wish we had separate events for hover-move and drag-move, but it seems fine to me for hover-move to be never-captured pointermove (as spec'd today) and drag-move to be implicitly-captured (but explicitly re/un-capturable) pointermove.
Big picture: there's nothing to capture to in hover scenarios. I think that's orthogonal to the specific type of input device (eg. a hover-capable touchscreen would behave the same way).
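The hover-move vs drag-move split suggested above can be sketched using the standard `buttons` bitmask on pointer events: zero means the device is hovering (nothing to capture to), non-zero means it is in contact or dragging, which is when implicit capture would apply. The helper names are illustrative.

```javascript
// Hover vs drag classification for pointermove events, assuming the
// standard PointerEvent `buttons` property: 0 while hovering, non-zero
// while a button is pressed / the device is in contact.
function isHoverMove(evt) {
  return evt.type === 'pointermove' && evt.buttons === 0;
}
function isDragMove(evt) {
  return evt.type === 'pointermove' && evt.buttons !== 0;
}
```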
@dfleck, right those are definitely indirect manipulation.
I think the only possible justification for the stylus drag behavior (scrolling or text selection) is consistency with platform conventions. On Windows dragging an on-screen stylus doesn't scroll but on Android it does. So we should not try to over-specify this in the PE spec - it's a property of the underlying platform.
So perhaps we can just say that the platform conventions determine whether a pen is 'mouse-like' or 'touch-like', and then define the behaviors of those two models separately.
I'm just spitballing, but I think it's probably possible (i.e. compatible) to have implicit capture for pen (in contact) while still having the varied platform behavior for default actions (Android: scrolling, Windows: selection). Or put another way, it's probably no more incompatible than making touch suddenly have implicit capture (which I'm still worried about, honestly).
@mustaqahmed just had a good suggestion. If we do implicit pointer capture in some cases, we should define it such that when an explicit setPointerCapture occurs during a pointerdown listener (the common capture case), there ends up being only a single gotpointercapture event (not one got for the implicit, then a lost and got for the explicit).
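The sequencing rule can be sketched as follows. The element id is an illustrative assumption, and the counter is just a way to observe the event sequence.

```javascript
// Under the proposed rule, an explicit setPointerCapture inside the
// pointerdown listener coalesces with the implicit capture, so the
// element should observe exactly one gotpointercapture (not got/lost/got).
function makeCaptureCounter() {
  let got = 0;
  let lost = 0;
  return {
    ongot() { got += 1; },
    onlost() { lost += 1; },
    counts() { return { got, lost }; },
  };
}

if (typeof document !== 'undefined') {
  const el = document.getElementById('target'); // illustrative id
  const counter = makeCaptureCounter();
  el.addEventListener('gotpointercapture', counter.ongot);
  el.addEventListener('lostpointercapture', counter.onlost);
  el.addEventListener('pointerdown', (e) => {
    // Common capture case: under the proposed rule, counter.counts()
    // should report got: 1, lost: 0 for this interaction.
    el.setPointerCapture(e.pointerId);
  });
}
```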
I hadn't thought through that scenario, but the suggestion makes sense and is definitely what I would have expected as a user of the API.
Makes perfect sense to me - having multiple events would be confusing and could cause other issues for interop if the implementations diverge for a time while we wait for ship vehicles and ship dates.
For reference this automatic capturing behavior is defined for iOS here, in particular:
Note: A touch object is associated with its hit-test view for its lifetime, even if the touch later moves outside the view.
Android is more complex and less well documented. The best overview I've been able to find is here. Basically, when the first finger goes down, views can register their interest in the touch, including the ability to intercept future events (for that finger or additional fingers). Then movement events are sent only to the intercepting view, or to views which explicitly registered interest (i.e. hit-testing is typically done only on the first down). The most interesting parts of this logic are implemented in ViewGroup.dispatchTouchEvent. ScrollView takes advantage of intercepting so that, in the common case of scrolling, events are dispatched directly to that view.
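The Android-style dispatch described above can be modeled very roughly as follows. This is a deliberately simplified illustration, not Android's actual implementation: a hit test runs only on the first down, and the view found then receives all subsequent moves for that pointer with no further hit testing.

```javascript
// Toy dispatch model: hit-test once on pointerdown, then route all
// later events for that pointerId to the same view.
function makeDispatcher(hitTest) {
  const captured = new Map(); // pointerId -> view

  return function dispatch(evt) {
    if (evt.type === 'pointerdown') {
      const view = hitTest(evt.x, evt.y); // the only hit test
      captured.set(evt.pointerId, view);
      return view;
    }
    if (evt.type === 'pointermove') {
      return captured.get(evt.pointerId); // no hit test on moves
    }
    if (evt.type === 'pointerup') {
      const view = captured.get(evt.pointerId);
      captured.delete(evt.pointerId);
      return view;
    }
    return null;
  };
}
```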
Here's a summary of the argument and data I presented on this at the implementation hackathon:
We (Chrome team) feel apps being able to deliver reliable 60fps JS-driven dragging on mobile devices is essential for the web to effectively compete with native mobile platforms. In such scenarios there's 16ms per frame to get work done, and we generally aim to leave at least 2/3rds of that for developer-written JS. That gives the engine a budget of 6ms per frame. Chrome Android data from the field (see below) indicates a median hit-test time of 0.5ms, which would be a substantial 8% of this 6ms budget. Worse, the 95th percentile is 6ms and the 99th percentile hit-test time is 20ms - so in many scenarios the hit-test time alone would make it impossible to meet this budget.
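Restating the frame-budget arithmetic above with the figures quoted in the comment (16ms frame, roughly 2/3 reserved for app JS, hit-test percentiles from Chrome Android field data):

```javascript
// Budget math from the comment above; all numbers are the ones quoted there.
const frameMs = 16;
const engineBudgetMs = 6; // ~1/3 of the frame left for the engine
const hitTest = { p50: 0.5, p95: 6, p99: 20 }; // ms, Chrome Android field data

const medianShare = hitTest.p50 / engineBudgetMs;     // ~8% of the budget
const p95BlowsBudget = hitTest.p95 >= engineBudgetMs; // hit test alone uses it all
```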
Therefore we feel it's critical that developers aren't subject to this cost unless they explicitly opt-into it by requesting a feature that requires it. It's possible that we could reduce this time dramatically by a complete re-write of our hit-test system, but that would be a major undertaking - probably delaying our ability to ship pointer events by at least a year. Even then we'd be unlikely to see such a huge improvement that we'd be comfortable imposing this penalty on the web when Android and iOS don't have such a design.
Also we discussed at the hackathon that the best way to proceed on this was probably to create a spec branch that includes the changes we want for this (implicit capture for touch and the "capture circumvents hit-testing" in #61). We have some PRs all ready, but it'll be easiest to discuss / tweak if we just keep these changes in a branch in this repo for now (with the same review process as for landing spec changes in master). @NavidZ / @patrickhlauke OK with you?
Split the discussion of changing mouse behavior out to #125 - if we take a long-term approach to that (as opposed to this v2-blocking bug) then maybe it's only a little insane?
happy to have it done in a branch, yes...would help to be able to see the proposed stuff in context
Created the reduce-hit-tests branch and wrote an initial PR for it: #129. Once this PR lands, I'll update the main README.md to mention the branch and contain a link to that version as well.
From https://lists.w3.org/Archives/Public/public-pointer-events/2015JanMar/0041.html: