aavogt opened this issue 2 years ago
Thanks for filing an issue. As you've noted, `blank-canvas` currently does not support `touchmove` (or other `touch*`) events. If I'm reading this correctly, this event would only fire if a user moves a finger across a touch screen, correct? If so, this could be somewhat difficult for me to debug locally, so I might need to rely on your feedback.
A couple of questions:

1. Do you know what the minimum version of jQuery that supports `touch*` events is? I ask since `blank-canvas` bundles jQuery v1.7.2, and I can't find any mention of `touchmove` or the like in the source code for this jQuery version. Did you need to upgrade the version to make this work?
2. What sort of device are you using to test `touchmove` events?
3. You mentioned changing `blank-canvas/static/index.html`, but it's not clear to me where exactly you applied that change. If you have a diff, that would be helpful.

Thanks!
I set `middleware = []`.

The `touchend` event has no coordinates in the `e.originalEvent.touches[0].pageX` expression. `mouseup` gives coordinates for the previous `mousedown`, so `touchend` should give the corresponding `touchstart`'s coordinates.

While the hardware supports multiple touches, two fingers get interpreted as zooming. I couldn't figure out replacing `.bind` with the newer `.addEventListener` according to https://stackoverflow.com/a/54738343. If zooming can be disabled, `e.originalEvent.touches[1].pageX` etc. would be useful to have in Haskell too.
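To make the multi-touch idea above concrete, here is a hedged sketch of a helper that collects every active touch point from a jQuery-wrapped event; the name `allTouchXY` is hypothetical, and the event shape simply mirrors jQuery's `originalEvent` wrapper as used in the expressions above:

```javascript
// Hypothetical helper: collect [pageX, pageY] pairs for every active
// touch point on a jQuery-wrapped event.  Returns [] when the event
// carries no touches (e.g. an ordinary mouse event).
function allTouchXY(e) {
  var touches = e.originalEvent && e.originalEvent.touches;
  if (!touches) { return []; }
  var xys = [];
  for (var i = 0; i < touches.length; i++) {
    xys.push([touches[i].pageX, touches[i].pageY]);
  }
  return xys;
}
```

With pinch-zoom disabled (for example via the `touch-action: none` CSS property on the canvas element), `allTouchXY(e)[1]` would then give the second finger's position rather than triggering a zoom.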
> I did not upgrade jQuery

Good to know, thanks. I wasn't quite sure what that line of code in jQuery was doing in the first place, so it's nice to know that it's not essential for `touch*` events to work.

> I was using a laptop with a touchscreen

Ah, OK. Unfortunately, neither of my devices has a touchscreen. (I have a laptop with a touchpad, but that's not quite the same thing.) I don't suppose there's any other way to mock `touch*` events?
> This commit: https://github.com/aavogt/blank-canvas/commit/72e0f439738931d8ea5821349e6f444345148e77 is the change I suggested above.

Alright. Do you have a small example program that you're testing this out on, by chance? I ask since having a reasonably sized, standalone test program would make it far easier to debug the issues you're experiencing. (Of course, I'd have to figure out if I can even get `touch*` events to fire at all, but that's a separate matter...)
This program uses my changed blank-canvas and just prints out events on stdout: https://github.com/aavogt/blank-canvas-touch-example
OK, I was finally able to figure out a way to reproduce this locally by using a phone + USB debugging. And indeed, if I run your example with `blank-canvas-0.7` from Hackage, I can see the `touch*` events being handled, but without corresponding `ePageXY` values. Here is my attempt at filling in the coordinates for these events:
```diff
+ if (e.originalEvent.touches != undefined
+     && e.originalEvent.touches.length > 0
+     && e.originalEvent.touches[0].pageX != undefined
+     && e.originalEvent.touches[0].pageY != undefined) {
+   o.pageXY = [e.originalEvent.touches[0].pageX, e.originalEvent.touches[0].pageY];
+ }
+ if (e.originalEvent.changedTouches != undefined
+     && e.originalEvent.changedTouches.length > 0
+     && e.originalEvent.changedTouches[0].pageX != undefined
+     && e.originalEvent.changedTouches[0].pageY != undefined) {
+   o.pageXY = [e.originalEvent.changedTouches[0].pageX, e.originalEvent.changedTouches[0].pageY];
+ }
```
This is enough to get all `touch*` events to have coordinates, at the very least. I'm not 100% convinced this will always give the most accurate results, however. In particular, the docs for `touches` and `changedTouches` mention that they both interact with the `touchstart` event, so it's possible that the code above will overwrite `o.pageXY` twice on `touchstart` events. Perhaps there needs to be an intervening `else` between the two `if` statements? I'm not entirely sure at the moment.

Let me know if the code above works for your use case. If so, we can explore how to refine it (and eventually upstream it into `blank-canvas`).
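One way to sidestep the double-write concern is to compute the coordinates with a single function that checks one touch list and falls back to the other. A sketch, where the function name and the preference for `changedTouches` are assumptions (the preference mirrors the overwrite order of the two `if` statements, in which `changedTouches` wins when both lists are populated); it takes the underlying DOM event, i.e. `e.originalEvent` in the jQuery handler:

```javascript
// Sketch: same checks as the two if statements, but with an explicit
// preference order so o.pageXY is computed exactly once.  Returns null
// when neither list has a usable point (e.g. a plain mouse event).
function touchPageXY(ev) {
  var lists = [ev.changedTouches, ev.touches];  // preferred list first
  for (var i = 0; i < lists.length; i++) {
    var ts = lists[i];
    if (ts && ts.length > 0 &&
        ts[0].pageX !== undefined && ts[0].pageY !== undefined) {
      return [ts[0].pageX, ts[0].pageY];
    }
  }
  return null;
}
```

The handler could then do `var xy = touchPageXY(e.originalEvent); if (xy) { o.pageXY = xy; }`, making the precedence explicit instead of relying on overwriting.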
Your code works, but I only need the `changedTouches` part: if somebody else needed `touches`, they could look at older `changedTouches`.
Here I support multiple touches: https://github.com/aavogt/blank-canvas/commit/d3f7fe4ef3f8989eb24c682b0b3d5deef652e168 . We could break existing code in the following way: that module's `Event` could be renamed `EventS`. Then, using `splitEventS` defined below, simultaneous touches will be seen as successive by blank-canvas.
```haskell
splitEventS :: EventS -> [Event]
splitEventS (EventS metakey pagexy ...) = getZipList $
  Event <$> pure metakey <*> ZipList (map Just pagexy) <*> ...
```
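The zip-based splitting can also be illustrated on the browser side. This is only an analogue of the `splitEventS` idea, with made-up field names, not the library's actual wire format:

```javascript
// Analogue of splitEventS: turn one event carrying lists of per-touch
// values into a list of single-touch events by zipping the lists.
// Field names (metaKey, pageXY, touchID) are illustrative only.
function splitEvent(ev) {
  var out = [];
  for (var i = 0; i < ev.pageXY.length; i++) {
    out.push({
      metaKey: ev.metaKey,    // shared scalar, repeated per touch
      pageXY:  ev.pageXY[i],  // per-touch coordinate
      touchID: ev.touchID[i]  // per-touch identifier
    });
  }
  return out;
}
```

As with `ZipList`, the result length is driven by the per-touch lists, while scalar fields are repeated for each touch.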
It would also be nice to access the other properties of `Touch`, such as `force`, `radiusX`, `radiusY`, and `rotationAngle`.

Should I prepare a pull request with the following `Event`, or something like the `Event` in here, or something else?
```haskell
data Event = Event { eMetaKey       :: Bool,
                     ePageXY        :: Maybe (Double, Double),
                     eTouchID       :: Maybe Int,
                     eRadiusXY      :: Maybe (Double, Double),
                     eRotationAngle :: Maybe Double,
                     eForce         :: Maybe Double,
                     eType          :: EventName,
                     eWhich         :: Maybe Int
                   }
```
Great, I'm glad to hear that that works for you. I'd welcome a pull request to add this functionality, keeping in mind the caveats below.
One concern I have with adding support for touch-based events wholesale is that they don't appear to be portable across all browsers. In particular, this table suggests that they don't work on Safari, which is a pretty widely used browser. I don't think this is a dealbreaker, but I do think we should be careful to guard any JavaScript code that manipulates touches such that it doesn't break browsers that don't support them. We should also be sure to document this limitation in the Haddocks.
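As an illustration of that kind of guard, here is a minimal feature-detection sketch, written against a window-like argument so it can be exercised outside a browser; the `'ontouchstart' in window` check is a common idiom, not something blank-canvas currently does:

```javascript
// Only register touch* handlers when the environment exposes touch
// events; browsers without support simply skip the registration
// instead of tripping over undefined touch properties.
function registerTouchHandlers(win, register) {
  if ('ontouchstart' in win) {
    ['touchstart', 'touchmove', 'touchend'].forEach(register);
    return true;
  }
  return false;  // no touch support: leave mouse handlers as-is
}
```

In `static/index.html` this would wrap the `$(...).bind("touchstart touchmove touchend", ...)` calls, so non-touch browsers never execute the touch-specific code path.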
As far as what the API should look like, I'm wondering if we should put all of the touch-specific properties into their own data type. Something like this, perhaps:
```haskell
data Touch = Touch
  { touchPageXYs :: [(Double, Double)]
  , touchIDs     :: [Int]
  , touchForce   :: Double
  , ...
  }

data Event = Event
  { eTouch :: Maybe Touch
  , ... -- all other fields of Event unchanged
  }
```
This is still technically a breaking change, since we're adding another field to the `Event` data constructor. On the other hand, as long as people use the field selectors in `Event`, it shouldn't be too disruptive a change. This way, code that doesn't care about touches won't have to update its use of `ePageXY`. There is a bit of duplication in that both `Event` and `Touch` will have their own XY coordinates, but I think this is OK. After all, they're subtly different kinds of coordinates, so it seems appropriate to have different fields for them.
https://github.com/ku-fpg/blank-canvas/blob/39915c17561106ce06e1e3dcef85cc2e956626e6/static/index.html#L168
Following https://stackoverflow.com/questions/4780837 , I can get `touchstart` and `touchmove` events with coordinates, and `touchend` without coordinates, provided I add the following to `Trigger()`.

I am not sure why I lose the `mousemove` events if I leave out the `(e.type == "touchmove" || e.type == "touchstart")` condition.
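The comment doesn't include the full snippet added to `Trigger()`, but following the linked Stack Overflow answer it is presumably a `preventDefault` call guarded by that condition. A guess at its shape (function name and structure are assumptions):

```javascript
// Guess at the Trigger() addition: suppress the browser's default touch
// handling (scrolling/zooming and, on some browsers, synthesized mouse
// events) for touchstart/touchmove only, leaving other event types
// untouched.  Returns whether the default was suppressed.
function maybePreventDefault(e) {
  if (e.type == "touchmove" || e.type == "touchstart") {
    e.preventDefault();
    return true;
  }
  return false;
}
```

Calling `preventDefault()` unconditionally would also suppress the default handling of mouse events, which may be why `mousemove` disappears without the `e.type` guard.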