filip-sakel opened this issue 1 year ago
Also attaching the Swift interface definition of Gesture:
@available(iOS 13.0, macOS 10.15, tvOS 13.0, watchOS 6.0, *)
public protocol Gesture {
  associatedtype Value

  static func _makeGesture(gesture: SwiftUI._GraphValue<Self>, inputs: SwiftUI._GestureInputs) -> SwiftUI._GestureOutputs<Self.Value>

  associatedtype Body : SwiftUI.Gesture
  var body: Self.Body { get }
}
Can a type conforming to Gesture use any DynamicProperty, such as @Environment?
Yes, it appears so. Though state is reset when the gesture ends (you can only change it while the gesture is updating). I think that's why _makeGesture takes a graph value.
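For illustration, a minimal sketch of that kind of gesture (ScaledDragGesture and the use of @Environment(\.displayScale) are just examples; this assumes DynamicProperty wrappers resolve inside a Gesture conformance the same way they do inside a View body):

```swift
import SwiftUI

// Hypothetical custom gesture for illustration only. Assumes @Environment
// (a DynamicProperty) is resolved inside a Gesture conformance the same way
// it is inside a View body.
struct ScaledDragGesture: Gesture {
  @Environment(\.displayScale) private var displayScale

  var body: some Gesture {
    DragGesture()
      // `map` only transforms the gesture's value, here scaling the
      // translation by the display scale read from the environment.
      .map { value in
        CGSize(
          width: value.translation.width * displayScale,
          height: value.translation.height * displayScale
        )
      }
  }
}
```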
Can't wait for this feature to be part of Tokamak :P
Quick question: would each renderer deliver the necessary information for gestures on a given target (such as GTK, Web, and so on), with gestures built on top of that in the TokamakCore layer?
It will probably depend on the specific gestures. In some early experimentation, I found that Safari exposed rotation/scale gestures but not the multi-touch events needed to implement these gestures by hand. Thus, at least some renderers will require implementing gestures on their own. Even when not strictly required, though, renderers should expose system gestures when possible, following Tokamak's philosophy of relying mostly on native functionality. If two targets, and by extension their renderers, do not support some gestures, we could implement the shared business logic in TokamakCore to avoid code duplication between renderers.
The initial PR adding support can be found here: https://github.com/TokamakUI/Tokamak/pull/538
Based on the work I did there, I would like to start brainstorming and ask for some input/help on a few topics.
First, let's start with AnyGesture<Value>: we need this type eraser for Gestures. I've attempted it many times, but each time I've been blocked by the Gesture.Body associated type. Is there a smart way of doing it? Can anyone help?
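For reference, a rough sketch of the class-box erasure technique, using a hypothetical simplified protocol (SimpleGesture, makeRecognizer, and the box types are illustrative names, not Tokamak's actual API). The idea is to erase at the level the renderer actually consumes, so Body never needs to be named:

```swift
// Hypothetical simplified protocol standing in for Gesture; the only thing
// the eraser needs to forward is "install a recognizer and report values".
protocol SimpleGesture {
  associatedtype Value
  func makeRecognizer(onChange: @escaping (Value) -> Void)
}

// Abstract base class: erases the concrete gesture type but keeps Value.
private class AnySimpleGestureBase<Value> {
  func makeRecognizer(onChange: @escaping (Value) -> Void) {
    fatalError("Subclasses must override makeRecognizer")
  }
}

// Concrete box that forwards to the wrapped gesture.
private final class SimpleGestureBox<G: SimpleGesture>: AnySimpleGestureBase<G.Value> {
  let base: G
  init(_ base: G) { self.base = base }

  override func makeRecognizer(onChange: @escaping (G.Value) -> Void) {
    base.makeRecognizer(onChange: onChange)
  }
}

// Public eraser: only Value survives in the type signature, so Body never
// appears. If the eraser itself must conform to the gesture protocol, its
// Body could be Never, as SwiftUI does for primitive gestures.
struct AnySimpleGesture<Value> {
  private let box: AnySimpleGestureBase<Value>

  init<G: SimpleGesture>(_ gesture: G) where G.Value == Value {
    box = SimpleGestureBox(gesture)
  }

  func makeRecognizer(onChange: @escaping (Value) -> Void) {
    box.makeRecognizer(onChange: onChange)
  }
}
```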
We also need some way of blocking gestures when a subview has already captured one. HTML doesn't provide such functionality out of the box; listeners are delivered to every receiver. Following up on that, we need to handle GestureMask to enable and disable gestures accordingly. Additionally, if a view is disabled, its gestures should be disabled too, which means communication needs to happen both ways, up and down the view tree.
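For the downward half of that communication, a rough sketch of one option (GestureMaskKey and the gestureMask helper are hypothetical names, not existing Tokamak API) would be an environment key that gesture-handling nodes consult before installing their listeners, similar to how .disabled(_:) flows through \.isEnabled:

```swift
import SwiftUI

// Hypothetical environment plumbing for the "downward" direction: a parent
// can restrict which gestures its subtree may handle, and each
// gesture-handling node checks the mask before installing listeners.
private struct GestureMaskKey: EnvironmentKey {
  static let defaultValue: GestureMask = .all
}

extension EnvironmentValues {
  var gestureMask: GestureMask {
    get { self[GestureMaskKey.self] }
    set { self[GestureMaskKey.self] = newValue }
  }
}

extension View {
  /// Illustrative helper: subviews of this view would only handle gestures
  /// permitted by `mask`.
  func gestureMask(_ mask: GestureMask) -> some View {
    environment(\.gestureMask, mask)
  }
}
```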
Finally, the last thing I need assistance with is Transactions. They are not working for GestureState, so animation isn't happening like it does in SwiftUI. The TODO can be found here; follow up with the code from there.
// The snippet assumes the surrounding view declares:
//   @GestureState private var isDetectingLongPress = false
//   @State private var completedLongPress = false
.gesture(LongPressGesture(minimumDuration: 2)
  .updating($isDetectingLongPress) { currentState, gestureState, transaction in
    gestureState = currentState
    transaction.animation = Animation.easeIn(duration: 2.0)
  }
  .onEnded { finished in
    self.completedLongPress = finished
  })
This code animates with SwiftUI, but it doesn't with Tokamak.
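For what it's worth, a conceptual sketch of what the transaction is supposed to achieve (GestureStorage and commit are hypothetical names, not Tokamak's internals): the state write has to happen while the transaction configured by the .updating callback is current for its animation to apply, so a write outside that transaction is one plausible explanation for the missing animation.

```swift
import SwiftUI

// Conceptual sketch only. The .updating callback configures a Transaction;
// for the animation to show up, the eventual write of the gesture's state
// needs to happen while that transaction is current.
final class GestureStorage: ObservableObject {
  @Published var isDetectingLongPress = false
}

func commit(_ newValue: Bool, to storage: GestureStorage, with transaction: Transaction) {
  // withTransaction makes `transaction` (and its animation) current for the
  // duration of the body, so views observing the storage animate the change.
  withTransaction(transaction) {
    storage.isDetectingLongPress = newValue
  }
}
```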
This issue expands on the tap-gesture feature request, setting a timeline for complete gesture-support parity with SwiftUI.
Level 1: Tap Gesture & Related Protocols
Button practically uses a tap gesture behind the scenes, so there should be no surprises in explicitly introducing onTapGesture. Starting from this simple gesture, we could build the infrastructure for the rest of the gesture API (a usage sketch of this surface follows the list). Namely:

- Gesture is a protocol that enables rudimentary composition and is the basis for reacting to gesture interactions.
- AnyGesture is Gesture's simple type eraser.
- GestureState is key to reactivity. It is updated through the updating(_:body:) function, which returns a GestureStateGesture gesture. I'm not sure if this could be implemented through the other mapping methods (onChanged and onEnded), because IIRC these mapping methods fire at different points in the view lifecycle or gesture interaction. Finally, I imagine map would be easy to implement, as it only changes the gesture's value.
- gesture(_:). To reduce the initial implementation's complexity, we could omit the including mask: GestureMask parameter.
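A small usage sketch exercising only this Level 1 surface (the TapCounter view and its property names are just an example): gesture composition with map, GestureState driven by updating(_:body:), and attachment via gesture(_:) without a mask.

```swift
import SwiftUI

// Example exercising only the Level 1 API surface.
struct TapCounter: View {
  @GestureState private var isPressed = false  // resets when the gesture ends
  @State private var taps = 0

  var body: some View {
    Text("Taps: \(taps)")
      .gesture(
        TapGesture(count: 1)
          // map only transforms the gesture's value.
          .map { _ in 1 }
          // updating(_:body:) wraps the gesture in a GestureStateGesture.
          .updating($isPressed) { _, state, _ in state = true }
          .onEnded { increment in taps += increment }
      )
  }
}
```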
Level 2: Many Gesture Types
High-level gesture recognition (like pan, rotation, and pinch gestures) is free for native apps, but would likely need a custom implementation on the web. The pointer-events API seems like a good place to start. Besides this guide for implementing the pinch gesture, I didn't find a thorough guide for gesture detection in my brief research. At this point we may want to specify which gestures a given element can accept through CSS, though not every gesture type is available in all major browsers. The following gesture types would need to be recognized:

- SpatialTapGesture would be a refined implementation of TapGesture. The gesture would provide the taps' locations as its value by employing the pointer-events API. Namely, it would expect a pointer down and a subsequent pointer up event to fire.
- DragGesture requires a pointer down, pointer moves (potentially with a minimum required distance), and a pointer up to end the gesture.
- MagnificationGesture and RotationGesture are multi-touch gestures. They require one or more fingers on touch devices; research is required on how they'd be detected with trackpad input; I also don't know how SwiftUI handles this for devices with just a mouse (maybe through scrolling and a modifier key?). I think both of the aforementioned gestures could be implemented by constructing a vector between two fingers: magnification would measure whether the vector's magnitude grew (at least by the minimumScaleDelta), and rotation would measure whether the vector's principal argument changed (at least by the minimumAngleDelta); see the vector-math sketch after this list. I don't know how more than two fingers would affect the results.
- LongPressGesture starts with a pointer down event. It waits for minimumDuration before firing, and eagerly terminates if a pointer move exceeds the maximumDistance. This gesture can also be attached through one onLongPressGesture method; the other methods with the same name can be safely ignored because they're only available on Apple TV.
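As a starting point for the magnification/rotation item, the two-finger vector math can be sketched independently of the event plumbing (the names below are illustrative; a renderer would feed in pointer positions from whatever pointer-events tracking it uses):

```swift
import Foundation

// Pure geometry for the "vector between two fingers" idea: compare the
// vector at gesture start with the current one to derive scale and rotation.
struct TwoFingerSample {
  var firstX, firstY, secondX, secondY: Double

  var dx: Double { secondX - firstX }
  var dy: Double { secondY - firstY }
  /// Distance between the two fingers.
  var magnitude: Double { (dx * dx + dy * dy).squareRoot() }
  /// Principal argument of the vector, in radians.
  var angle: Double { atan2(dy, dx) }
}

/// Magnification is the ratio of the current finger distance to the starting
/// distance; a recognizer would only report it once it exceeds the
/// configured minimumScaleDelta.
func magnification(from start: TwoFingerSample, to current: TwoFingerSample) -> Double {
  current.magnitude / start.magnitude
}

/// Rotation is the change of the vector's angle, in radians (normalization
/// to (-π, π] omitted); a recognizer would only report it once it exceeds
/// the configured minimumAngleDelta.
func rotation(from start: TwoFingerSample, to current: TwoFingerSample) -> Double {
  current.angle - start.angle
}
```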
Level 3: High-Level Control
The following modifiers are used for advanced gesture interactions. After Level 2, which is far into the future, we could start tinkering with how different gestures combine on our custom gesture-detection engine.

- gesture(_:including:), where a mask controls precedence between the view's and its subviews' gestures. Perhaps the mask could be passed down through the environment or directly through the Fiber reconciler; masking could then change the priority of the gestures.
- highPriorityGesture(_:including:) could probably be implemented by also adjusting internal gesture priorities.
- defersSystemGestures(on:) is probably difficult to implement on the web; more research is required.
- ExclusiveGesture is a gesture where each of the provided sub-gestures fires independently. The gesture's value is either the first or the second sub-gesture's value. This would likely be implemented by polling both sub-gestures: the first sub-gesture to fire would propagate its value to the exclusive gesture, and polling for the second sub-gesture would stop. After the first sub-gesture ends, the state would be reset (see the phase sketch after this list).
- SequenceGesture waits for the first sub-gesture to fire before polling for the second one. Using the same principle as ExclusiveGesture, the first sub-gesture would be allowed to complete, and then the second one, for the sequence gesture to fire. The gesture's value is either just the first sub-gesture's value, or both sub-gestures' values.
- SimultaneousGesture/simultaneousGesture(_:including:) allows its sub-gestures to fire concurrently. Both gestures are continuously polled, making the simultaneous gesture's value equivalent to (First.Value?, Second.Value?).
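A rough sketch of the polling logic described for these combinators (hypothetical types, not an actual implementation): each combined gesture could be driven by a small phase enum telling the recognizer which sub-gesture to poll next.

```swift
// Illustrative phase machines for the combinators described above.

// ExclusiveGesture: poll both until one fires, then ignore the other until
// the active sub-gesture ends and the phase resets to .idle.
enum ExclusivePhase<First, Second> {
  case idle
  case firstActive(First)
  case secondActive(Second)
}

// SequenceGesture: poll only the first sub-gesture; once it completes, start
// polling the second. The value exposes either just the first result or both.
enum SequencePhase<First, Second> {
  case awaitingFirst
  case firstCompleted(First)
  case bothActive(First, Second?)
}

// SimultaneousGesture: both sub-gestures are always polled, so the combined
// value is simply the pair of optional sub-values.
struct SimultaneousValue<First, Second> {
  var first: First?
  var second: Second?
}
```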