
Implement standard and configurable gestures across all platforms #4340

Open madmiraal opened 2 years ago

madmiraal commented 2 years ago

Describe the project you are working on

Creating interactive UIs including the Godot Editor.

Describe the problem or limitation you are having in your project

Godot currently has two gesture events: InputEventPanGesture and InputEventMagnifyGesture. InputEventPanGesture is generated by macOS' and Android's "scroll" gestures. InputEventMagnifyGesture is generated by macOS' "pinch" gesture. Within the editor, InputEventPanGesture is used by ItemList, PopupMenu (in 3.x) and RichTextLabel for scrolling, and by ScrollContainer, GraphEdit, TextEdit and Tree for panning. Internally, the code editor uses InputEventMagnifyGesture to change the font size, the canvas item editor uses both to control zooming, the polygon 2D editor uses both to control zooming and panning, and the node 3D editor uses both to zoom, pan and rotate. However, in general, Godot currently uses hard-coded combinations of events instead of gestures to produce outcomes, which leads to inconsistencies (see #1507).
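For reference, consuming these two events in a script currently looks roughly like the following Godot 3.x GDScript sketch of a zoom-and-pan control; the `zoom` and `offset` variables and the scaling factor are illustrative, not the editor's actual code:

```gdscript
extends Control

# Illustrative state; the editor's real implementation differs.
var zoom = 1.0
var offset = Vector2.ZERO

func _gui_input(event):
    if event is InputEventMagnifyGesture:
        # Generated by macOS' pinch gesture.
        zoom *= event.factor
        accept_event()
    elif event is InputEventPanGesture:
        # Generated by macOS' and Android's "scroll" gestures.
        offset += event.delta * 8.0
        accept_event()
```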

InputEventPanGesture and InputEventMagnifyGesture were added in 2017 in godotengine/godot#12573 as "A bit of an experiment" on macOS. Since godotengine/godot#12573 was merged, godotengine/godot#13139 has been open to implement these gestures on other platforms. godotengine/godot#25474 attempted to add these gestures to Android, but it was reverted in godotengine/godot#33536. godotengine/godot#36953 was opened to try again to implement gestures on Android. However, there is also a platform-independent approach: godotengine/godot#39055 and its 3.x version, godotengine/godot#37754.

The problem is that there is a difference between inputs, gestures and outcomes, and these concepts are often used interchangeably, which causes confusion. Inputs are the events generated by the user interacting with devices, e.g. a mouse button being pressed or a screen being touched. Gestures are combinations of events that occur over time and are generally directed at something, e.g. clicking the mouse over an item or tapping an item on the screen. Finally, outcomes are what users expect their gestures to produce, e.g. the item is selected.

While inputs are definitive, gestures are not. There are some de facto gestures, e.g. clicking the mouse button. However, even with this gesture there is no agreement on whether the click happens on the mouse button down or up (see Godot's button's ActionMode). Furthermore, the temporal nature of gestures and the differences between users' capabilities make defining them precisely impossible, e.g. the time and movement between two clicks is what differentiates two single clicks from a double click. To accommodate different users, most operating systems make the maximum time and movement between clicks that define a double click configurable. And these are just the problems with clicking and double-clicking, never mind more complex gestures. Therefore, any gestures we define in Godot should have configurable parameters.
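To illustrate why such parameters need to be configurable, here is a minimal Godot 3.x GDScript sketch of double-click detection where the time and movement thresholds are exported variables rather than hard-coded constants; the default values are arbitrary, not Godot's:

```gdscript
extends Node

# Configurable thresholds; the defaults are arbitrary, not Godot's.
export var double_click_time_msec = 400
export var double_click_max_distance = 8.0

var _last_click_msec = -100000
var _last_click_position = Vector2.ZERO

func _input(event):
    if event is InputEventMouseButton and event.pressed and event.button_index == BUTTON_LEFT:
        var now = OS.get_ticks_msec()
        var close_in_time = now - _last_click_msec <= double_click_time_msec
        var close_in_space = event.position.distance_to(_last_click_position) <= double_click_max_distance
        if close_in_time and close_in_space:
            print("double click")  # The outcome belongs to the game, not the gesture.
        _last_click_msec = now
        _last_click_position = event.position
```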

Combinations of events from different sources are also expected to generate the same gestures, for example, clicking the left mouse button or tapping the screen. The emulate mouse from touch and emulate touch from mouse project settings are used to try to work around this issue, but this leaves issues like godotengine/godot#24589.
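As a concrete illustration, without those emulation settings a script that just wants a "press" has to handle both the mouse and the touch event types itself; a minimal Godot 3.x GDScript sketch:

```gdscript
extends Node

func _input(event):
    # Without "emulate mouse from touch" / "emulate touch from mouse",
    # both event types must be handled to get a single "press" behaviour.
    if event is InputEventMouseButton and event.pressed and event.button_index == BUTTON_LEFT:
        _pressed_at(event.position)
    elif event is InputEventScreenTouch and event.pressed:
        _pressed_at(event.position)

func _pressed_at(at_position):
    print("pressed at ", at_position)  # Illustrative outcome.
```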

Which outcome a gesture should have, or which gesture or gestures should produce a desired outcome, is even more subjective. Even the press-and-move gesture has multiple, conflicting interpretations in the engine: box select or select and move (see #776). Therefore, regardless of which gestures are defined in Godot, each game should be responsible for the outcomes and for which gestures are used to generate them. Within the engine, users should be able to decide which gesture, if any, produces an outcome.

The question is: What gestures should be defined in Godot?

Describe the feature / enhancement and how it helps to overcome the problem or limitation

Godot should implement a minimum set of standard gestures in the Input system and remove existing hard-coded gestures used by the platforms and defined in the engine code.

The solution should allow users to:

All platforms send their input events to the input system. The input system would combine these events to create the standard, expected gestures. Different combinations of input events from different devices could create the same gesture. Gestures should be named to reflect the action, not the expected outcome; it's worth pointing out that "Scroll", "Pan", "Magnify" and "Twist" are all outcomes, not actions. To enable users to configure these gestures, the gesture parameters would be available in the Project Settings.
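To make this concrete, here is a hypothetical GDScript sketch of what using such a gesture could look like from a script; the setting path `input_devices/gestures/tap_max_time_msec` and the `InputEventTapGesture` class are invented for illustration and do not exist in Godot today:

```gdscript
extends Node

func _ready():
    # Hypothetical: gesture parameters exposed in the Project Settings.
    # This setting path does not exist today; it is part of the proposal.
    var tap_max_time = ProjectSettings.get_setting("input_devices/gestures/tap_max_time_msec")
    print("Tap gestures would time out after ", tap_max_time, " msec")

func _input(event):
    # Hypothetical: the input system has already combined the raw mouse or
    # touch events into one device-independent gesture event. Once such a
    # class existed, this would simply be `event is InputEventTapGesture`.
    if event.get_class() == "InputEventTapGesture":
        print("tap gesture received")  # The game decides the outcome.
```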

The following is a suggested minimum set of expected gestures:

For brevity, I won't detail the gestures' (configurable) parameters or their properties. These details should be reasonably self-explanatory, but they should also be agreed upon.

Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

The best way to show how this will work will be through a PR. In the meantime, although I disagree with the details, and it doesn't include a lot of what is described here, godotengine/godot#39055 provides a good idea of the approach.

If this enhancement will not be used often, can it be worked around with a few lines of script?

Gestures are combinations of events, so users can combine events in scripts to create their own gestures. Combining events to create hard-coded gestures is the general approach currently used within the engine and OS code. However, as described above, this leads to inconsistencies, and when it's hard-coded it's inflexible.
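For example, a basic long-press gesture can already be approximated in a few lines of Godot 3.x GDScript by combining touch events with a time check; the 500 ms threshold is an arbitrary choice, and the gesture is only detected on release for simplicity:

```gdscript
extends Node

const LONG_PRESS_MSEC = 500  # Arbitrary threshold for illustration.

var _press_start_msec = -1
var _press_position = Vector2.ZERO

func _input(event):
    if event is InputEventScreenTouch:
        if event.pressed:
            _press_start_msec = OS.get_ticks_msec()
            _press_position = event.position
        else:
            if _press_start_msec >= 0 and OS.get_ticks_msec() - _press_start_msec >= LONG_PRESS_MSEC:
                print("long press at ", _press_position)
            _press_start_msec = -1
```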

Is there a reason why this should be core and not an add-on in the asset library?

Users have already implemented their own gestures as add-ons, for example the Godot Touch Input Manager. However, there are some standard gestures that users and the engine expect to be defined. Furthermore, there are gestures that are used within the editor, or that users expect to be available within the editor, which need to be available internally.

grayhaze commented 2 years ago

I agree with everything proposed here, except that maybe Pinch could just be Multi-touch, as there's no real need to limit it to two touches. Using the correct math, it should be possible to implement pinching and twisting using any number of touches, and the gesture could provide the number of touches in case different actions were wanted for different numbers.
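A rough sketch of that math in Godot 3.x GDScript, tracking any number of touches and deriving a pinch factor from their average distance to the centroid (the names and the print output are purely illustrative; a twist angle could be derived the same way from the average change in angle to the centroid):

```gdscript
extends Node

var _touches = {}  # Touch index -> last known position.

func _input(event):
    if event is InputEventScreenTouch:
        if event.pressed:
            _touches[event.index] = event.position
        else:
            _touches.erase(event.index)
    elif event is InputEventScreenDrag:
        if _touches.size() >= 2 and _touches.has(event.index):
            var old_spread = _average_spread(_touches.values())
            _touches[event.index] = event.position
            var new_spread = _average_spread(_touches.values())
            if old_spread > 0.0:
                print("pinch factor ", new_spread / old_spread, " from ", _touches.size(), " touches")
        else:
            _touches[event.index] = event.position

func _centroid(positions):
    var sum = Vector2.ZERO
    for p in positions:
        sum += p
    return sum / positions.size()

func _average_spread(positions):
    # Average distance of the touches from their centroid.
    var centre = _centroid(positions)
    var total = 0.0
    for p in positions:
        total += centre.distance_to(p)
    return total / positions.size()
```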

novalis commented 2 years ago

Gestures are a UI language, and the meaning of gestures is often platform-specific. A mouse click is not a tap, because a long-click is (on Windows and Linux) meaningless, while a long-tap on a mobile device often opens a context menu.

Even among mobile platforms, gestures are not standard. It's true that Android and iOS both have pinch-to-zoom. But in addition to the gestures that Android has, iOS also has three-finger-swipe-left for undo (and shake-to-undo). That's the thing that brought me to this proposal: I want to add undo to my game and have it be idiomatic. That's impossible on Android: there is no undo gesture. But it's possible on Windows and Linux (Ctrl+Z) and on iOS (the gesture). It's also possible in HTML5, using horrible hackery.

Godot already has some of what @madmiraal would define as outcome-based events (ui_select, or NOTIFICATION_WM_GO_BACK_REQUEST). I think of these as "intent"-based rather than "outcome"-based. That is, the user intends to select the thing (or undo, or go back, or whatever). That may or may not work: you can't undo before you've done something, for instance.
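For reference, handling those existing intent-based events in Godot 3.x GDScript looks roughly like this; the print statements stand in for whatever the game decides the outcome should be:

```gdscript
extends Node

func _input(event):
    if event.is_action_pressed("ui_select"):
        print("select intent")  # The game decides what "select" means here.

func _notification(what):
    if what == MainLoop.NOTIFICATION_WM_GO_BACK_REQUEST:
        print("go back intent")  # e.g. Android's Back button.
```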

So I think we should add a few more of these for the standard gestures (using the operating system's gesture recognizer, where available), so that we can have idiomatic controls, instead of trying to standardize across platforms.