A lightweight touch gesture recognition library written in C as part of Georgia Tech's Spring-Fall 2022 Junior Design program.
Install `build-essential` to have access to `make` and `gcc`:

```shell
sudo apt update && sudo apt install build-essential
```

Install CMake:

```shell
sudo apt-get -y install cmake
```
ℹ️ Windows development is possible with tools like Chocolatey.
Clone the repository into your project:

```shell
git clone https://github.com/Russell-Newton/MultiTouch.git <Destination>
```
Include the source in your project in one of two ways:

- In the `CMakeLists.txt` of your project, add the `gesturelibrary` folder of the repository as a subdirectory using `add_subdirectory`, and delete the section of `gesturelibrary/CMakeLists.txt` inside the `SKIP_TESTS` if statement (a minimal sketch follows this list).
- Or, include the `gesturelibrary/include` folder and add the files in the `gesturelibrary/src` folder to your executable.
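For the `add_subdirectory` route, a minimal sketch of a consuming project's `CMakeLists.txt` might look like the following, assuming the repository was cloned into a `MultiTouch` folder inside your project. The `GestureLibrary` target name is inferred from the built archive `libGestureLibrary.a`, and `MyApp` is a placeholder:

```cmake
cmake_minimum_required(VERSION 3.10)
project(MyApp C)

# Pull in the library's build; the path assumes the repo was cloned into your project.
add_subdirectory(MultiTouch/gesturelibrary)

add_executable(MyApp main.c)

# Link against the gesture library target (name inferred from libGestureLibrary.a).
target_link_libraries(MyApp GestureLibrary)
```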
You can also build the library as a static archive and link against it. Clone the repo:

```shell
git clone https://github.com/Russell-Newton/MultiTouch.git
```

Build the CMake project:

```shell
cd MultiTouch
cmake -S gesturelibrary -B build -D SKIP_TESTS=true
```
Compile the library with `make`:

```shell
cd build
make
```
Include the library when compiling your program (see the example after this list):

- Add `-I...pathto/MultiTouch/gesturelibrary/include` to your compile command.
- Add `...pathto/MultiTouch/build/libGestureLibrary.a` to your compile targets.
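For instance, a manual `gcc` invocation might look like the following, where the paths are placeholders for wherever you cloned and built the repository:

```shell
gcc main.c \
    -I path/to/MultiTouch/gesturelibrary/include \
    path/to/MultiTouch/build/libGestureLibrary.a \
    -o myapp
```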
If build errors occur, make sure you have `make` and `cmake` installed and added to your path, and ensure that you have a C compiler like `gcc`. In Unix, `make` and `gcc` can be installed by running:

```shell
sudo apt update && sudo apt install build-essential
```

Other common build issues may be related to where the CMake build directory is located. Make sure you run `make` from within the directory created by running `cmake`.
To use the library (a sketch of these steps follows this list):

1. Include `<gesturelib.h>` and the header files for any gestures you are interested in, for example `<tap.h>` and `<drag.h>`.
2. Set the parameters in `<gestureparams.h>` to your desired values. The variables can be set at runtime, but the gesture library must be reinitialized after any modification.
3. Call `init_gesturelib()`.
4. Transform your touch data into `touch_event_t`s: fill a `touch_event_t` with your adapter and send it to `process_touch_event()`. If your device does not provide touch groups, set `event.group = TOUCH_GROUP_UNDEFINED`.
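A minimal end-to-end sketch of these steps is below. The `touch_event_t` field names other than `group`, the exact signature of `process_touch_event()`, and the `raw_x`/`raw_y` input source are illustrative assumptions, not the library's confirmed API:

```c
#include <gesturelib.h>
#include <tap.h>

// Adapter: translate one platform touch sample into a touch_event_t.
void handle_raw_touch(float raw_x, float raw_y) {
    touch_event_t event;
    event.x = raw_x;                      // assumed field names
    event.y = raw_y;
    event.group = TOUCH_GROUP_UNDEFINED;  // let the library assign the finger group
    process_touch_event(&event);          // assumed pointer-taking signature
}

int main(void) {
    // Parameters from <gestureparams.h> may be adjusted at runtime, but
    // init_gesturelib() must be called again after any modification.
    init_gesturelib();
    // ... feed events from your input loop via handle_raw_touch() ...
    return 0;
}
```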
To synchronously access recognized gestures, call the `get_[gesture]` function of the gesture you are interested in, for example `get_tap` and `get_drag`. These return arrays of `tap_t` and `drag_t`, respectively. Note that if another thread calls the `process_touch_event()` function, the data in the array may change as you are reading it.
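A hedged polling sketch is below; the return type of `get_tap()` and the indexing of its array are assumptions (the preprocessing notes below suggest single-finger recognizers keep one entry per touch group):

```c
#include <gesturelib.h>
#include <tap.h>

void poll_taps(void) {
    // Assumption: get_tap() exposes the library's internal tap_t array.
    const tap_t* taps = get_tap();
    // Assumption: one entry per touch group; group 0 is the first finger down.
    tap_t latest = taps[0];  // copy out promptly: another thread calling
                             // process_touch_event() may mutate this data
    (void)latest;
}
```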
To asynchronously access recognized gestures, register a listener. Recognizers themselves can be managed with `add_recognizer()`, `remove_recognizer()`, `enable_recognizer()`, and `disable_recognizer()`. A listener receives a `const [gesture_t]*` and can read the data from the updated gesture. The gesture data will not change until the next invocation of `process_touch_event`.

Listeners are single functions that accept gesture-specific data and have a void return type. They are called whenever a recognizer's state machine updates its internal state. A listener should be registered after calling `init_gesturelib()`.
Example:

```c
// main.c
#include <stdio.h>

#include <gesturelib.h>
#include <tap.h>

void tap_listener(const tap_t* event) {
    if (event->type == RECOGNIZER_STATE_COMPLETED) {
        printf("Tap received at (%.3f, %.3f)!\n", event->x, event->y);
    }
}

int main(int argc, char* argv[]) {
    init_gesturelib();
    // register the new listener
    set_on_tap(tap_listener);

    // rest of program
    return 0;
}
```
After touch data has been transformed into a `touch_event_t` and sent to our library, the library performs some additional preprocessing. If the event has its group set to `TOUCH_GROUP_UNDEFINED`, the library determines which touch group it belongs to. If the device provides a touch group, the library will not assign one.
The touch group represents the finger a touch event was made by. That is, touch group 0 corresponds to events created by the first finger pressed, 1 to the second, 2 to the third, and so on.
Touch group assignment is determined by the event type.
ℹ️ Group assignment ensures that fingers generate the same group as long as they're in contact with the touch device.
After the preprocessing has finished, a touch event is sent to every enabled recognizer in the order in which they were added to the library.
Gesture recognizers are built like state machines. They receive touch events and update their state. When the state is updated, they call on the registered event listener, if applicable.
Built-in single-finger gesture recognizers save data about every possible touch group that could be performing the gesture they recognize.
Built-in multi-finger recognizers are more complicated and store data about every possible group for every possible user id. The user id is set by the data adapter and could be determined by factors like which device received the touch or where on the screen the touch was received.
⚠️ All touch events with the same uid will be considered as part of the same multi-finger gesture for recognition purposes.
Gesture recognition starts with a base gesture: stroke. All other gestures are recognized by composing strokes and other composite gestures and performing additional processing on them.

Stroke is a simple gesture with a simple state machine. The state updates are less important than the data that stroke collects over the lifetime of a touch, such as its position and velocity. When creating more complicated gestures, having access to this data can be incredibly useful.
Multistroke is the multi-finger counterpart to stroke. All strokes with the same user id get grouped into the same multistroke. The first down event starts a multistroke, and the last up event for the user id ends the gesture. In addition to the information contained in each stroke, a multistroke also tracks data about the set of strokes as a whole.
To perform a tap, press down and release within a short time and without moving too much.
Tap is a simple gesture that contains information about where and when the tap was started and released. If the time between start and release is too long or the distance too great, the tap will fail.
To perform a double-tap, tap twice in close succession.
Double-tap stores the same information as a tap.
To perform a hold, press down for a longer amount of time before releasing.
Hold stores the same information as a tap.
To perform a drag, press down and move your finger across the screen.
Drag tracks starting position, current position, and current velocity. Current velocity is retrieved in the same fashion as stroke.
Like multistroke, multidrag is a multi-finger counterpart to drag. The same logic that applies to multistroke applies to multidrag. It stores the same information as multistroke, but has a slightly different state machine and property calculations.
Multidrag is used for processing zooms and rotates.
To perform a zoom, press down with at least two fingers and move them closer together or farther apart.
Zoom tracks how many fingers are involved in the gesture and an estimated zoom factor.
To perform a rotation, press down with at least two fingers and revolve them around a common center point.
Rotate tracks how many fingers are involved in the gesture and an estimated rotation amount.
In summary:

- Register a listener for a gesture with `set_on_<gesturetype>()`.
- Synchronously read recognized gesture data with `get_<gesturetype>()`.
- If your device does not assign touch groups, set an event's group to `TOUCH_GROUP_UNDEFINED` and the library will assign one.
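As a final illustration of the listener pattern applied to the multi-finger gestures, here is a sketch that registers for zoom and rotate updates. The `set_on_zoom`/`set_on_rotate` names follow the `set_on_<gesturetype>()` convention above, and the header, type, and field names are assumptions for illustration:

```c
#include <stdio.h>

#include <gesturelib.h>
#include <zoom.h>    // assumed headers, following <tap.h> and <drag.h>
#include <rotate.h>

// Assumed field names: the text above only says zoom tracks an estimated
// zoom factor and rotate tracks an estimated rotation amount.
void zoom_listener(const zoom_t* event) {
    printf("Zoom factor: %.3f\n", event->zoom);
}

void rotate_listener(const rotate_t* event) {
    printf("Rotation: %.3f\n", event->rotation);
}

int main(void) {
    init_gesturelib();
    set_on_zoom(zoom_listener);      // assumed, per set_on_<gesturetype>()
    set_on_rotate(rotate_listener);
    // rest of program
    return 0;
}
```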
- Russell Newton 💻 📖 🚇 🚧
- Wenjun Wang 💻 📖 🚇 ⚠️
- jrdike 💻 📖 ⚠️
- Iftekherul Karim 💻 📖 ⚠️
- deborahsrcho 💻 🎨 🖋