samamou opened 1 month ago
What we have now is the touch-related gestures (rub and brush/swipe) extracted from a capacitive touch array of either discrete values (0 or 1) or continuous normalized values (floats between 0 and 1). The descriptor was made for the T-Stick, which uses one of those capacitive sensing options.
This descriptor uses a simple 1 DoF blob detection algorithm that scans the array and creates blobs from sequences of activated stripes (array positions). This gives us multi versions of those gestures, since it keeps scanning the array and finding all blobs, their sizes, and their positions (mean points). The rub and brush gestures integrate each blob's position to get its "speed", in a value close to the real speed in cm/s. That was fine-tuned based on the distance between stripes and the free values available for the leaky integration (i.e., there is room for improvement).
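For context, here is a minimal sketch of what that could look like, assuming a normalized float array; the names, the activation threshold, and the exact leaky-integration form are made up for illustration, not the actual implementation:

```cpp
#include <vector>

struct Blob {
    int start = 0;         // first activated stripe of the blob
    int size = 0;          // number of contiguous activated stripes
    float position = 0.f;  // mean point (centroid) of the blob, in stripe units
};

// Scan the touch array once and group contiguous activated stripes into blobs.
std::vector<Blob> detectBlobs(const std::vector<float>& touch, float threshold = 0.f) {
    std::vector<Blob> blobs;
    const int n = static_cast<int>(touch.size());
    int i = 0;
    while (i < n) {
        if (touch[i] > threshold) {
            Blob b;
            b.start = i;
            while (i < n && touch[i] > threshold) {  // walk to the end of the run
                ++b.size;
                ++i;
            }
            b.position = b.start + (b.size - 1) / 2.0f;  // mean point of the run
            blobs.push_back(b);
        } else {
            ++i;
        }
    }
    return blobs;
}

// One possible reading of the leaky integration: smooth the blob's position
// change over time to approximate a speed. stripeSpacingCm and leak are the
// "free values" mentioned above.
float integrateSpeed(float prevPosition, float newPosition, float prevSpeed,
                     float stripeSpacingCm, float leak, float dt) {
    float deltaCm = (newPosition - prevPosition) * stripeSpacingCm;
    return leak * prevSpeed + (1.0f - leak) * (deltaCm / dt);  // roughly cm/s
}
```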
What I suggest is to separate blob detection into its own class, and have the brush/rub classes calculate from a single float (position) value.
We can then make classes for multi-brush and multi-rub that rely on blob detection for the specific case of a touch array, as sketched below.
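Something along these lines (class names and interfaces are hypothetical, just to illustrate the split):

```cpp
#include <vector>

// Blob detection becomes its own reusable component.
class BlobDetector {
public:
    struct Blob { float position; int size; };
    std::vector<Blob> update(const std::vector<float>& touchArray);
};

// Brush/rub stop caring about the touch array: they take a single float
// position (a blob centroid, a fader, any 1 DoF input) and return a "speed".
class Brush {
public:
    float update(float position);
};

class Rub {
public:
    float update(float position);
};

// Multi versions combine the two: run BlobDetector on the array, then feed
// each blob's position into its own Brush/Rub instance.
class MultiBrush {
public:
    std::vector<float> update(const std::vector<float>& touchArray);
private:
    BlobDetector detector;
    std::vector<Brush> brushes;  // one instance per tracked blob
};
```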
Adding this issue after a conversation with @edumeneses about improving the `touch` gesture (please elaborate/correct me):

- Right now it involves single touch values: `touchTop`, `touchMiddle`, `touchBottom`, and `touchAll` (designed for the specific use case, in this case the T-Stick).
- Treat `touch` as a single 1 DoF value? Generalize it to handle an array of `touch`es?
- `multiRub` and `multiBrush` are currently initialized as fixed-size arrays of [4]; the logic can be updated to support dynamic arrays (see the sketch after this list).