Robbendebiene / Gesturefy

Navigate, operate, and browse faster with mouse gestures! A customizable Firefox mouse gesture add-on with a variety of different commands.
https://addons.mozilla.org/firefox/addon/gesturefy/
GNU General Public License v3.0
813 stars · 74 forks

When to support these gestures like ‘↖↗↙↘’ #158

Closed baron0423 closed 6 years ago

baron0423 commented 6 years ago

Up to now, gestures are composed of only four different directions, so my gestures are getting more and more complicated.

uegajde commented 6 years ago

I worry it will increase the accuracy requirements of gestures. For example, I sometimes draw "RDL" when what I actually want is "RL".

Robbendebiene commented 6 years ago

@uegajde Yep, that's the reason. I tried implementing it at the beginning, but it's not convenient if you have more than one direction (like RL or even more). Btw. you might be able to fix this by increasing the gesture sensitivity, but that has some side effects.

Celelibi commented 6 years ago

Enabling the diagonals could be an option. For those willing to make their gestures more accurate.

ayuanx commented 6 years ago

Diagonal gestures would be very useful. Just set tighter constraints on the diagonals, e.g. if (30° < angle < 60°) then diagonal.
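For illustration, the suggested tighter angular band could look like this (my own sketch, not code from Gesturefy; the 30°–60° band and direction names are assumptions):

```javascript
// Map a movement vector to one of 8 directions, giving the diagonals a
// tighter angular band (30°–60° within each quadrant) than the cardinals.
function directionFromVector(dx, dy) {
  // Screen coordinates: y grows downwards, so negate dy to get a
  // conventional mathematical angle in degrees [0, 360).
  let angle = Math.atan2(-dy, dx) * 180 / Math.PI;
  if (angle < 0) angle += 360;

  // Diagonal bands: 30°–60°, 120°–150°, 210°–240°, 300°–330°.
  const sector = angle % 90;
  if (sector > 30 && sector < 60) {
    if (angle < 90) return "UR";
    if (angle < 180) return "UL";
    if (angle < 270) return "DL";
    return "DR";
  }

  // Everything else snaps to the nearest cardinal direction.
  if (angle >= 315 || angle < 45) return "R";
  if (angle < 135) return "U";
  if (angle < 225) return "L";
  return "D";
}
```

With this band, a stroke at 40° counts as a diagonal, while one at 25° still snaps to "R".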

Robbendebiene commented 6 years ago

I temporarily reimplemented diagonal directions and ran into the same problems as when I first tried implementing them, especially when drawing curved lines or direction combinations like "Down" > "Right", which then often get recognized as "Down" > "Down Right" > "Right". Therefore I will not add them.

ayuanx commented 6 years ago

I am afraid you have to improve your algorithm, a lot. Here is an example written as an AutoHotkey script that supports both horizontal/vertical stroke combinations and diagonal stroke combinations: http://lukewarm.s101.xrea.com/myscripts/mousegesture/index.html Since it is written in AutoHotkey script, it should be very easy to read. I think it could be a good reference for you.

PS: There are a lot of similar projects derived from the above one: https://autohotkey.com/board/topic/77584-mousegesturel/ https://autohotkey.com/boards/viewtopic.php?t=31859 http://hp.vector.co.jp/authors/VA018351/mglahk.html

Robbendebiene commented 6 years ago

@ayuanx Thanks, but I do not speak a single word of Japanese.

I am afraid you have to improve your algorithm, a lot.

Feel free to teach me a better one, I'm always happy to learn something new.

ayuanx commented 6 years ago

It doesn't really matter; the code is written in plain script, so you don't really need to understand a single word of Japanese. In fact, I've got an English-translated version myself: https://github.com/ayuanx/AutoHotKey_MouseGesture

Celelibi commented 6 years ago

@Robbendebiene maybe all that's needed is a minimum segment length, or a minimum time spent drawing a segment before taking its direction into account.

If that is not enough, I can propose something: if enough people are willing to participate, we could set up a web page and ask people to draw some random given gesture and gather the data of what they drew. Then we can do some statistics / machine learning to tune the algorithm. And just for exploratory purpose, I would add some circular motions.

I can offer myself for the data analysis, since machine learning is a big part of my current job (although I'm not an expert and would be glad to leave it to an actual one).

I guess the bottom line is that I think you declined the idea a bit too fast. Maybe just put it on hold until someone comes up with a good way to do it?

Robbendebiene commented 6 years ago

@Celelibi

maybe all that's needed is a minimum segment length

There already is a minimum adjustable segment length, but I don't like this approach because it forces the user to draw the gesture in a specific size.

time spent drawing it before taking the direction into account.

Same point of criticism as the segment size approach.

If that is not enough, I can propose something: if enough people are willing to participate, we could set up a web page and ask people to draw some random given gesture and gather the data of what they drew. Then we can do some statistics / machine learning to tune the algorithm. And just for exploratory purpose, I would add some circular motions.

I came across the machine learning idea too, but it's outside of my current abilities (not to mention my lack of time). Also, if I were to implement it, Gesturefy should learn the gestures by itself, for example through a short training session by the current user after installation, or over time.

I guess the bottom line is that I think you declined the idea a bit too fast. Maybe just put it on hold until someone come up with a good way to do it?

You are right, but this is a feature which can only be released in a major version (e.g. Gesturefy 2.0), because it necessarily breaks a lot of gestures users may currently use. For example, the default reload page gesture can be drawn as a circle and is encoded as LDR, which would then be decoded into some diagonal directions. And adding diagonal gestures as an extra setting/option is not up for debate. But as I said, I'm still open to ideas.

Celelibi commented 6 years ago

I came across the machine learning idea too, but it's outside of my current abilities (not mentioning my lack of time). Also if I would implement it, Gesturefy should learn the gestures by itself. For example by a short training by the current user after the installation, or over time.

Training directly from the user is one big additional step. It would mean embedding the whole learning system into the extension. It would also have to be decided which parameter(s) to adjust. Bear in mind that machine learning usually needs about 10 to 50 training examples per parameter. Choose them carefully if you don't want to bore your users to death while still keeping a fully functional add-on.

On the other hand machine learning could be used just to make a good algorithm for translating a gesture to a sequence of directions. Or possibly to find what recorded gesture is the closest to the just drawn gesture. The latter has the drawback that an ambiguous beginning displayed on-screen can be completely changed later in the drawing of the gesture.

Here are some questions I would like to answer with the data of many people's gestures. (As the data is explored, more questions will definitely arise.)

About single segment gestures:

About multi-segment gestures:

About circular motions:

From those statistics it could probably be decided whether a hand-crafted algorithm can be devised and tuned. If not, they may help decide whether we should use a decision tree, an SVM, a recurrent neural network, or something else, and how. They may also help decide whether learning from the user is worth it, and if so, what should be learned.

And to add diagonal gestures as an extra setting/option is not up for debate.

Curious stance, but ok. It's your addon.

SebastianZ commented 6 years ago

Mouse Gestures Redox allowed diagonal gestures and it worked quite well back then. So, maybe have a look at how they did it?

A typical use-case for diagonal gestures would be zoom in/out via ↘ and ↖.

Sebastian

SebastianZ commented 6 years ago

@Robbendebiene I tried myself on a gesture recognition including diagonal directions which uses a relatively simple correction algorithm based on segment length:

https://jsfiddle.net/SebastianZ/f4u28d9v/

I think it handles errors sufficiently well. Feel free to implement it in your extension.

Sebastian

Itchiii commented 6 years ago

@SebastianZ Thanks a lot for your implementation. There is, understandably, a small glitch when drawing at under 30 degrees: the track then looks something like ↗→↗→↗→↗→, which may trigger unwanted commands. We will take a closer look in the next days/weeks. (Robbendebiene has no time at the moment.)

SebastianZ commented 6 years ago

@Itchiii:

@SebastianZ Thanks a lot for your implementation. There is understandably a small mistake when drawing under 30 degrees. Then the track looks something like this: ↗→↗→↗→↗→.

Yes, at 22.5 degrees. Gesturefy obviously has the same problem at 45 degrees. It's hard to handle correctly, because it's right at the border between two directions.

One solution I can think of is to detect when the angle is at the border between two directions and dynamically increase the tolerance for the first recognized direction a bit, to avoid toggling between the two. I.e. instead of 45 degrees, the current direction would take up e.g. 50 or 55 degrees. Though that may make it harder to recognize gestures like ↗→.
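That dynamic-tolerance idea is essentially hysteresis; a rough sketch (my own illustration, not taken from Gesturefy or the JSFiddle; the 10° extra tolerance is an assumed value):

```javascript
// Four cardinal sectors centered at 0°, 90°, 180°, 270°. The currently
// active direction gets a widened sector (HALF_SECTOR + EXTRA) so small
// wobbles around the 45° borders don't toggle between two directions.
const DIRECTIONS = ["R", "U", "L", "D"];
const HALF_SECTOR = 45; // normal half-width of a cardinal sector
const EXTRA = 10;       // extra tolerance for the active direction (a guess)

function classify(angle, current) {
  // Smallest angular distance between `angle` and a sector center.
  const dist = (center) => {
    let d = Math.abs(((angle - center) % 360 + 360) % 360);
    return d > 180 ? 360 - d : d;
  };
  // Check the active direction first, with its widened sector.
  if (current) {
    const center = DIRECTIONS.indexOf(current) * 90;
    if (dist(center) < HALF_SECTOR + EXTRA) return current;
  }
  for (let i = 0; i < DIRECTIONS.length; i++) {
    if (dist(i * 90) < HALF_SECTOR) return DIRECTIONS[i];
  }
  return current; // borderline angle: keep whatever was active
}
```

An angle of 50° then stays "R" while "R" is active, but classifies as "U" from a cold start, which is exactly the sticky behavior described, at the cost of slightly delaying the switch in gestures like ↗→.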

Sebastian