maine-imre / handwaver

A gesture-based mathematical making environment from the University of Maine
http://www.imrelab.org

Embodied User Input Overhaul #14

Closed: camden-bock closed this issue 5 years ago

camden-bock commented 5 years ago

Is your feature request related to a problem? Please describe.
When we are developing for LeapMotion Orion, we have three problems:

1. We often use similar gestures for multiple applications but re-implement the gesture each time.
2. We leave OSVR Control overrides behind to clean up later.
3. We need a universal system for visual and audio feedback for control use.

InteractionBehaviours for general movement already support OSVR controllers.

Describe the solution you'd like
We want our abstraction to have three layers:

- A base layer that lays the infrastructure for connecting the left hand to the left OSVR controller, etc., plus button access for OSVR and basic PUN RPC support.
- A middle layer that defines gestures in terms of both the LeapHands and the OSVR controllers. This layer would also handle audio and visual feedback so that it is consistent (e.g. an open palm gets the same color feedback and the same audio feedback in any context).
- A top layer that extends the middle-layer class for each specific implementation, sending the specific RPCs and interacting with the kernel or other tools as needed.

For example:

IMRE_OneHandedGesture --> IMRE_Point --> PointToSelect
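
To make the layering concrete, here is a minimal sketch of how those three classes might relate. The names follow the example above; everything else is illustrative, not the final API.

```csharp
using UnityEngine;

// Base layer: owns the LeapHand / OSVR controller handles, button access, and PUN hooks.
public abstract class IMRE_OneHandedGesture : MonoBehaviour
{
    // Middle-layer hook: true while the pose is detected, whether the data
    // came from a LeapHand or from an OSVR controller.
    protected abstract bool GestureDetected();

    // Middle layer also owns feedback, so an open palm (for example) gets the
    // same color and audio cues in every context.
    protected virtual void PlayFeedback() { }

    // Top-layer hook: the concrete behaviour (send RPCs, talk to the kernel, ...).
    protected abstract void OnGesture();

    private void Update()
    {
        if (GestureDetected())
        {
            PlayFeedback();
            OnGesture();
        }
    }
}

// Middle layer: defines what "pointing" means for both input devices.
public abstract class IMRE_Point : IMRE_OneHandedGesture
{
    protected override bool GestureDetected()
    {
        // e.g. index finger extended (Leap) or trigger held (OSVR); placeholder here
        return false;
    }
}

// Top layer: a specific implementation.
public class PointToSelect : IMRE_Point
{
    protected override void OnGesture()
    {
        // raycast along the pointing finger and select whatever it hits,
        // sending a PUN RPC if the scene is networked
    }
}
```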

Additional context

camden-bock commented 5 years ago

Move this to the gesture folder

https://github.com/maine-imre/handwaver/blob/f81a51fd47dd95e13c2a1e360121533b19c755d8/Assets/Scripts/pointingGesture.cs

camden-bock commented 5 years ago

Move this to the gesture folder

https://github.com/maine-imre/handwaver/blob/f81a51fd47dd95e13c2a1e360121533b19c755d8/Assets/Scripts/RSDES_GestureArcs.cs

camden-bock commented 5 years ago

Move this to the Gesture Folder

https://github.com/maine-imre/handwaver/blob/f81a51fd47dd95e13c2a1e360121533b19c755d8/Assets/Scripts/worldScaleModifier.cs

camden-bock commented 5 years ago

The base layer is created; it still requires PUN support.

Joey-Haney commented 5 years ago

The gesture implementations currently in the Gesture folder ("AxisSnapGesture.cs", "AxisSPinGesture.cs", "HWSelectGesture.CS", "LatticeLandEraseGesture.cs", and "LatticeLandPoint.cs") are all old implementations and will be deleted once the new gesture system is complete.

Joey-Haney commented 5 years ago

Potential design: replace some pin functionality and the pin menu with gestures within Geometer's planetarium:

Placing a pin on the earth will be replaced by a PointAtGesture, which places a pin at the location where the finger executing the gesture intersects the surface of the earth.

The pin's menu system can have all of its button functionality replaced with gestures as well: latitude and longitude lines can be activated or deactivated with a swipe through the pin in their general direction, with the hand oriented so that the fingers are roughly parallel to the lines.

Tangent planes can be made with a swipe gesture with the palm facing the pin, roughly aligned with the plane the tangent plane would occupy when visible.

Light rays can be activated by pinching the pin, pinching with the other hand in empty space, and then stretching. With a tolerance applied so that the required stretch allows for intentional mode switches, the light rays could be cycled through, much like scrolling through options on a website with a scroll wheel.

camden-bock commented 5 years ago

@Joey-Haney @reneyost123 @Vincent will work to get an initial set of gestures refactored and debugged. @Joey-Haney can provide the initial structure of each of these methods. All work is on the Feature/AbstractGesture branch.

@reneyost123 You'll work with Joey on this to get the following interactions debugged in the dynamic earth scene.

@Vincent You'll work with Joey to get gestures up and running in HandWaver.

Update the checklist above as you complete individual items, and comment with progress as you go.

camden-bock commented 5 years ago

Compile errors have been fixed and we've merged in the bugfixes from the develop branch.

Point to select is ready for testing in the HandWaver base scene.

camden-bock commented 5 years ago

I'm going to mix things up here.

I want to abstract our input system to free us from LeapMotion.

In particular, I want to make two structs: one for a generic hand and one for a tracked body (the hand would be part of the body). We can then write all of our gestures with these structs as input and have our own "InputManager" that switches the input data source, rather than accommodating multiple data sources within each gesture file.
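
A minimal sketch of what those structs might look like, with illustrative field names (not a final design):

```csharp
using UnityEngine;

// Device-agnostic input data: gestures read these, never LeapMotion types directly.
public struct GenericFinger
{
    public Vector3 Tip;
    public Vector3 Direction;
    public bool IsExtended;
    public float PinchStrength;   // meaningful for thumb/index pairs
}

public struct GenericHand
{
    public bool IsTracked;
    public Vector3 PalmPosition;
    public Vector3 PalmNormal;
    public Vector3 Velocity;
    public GenericFinger[] Fingers;   // thumb .. pinky
}

public struct TrackedBody
{
    public Vector3 HeadPosition;
    public Quaternion HeadRotation;
    public GenericHand LeftHand;      // the hands are part of the body
    public GenericHand RightHand;
}

// The "InputManager" fills a TrackedBody each frame from whichever data
// source is active (LeapMotion, OSVR controllers, ...).
public interface IBodyInputSource
{
    TrackedBody GetBody();
}
```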

camden-bock commented 5 years ago

We should also note the accessibility constraints of body tracking.

camden-bock commented 5 years ago

Add to ToDo - RSDES Gesture Arc

camden-bock commented 5 years ago

Massive Gesture Cleaning happening now - sorry

camden-bock commented 5 years ago

Add to ToDo - pinch to drag / grasp.

camden-bock commented 5 years ago

Where possible, everything has been added to the new gesture system.

Things to note: buttons are not working in the RSDES scene; they have been commented out of the related scripts.

We don't currently have a concept of IsGrasping. I would suggest that we develop a gesture that triggers grasp and then search a list of gestures to see if anything is invoking grasp (one possible shape is sketched at the end of this comment).

We still need to connect data providers to drive the system. If @VinDangle or @reneyost123 are eager for a challenge over the break that might be fun.
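
For the IsGrasping suggestion above, a minimal sketch of one possible shape (names are illustrative, not a decided API):

```csharp
using System.Collections.Generic;
using System.Linq;

// Gestures that can grasp register themselves; anything that needs to know
// whether a grasp is active asks the registry.
public interface IGraspGesture
{
    bool IsInvokingGrasp { get; }   // true while the gesture is holding something
}

public static class GraspRegistry
{
    private static readonly List<IGraspGesture> gestures = new List<IGraspGesture>();

    public static void Register(IGraspGesture g)   => gestures.Add(g);
    public static void Unregister(IGraspGesture g) => gestures.Remove(g);

    // "Search a list of gestures to see if anything is invoking grasp."
    public static bool IsGrasping => gestures.Any(g => g.IsInvokingGrasp);
}
```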

Joey-Haney commented 5 years ago

Where are these functionalities now?

Sandbox

- Double Pinch to Scale (refactor)  --> analog (WorldScaleModifier in dev)
- Double Pinch to Stretch (refactor) --> **new functionality, moving from LM system, take out of MasterGeoObj (remove update check in MGO and call MGO abstract fn stretch())**
- Erase

Space

- Point to select a location on earth --> drafted
- Open Palm Push [Earth]  --> **new functionality**
- Swipe to Rotate Earth --> **new functionality**
- Pointing to Select Location on Earth  --> drafted
- Swipe gestures at pin in plane to toggle  --> all call functions on RSDESPin, no existing gesture support.  **Consider using switch**
    - tangent plane, --> toggleHorizonPlane
    - terminator, --> toggleTerminator
    - latitude, --> ToggleLat, ToggleAzimuth
    - longitude --> ToggleLong, ToggleAltitude
- Double Pinch and Stretch to cycle starlight at pin  --> RSDESPin, ToggleStarField
- Double Pinch to Scale (refactor), special case for RSDES (integrate with RSDES Manager)

Lattice Land

- Point to Trace in Lattice Land --> analogous in dev branch
    - Trace and Fill polygon in LatticeLand --> analogous in dev branch
- Swipe to delete lattice land --> analogous in dev branch

camden-bock commented 5 years ago

Thoughts on grasp - a dynamic system for grasping figures. We should implement this for pointat as well.

[A] Tolerances for pinch strength. The pinch strength required to start a grasp is higher than the strength required to maintain it (that is, once grasping, it takes a deliberate drop in pinch strength to release). The pinch strength tolerance might dynamically update with velocity.

[B] Consider the two objects (of a given type) that are closest to your fingertips: object A is a meters away and object B is b meters away, with A the closer of the two.

We have three tolerances that are ratios: AboutEqualTolMin, AboutEqualTolMax, and MuchCloserTol.

We have two tolerances that are fixed distances: OuterRadius and InnerRadius.

We want to determine if we can allow the user to pick up object A inside the outer radius (as if you were using the 'force' to bring it to your hand a short distance away).

So assume a < b and a < OuterRadius

- If a < InnerRadius and AboutEqualTolMin < a/b < AboutEqualTolMax, pick up the lower-dimensional object of A and B. (This is the normal, current behaviour with LM.)
- If a > InnerRadius and a/b < AboutEqualTolMin, pick up A. (This is the normal, current behaviour with LM.)
- If InnerRadius < a < OuterRadius and a/b < MuchCloserTol, pick up A. (Assume user intent and teleport the object to the hand.)
- If InnerRadius < a < OuterRadius and a/b > MuchCloserTol, do nothing. (Normal.)
- If a > OuterRadius, do nothing. (Normal.)
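
A sketch of that decision logic as code, using the tolerance names above with placeholder values. The second case is read here as applying inside the inner radius so that the cases stay disjoint; adjust if that was not the intent.

```csharp
public static class GraspSelection
{
    // Placeholder values; the names come from the comment above.
    const float AboutEqualTolMin = 0.8f;   // a/b above this => A and B about equally close
    const float AboutEqualTolMax = 1.25f;
    const float MuchCloserTol    = 0.5f;   // a/b below this => A is much closer than B
    const float InnerRadius      = 0.05f;  // metres
    const float OuterRadius      = 0.25f;  // metres

    public enum Pickup { None, ObjectA, LowerDimensionalOfAB }

    // a = distance to the closest object A, b = distance to the next closest object B (a < b).
    public static Pickup Decide(float a, float b)
    {
        if (a > OuterRadius) return Pickup.None;          // too far away: normal behaviour

        float ratio = a / b;

        if (a < InnerRadius)
        {
            if (ratio > AboutEqualTolMin && ratio < AboutEqualTolMax)
                return Pickup.LowerDimensionalOfAB;       // about equally close: prefer e.g. a point over a line
            if (ratio < AboutEqualTolMin)
                return Pickup.ObjectA;                    // A clearly closer: current LM behaviour
            return Pickup.None;
        }

        // Between the radii: only "force-pull" A to the hand if it is much closer than B.
        return ratio < MuchCloserTol ? Pickup.ObjectA : Pickup.None;
    }
}
```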

nsgnn commented 5 years ago

Change the gesture for global scale to be similar to the HoloLens scaling interaction: create a square with your thumbs and pointer fingers, then expand and shrink the "rectangle" created by your hands to scale the scene. This can also have a simple visual representation while the gesture is active, which would allow the user to differentiate scaling from expanding a dimension. See me if my description is unclear.
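
A rough sketch of the scaling math under that design, assuming the thumb/index "square" pose is detected elsewhere (names and structure are illustrative):

```csharp
using UnityEngine;

// Scene scale follows the ratio of the current hand separation to the
// separation when the gesture began.
public class RectangleScaleGesture
{
    private float startSeparation;
    private Vector3 startScale;

    public void Begin(Vector3 leftHand, Vector3 rightHand, Transform sceneRoot)
    {
        startSeparation = Vector3.Distance(leftHand, rightHand);
        startScale = sceneRoot.localScale;
    }

    public void UpdateGesture(Vector3 leftHand, Vector3 rightHand, Transform sceneRoot)
    {
        float factor = Vector3.Distance(leftHand, rightHand) / startSeparation;
        sceneRoot.localScale = startScale * factor;
        // Simple visual: draw the rectangle spanned by the two thumb/index corners
        // while the gesture is active, so the user can see scale vs. extrude.
    }
}
```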

camden-bock commented 5 years ago

Note that we may abandon PUN support

camden-bock commented 5 years ago

We are going to rename the systems here.

At the base level, the Body Input system takes sensor data and puts it into a generic struct. At an intermediate layer, the Input Classification system takes body input data and classifies it into gesture types. At the highest layer, the Input Event System listens for input classification events and calls functions to attach functionality to each event. The highest layer interfaces with the GeoGebra Interface class, which makes calls to the GGB server.
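
A minimal sketch of the renamed layers as interfaces, reusing the TrackedBody struct sketched earlier in this thread (names illustrative, not the final API):

```csharp
// Base layer: sensor data -> generic struct.
public interface IBodyInputSystem
{
    TrackedBody CurrentBody { get; }   // TrackedBody: the generic body struct sketched above
}

public enum GestureType { Point, OpenPalmPush, OpenPalmSwipe, Pinch, DoublePinchStretch }

// Intermediate layer: body input data -> gesture classification events.
public interface IInputClassificationSystem
{
    event System.Action<GestureType> GestureStarted;
    event System.Action<GestureType> GestureEnded;
}

// Highest layer: listens for classification events and attaches functionality,
// e.g. turning a Point event into a call on the GeoGebra Interface class.
public interface IInputEventSystem
{
    void Subscribe(IInputClassificationSystem classifier);
}
```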

Joey-Haney commented 5 years ago

embodied user input?

Joey-Haney commented 5 years ago

This is the doc that should be updated as things go along. There will be repetition between the comments and this document, but the doc should in theory be more concise and kept up to date over time: https://github.com/maine-imre/handwaver/blob/feature/gesture-abstraction/docs/EmbodiedUserInput/SystemDescription.md

Joey-Haney commented 5 years ago

Chars are not currently supported by the Burst compiler (though they are planned for the future). Strings are also not usable here because they are not blittable. At present this is a problem for our plans for sending commands to the GeoGebra server.
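
One possible workaround, sketched here only as a thought and not a decided design: keep the data that flows through Burst-compiled code blittable (an enum command id plus numeric arguments) and build the actual command string on the main thread. The command text below is a placeholder, not real GeoGebra syntax.

```csharp
public enum GgbCommandType { None = 0, AddPoint, DeleteObject }

// Blittable: only primitive fields, so it can travel through jobs/Burst code.
public struct GgbCommandData
{
    public GgbCommandType Type;
    public float X, Y, Z;        // numeric arguments; unused slots are ignored
    public int TargetIndex;      // index into a main-thread table of object names
}

public static class GgbCommandBuilder
{
    // Runs on the main thread, where strings are fine.
    public static string ToCommandString(GgbCommandData d, string[] objectNames)
    {
        switch (d.Type)
        {
            case GgbCommandType.AddPoint:
                // placeholder command text, not actual GeoGebra syntax
                return string.Format("AddPoint {0} {1} {2}", d.X, d.Y, d.Z);
            case GgbCommandType.DeleteObject:
                return string.Format("Delete {0}", objectNames[d.TargetIndex]);
            default:
                return string.Empty;
        }
    }
}
```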

Joey-Haney commented 5 years ago

Here is a stream of consciousness (I tried to add line breaks to split things up a little and give areas where one might take a breather, but it is still pretty rough) covering things that have been floating around in my mind for the past week or more. Generally these are thoughts on ECS:


Entities hold instances of components; systems operate on entities that possess certain components, adding, removing, or updating component data on those entities.

This also means that components can be used as tags to route entities into certain systems that provide functionality for those related entities. For instance: we have 5 fingers on each hand. To have a system that works on fingers, there could be an empty component called Finger that a system looks for when determining which entities it should work on; then all the entities that system works on are fingers. The same could be true of a system that needs to do something with an entire hand: entities relating to the hand would have an empty Hand component. Maybe there would be two empty components denoting a hand, one for left and one for right, and these would be the filter for the finger example described above.


This leads into my thoughts on a different way of building the ECS system we have now. The current system has one entity for the entire body and uses booleans, nested data, and internal functionality to determine grouped data and some other things. I think we could go to a purer ECS implementation and potentially gain more efficiency. We could have each part of the body be an entity, with empty components determining which entities are grouped together. If there is a need to identify certain entities, we can add additional empty components as labels (for instance, needing to know that a finger is a thumb or an index finger). This could also be achieved by a single component that holds an int corresponding to the indexes we have already defined for each of the fingers.

There were concerns about having all the body data in the same memory chunk; this can be guaranteed by adding a shared component to all the body entities. A shared component lets all the entities it belongs to carry data that is the same across all of them, and the Unity implementation will make sure that they are kept in the same memory block as a result. A memory block is 12 kilobytes, which is something we will need to keep in mind, though it won't be a problem for the foreseeable future.
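
A minimal sketch of the body-parts-as-entities idea with empty tag components and a shared component, assuming the Unity.Entities and Unity.Mathematics packages (names illustrative):

```csharp
using Unity.Entities;
using Unity.Mathematics;

// Empty tag components: presence alone is the information.
public struct Finger : IComponentData { }     // this entity is a finger
public struct LeftHand : IComponentData { }   // this entity belongs to the left hand
public struct RightHand : IComponentData { }  // this entity belongs to the right hand

// Alternative to one tag per finger: a single int label (0 = thumb ... 4 = pinky).
public struct FingerIndex : IComponentData
{
    public int Value;
}

// The actual per-part data.
public struct BodyPartTransform : IComponentData
{
    public float3 Position;
    public quaternion Rotation;
}

// Shared component: entities with the same BodyId value are grouped together
// in memory, addressing the "same chunk" concern above.
public struct BodyId : ISharedComponentData
{
    public int Value;
}

// A system that queries for entities with both Finger and LeftHand sees only
// the left hand's fingers; no booleans or nested data needed.
```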


One problem identified with the current system is that we need to be able to store a string or indexed characters to send a command to GeoGebra. This could potentially be resolved by making the needed calls within a system, based on component conditions, rather than through the action authoring that is in the works at the moment. This needs more thought, though, as I do not understand the GeoGebra side of things and I do not know much about what limitations exist within systems, beyond the fact that certain data types do not play nice with unmanaged memory and the Burst compiler (like chars, which are not supported by Burst "at the moment", wording that indicates the intention to support them in the future).

There have been some examples of ECS use where I have seen systems operating on multiple subsets of entities. The one I have in mind is from Code Monkey's YouTube tutorial, which looks at implementing the ability of units within a game to find a target; the two subsets of entities were units and targets. This is exciting in the sense that gesture implementations will not always use just fingers or hands; they may involve multiple areas of the body. One vision is to detect an outstretched arm with extended fingers and track the body's rotation in its entirety, i.e. the person holds their hand out and spins. This could be used for manipulating figures or rotating along the approximate axis of the body.


The current system has to keep track of when a gesture is activated, cancelled, finished, or ongoing but not activated last frame. These states could be tracked by components. This is also a shift from gestures being components to gestures being systems with component markers. Body entities will hold onto which gestures they are part of through empty components, and systems will look at the relevant body parts to determine whether the gesture should be activated, left ongoing, finished, or cancelled. Finishing or cancelling a gesture removes the empty component from the body part, starting a gesture adds it, and continuing a gesture lets it remain.

The actions that should be taken as a result of a gesture continuing or not will become jobs. There is the question of how to make gesture priorities happen, and I need to look into what you can do with a batch of jobs. If we could go through the jobs and know certain bits of data about them, we could maintain the gesture priorities at the job level. Otherwise we would have to handle priorities within the systems that are creating jobs for gesture actions, which would open up new problems. I have figured out a little about the dependency ordering that happens automatically within Unity's entity management, but I need to read into this more to see whether it offers anything for our own priority needs.

An example of where our priority comes in: if you are trying to grasp something but it also looks like you could be pointing at something, we want to give priority to the grasping.
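
A sketch of the lifecycle-as-components idea, with illustrative names (not a decided design):

```csharp
using Unity.Entities;

// A gesture is a system; its state on a body part is expressed by tags.
public struct GraspGestureTag : IComponentData { }     // present while the grasp is ongoing

// Lifecycle markers added/removed by the gesture's system each frame.
public struct GestureJustStarted : IComponentData { }  // added the frame the gesture begins
public struct GestureFinished : IComponentData { }     // added when the gesture completes
public struct GestureCancelled : IComponentData { }    // added when the gesture is aborted

// One possible handle for the priority question in the example above:
// when two gestures are plausible at once (grasp vs. point), the lower value wins.
public struct GesturePriority : IComponentData
{
    public int Value;   // e.g. grasp = 0, point = 1
}
```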

Joey-Haney commented 5 years ago

The debugging checklist and tables have been tested against the current state of gestures, and several notes were made to improve them:

- Refer to "all fingers not extended" as a fist. This is more efficient language and communicates the same thing less awkwardly.
- Add a state to the list of finger-extension states that specifies all fingers, so that the "some fingers" entry carries less weight in its interpretation.
- The tables that outline the possible combos of speed, direction, and fingers for open palm push and open palm swipe can be combined, reducing the actions needed to test these by 27 instructions for a tester (50% less, since there were previously two tables with the same data and only slightly different intended columns).

Joey-Haney commented 5 years ago

In trying to get push and swipe working with Vive Pro gesture recognition, I have determined that setting the speed tolerance to 0.36 makes it activate with any motion of an open palm, while 0.37 makes it never activate (in this case, push). Previously, with the tolerances only being used with Leap Motion, the speed tolerance was 0.5.

Joey-Haney commented 5 years ago

I have been forgetting to reference the issue in commits. I will make an effort to remember now that I have been reminded.

camden-bock commented 5 years ago

Here's a list of the gesture implementations that were requested and the issues they've been mapped to.

These are the gestures that need implementing.

Joey-Haney commented 5 years ago

Compound gesture thoughts: a way around having a gesture that needs to know the position of each hand both when the gesture is activated and throughout the gesture. This would be activated as a two-handed gesture, and the data would be pulled from the individual one-handed gestures into a separate object that is created at the beginning of the gesture and destroyed when the gesture is no longer being used.
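
A minimal sketch of that idea, with illustrative names; the holder object is created when the two-handed gesture begins and dropped when it ends:

```csharp
using UnityEngine;

// Shared data object for a compound (two-handed) gesture.
public class CompoundGestureData
{
    public Vector3 LeftHandStart;
    public Vector3 RightHandStart;
    public Vector3 LeftHandCurrent;
    public Vector3 RightHandCurrent;
}

public class TwoHandedGesture
{
    private CompoundGestureData data;

    public void Begin(Vector3 left, Vector3 right)
    {
        data = new CompoundGestureData
        {
            LeftHandStart = left,  RightHandStart = right,
            LeftHandCurrent = left, RightHandCurrent = right
        };
    }

    // Each one-handed gesture pushes its hand position here every frame.
    public void UpdateLeft(Vector3 p)  { if (data != null) data.LeftHandCurrent = p; }
    public void UpdateRight(Vector3 p) { if (data != null) data.RightHandCurrent = p; }

    // "Destroyed when the gesture is no longer being used."
    public void End() { data = null; }
}
```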