Unity-Technologies / EditorXR

Author XR in XR

Handedness in VRInputDevice is too simplistic #49

Open leith-bartrich opened 7 years ago

leith-bartrich commented 7 years ago

I'm not a lefty, but I do expect my software to work well for lefties.

I should not be restricted to hard-coding an input mapped control to a left or right VRInputDevice.

I should be able to map to "Dominant" and "Non-Dominant" hand as well.

I should be presented with: Any, Left, Right, Dominant and Non-Dominant at a minimum.

You should allow the user in the editor to set their handedness. And in runtime mode, I should be able to set the handedness of the player programmatically such that I can expose it to them via UI.

One would expect Dominant and Non-Dominant maps to switch devices based on a change to the handedness. But that Left and Right will remain the same regardless of the player's handedness.
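To make that expected behavior concrete, here is a minimal sketch in C# of how intent could resolve against the player's handedness. All type and member names here are hypothetical, not existing EditorXR API:

```csharp
public enum HandIntent { Any, Left, Right, Dominant, NonDominant }
public enum Handedness { Righty, Lefty }

public static class HandIntentResolver
{
    // Dominant/NonDominant follow the player's handedness;
    // Any/Left/Right stay fixed regardless of it.
    public static HandIntent Resolve(HandIntent intent, Handedness handedness)
    {
        switch (intent)
        {
            case HandIntent.Dominant:
                return handedness == Handedness.Lefty ? HandIntent.Left : HandIntent.Right;
            case HandIntent.NonDominant:
                return handedness == Handedness.Lefty ? HandIntent.Right : HandIntent.Left;
            default:
                return intent; // Any, Left, Right pass through unchanged
        }
    }
}
```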

While considering this, I'd also weigh other likely future needs.

I'd consider that you're not really tracking a hand at all. In the future, we're likely to have a tracked (anatomical) hand and also a tracked controller in that hand as separate things. Hierarchy and nomenclature should reflect this likely expansion of capability.

I'd consider accessibility here, especially since VR is being adopted in the medical field. Assuming that these devices are attached to hands is probably not safe. Attachment should be explicitly defined in the map rather than assumed by it, even if current VR hardware and SDKs don't allow for this just yet.

I'd also consider tracked objects and controllers that do not correspond to anatomy, or that correspond to alternative anatomy: a tracked keyboard, a tracked keypad on the wrist, etc.

amirebrahimi commented 7 years ago

Thank you for your feedback.

Our initial design accounted for what you are talking about. We haven't locked the design into hands, especially since we can see future devices allowing other body parts, or even non-body parts, to be tracked. There are actually plans to break VRInputDevice out into many different classes. The tags property on the class holds strings, which allows any device to report its nodes however it likes. For practical purposes, we are using left/right because it was a quick way to make use of the input-prototype with minimal effort.

However, you raise a good point about dominant/non-dominant. Would you expect those to be at the Action Map level in addition to left/right or were you thinking that would be at the application level for determining what left/right means in the action map?

leith-bartrich commented 7 years ago

Clarification:

ActionMap holds the data as to intent: Any, Left, Right, Dominant, Non-Dominant.

Player/User exposes user's handedness: Lefty, Righty

Complication: different OSes and platforms will handle handedness differently. Some platforms may leave it up to the application to keep track of a user's/player's handedness; others may make it a system-wide or login-wide preference. So, at the application API level, I probably need to be able to read it, to query whether I am allowed to change it, and to request a change to it, so that I can provide the user UI to change it when appropriate. The input system should respect it. And if I request a change and the underlying system doesn't allow such a request from an application, then my request should result in no change.
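As a rough sketch, an application-level contract for that could look like the following (reusing the hypothetical Handedness enum from the sketch above; none of these members exist in any current platform API):

```csharp
public interface IHandednessProvider
{
    Handedness Current { get; }        // read the user's/player's handedness
    bool CanApplicationChange { get; } // query whether the app may change it

    // Request a change; returns false (and leaves Current untouched)
    // when the platform owns the preference and refuses the request.
    bool RequestChange(Handedness desired);
}
```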

However, I'd suggest thinking more in terms of the way Rewired handles its layers.

https://www.assetstore.unity3d.com/en/#!/content/21676

The layers of note there are:

- Player
- Action - the thing to be done
- Input Behavior
- Controller - the device
- Controller Maps - how to connect the device to the actions
- Map Categories - meta
- Layouts - meta

Of note here is probably the complete separation of Action from Controller, and the very comprehensive auto-assignment settings system by which it handles many different platforms and controller types. Further, the Controller Maps are both settings built into the application and also meant to be programmatically controlled at runtime by the player, such that I can provide pre-defined, platform-specific maps and also allow the user to choose and edit them if I provide UI for them to do so.

The general gist is that the actions are purely abstract, the controller is concrete, and the controller map is what binds them in a logical manner. The auto-assignment logic is cognizant of the fact that there are many platforms, controllers, and maps that need to be intelligently assigned to one another at runtime. Further, custom logic may be required, and provided by me at the application level, to do so.
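A rough sketch of that separation, with all types illustrative rather than Rewired's or EditorXR's actual API:

```csharp
using System.Collections.Generic;

// Purely abstract: "Teleport", "Grab" -- no knowledge of any controller.
public class InputAction
{
    public string name;
}

// One control on a concrete device wired to one abstract action.
public class Binding
{
    public InputAction action;
    public string controlPath; // e.g. the trigger on a specific controller
}

// How a concrete controller type drives the abstract actions;
// at runtime, assignment logic (possibly application-supplied) picks the
// map that fits the detected platform and connected controllers.
public class ControllerMap
{
    public string controllerType;
    public List<Binding> bindings = new List<Binding>();
}
```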

For comparison, what's really strange about your ActionMap system is that I'm creating a single object, an ActionMap, that contains actions, controllers, and controller maps all bound together at once, where really they should probably be separate and bound at runtime. I should be defining the actions separately from the actual description of how a controller activates those actions, and they should be bound at the last minute by intelligent selection based on platform, among other runtime criteria.

I realize I could probably implement MyTool.actionMap.get to switch action maps based on platform. But then again, your system doesn't seem to want me to build multiple ActionMaps for the same purpose, nor does it want me to subclass platform-specific ActionMaps from a platform-agnostic ActionMap.

If such a system were in place, where I could define many ActionMaps that implement the same actions and control how they're applied at runtime, I might be able to handle "handedness" myself the same way I would handle multi-platform controller swapping. But without such a system in place, I can't even put that kludge together.
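For illustration, reusing the hypothetical ControllerMap and Handedness types from the sketches above, the kludge I have in mind would look something like this (again, nothing here is existing API):

```csharp
using System.Collections.Generic;

// A variant of a map: same abstract actions, different selection criteria.
public class ControllerMapVariant
{
    public string platform;
    public Handedness handedness;
    public ControllerMap map;
}

public static class MapSelection
{
    // Application-level selection logic; falls back to the first variant
    // when nothing matches the runtime criteria.
    public static ControllerMap Select(List<ControllerMapVariant> variants,
        string platform, Handedness handedness)
    {
        var match = variants.Find(v => v.platform == platform
            && v.handedness == handedness);
        return match != null ? match.map : variants[0].map;
    }
}
```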