microsoft / MRDesignLabs_Unity_Tools

This repository contains the tools leveraged for our Mixed Reality Design Lab examples in Unity.
MIT License
61 stars · 30 forks

InputEventHandling #12

Open FlorianJa opened 7 years ago

FlorianJa commented 7 years ago

I have a question: why does MRDesignLab use a different input event handling system than the HoloToolkit? I am trying to combine parts of both repos, and it does not work without adding both InputManagers. Adding two InputManagers feels weird/wrong.

For example: I have an object with a bounding box (MRDL) and a separate slider (HTK). Without adding the InputManager from the HTK, the ManipulationEvent is not forwarded to the slider.

paseb commented 7 years ago

Hi @FlorianJa,

At some point in the near future we will be unifying both input systems, though that will require some refactoring to make them more extensible. We wrote our input stack long before HTK implemented its extensible input stack, and ours is geared much more towards abstracting any input for easy inclusion of new sources, as well as adding support for multiple focusers. We are working to get a unified input stack into HTK that also supports MRDL and our input requirements.

So long story short we are working to have a unified system :).

thanks, -pat

FlorianJa commented 7 years ago

Thank you for the quick answer and your awesome work.

StephenHodgson commented 7 years ago

What other input requirements does MRDL have? I wouldn't mind looking into making these changes in the HTK as a preemptive move to help facilitate adoption.

paseb commented 7 years ago

Hi @StephenHodgson,

As always, thanks for your input!

Here are the things off the top of my head:

    public struct InteractionEventArgs
    {
        /// <summary>
        /// The focuser that triggered this event.
        /// </summary>
        public readonly AFocuser Focuser;

        /// <summary>
        /// The event position; may be absolute or relative (see IsPosRelative).
        /// </summary>
        public readonly Vector3 Position;

        /// <summary>
        /// Whether Position is relative rather than an absolute world position.
        /// </summary>
        public readonly bool IsPosRelative;

        /// <summary>
        /// The gaze ray at the time of the event.
        /// </summary>
        public readonly Ray GazeRay;

        public InteractionEventArgs(AFocuser focuser, Vector3 pos, bool isRelative, Ray gazeRay)
        {
            this.Focuser = focuser;
            this.Position = pos;
            this.IsPosRelative = isRelative;
            this.GazeRay = gazeRay;
        }
    }
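For illustration, a consumer of this struct might resolve the event position like so. This is only a sketch: `Vec3` is a tiny stand-in for `UnityEngine.Vector3`, and `ResolvePosition` is a hypothetical helper, not part of MRDL.

```csharp
using System;

// Stand-in for UnityEngine.Vector3, just enough for this sketch.
struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
    public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
}

static class InteractionSketch
{
    // Hypothetical helper: if the event position is flagged as relative,
    // treat it as an offset from the interacted object's own position;
    // otherwise treat it as an absolute world position.
    public static Vec3 ResolvePosition(Vec3 objectPos, Vec3 eventPos, bool isPosRelative)
        => isPosRelative ? objectPos + eventPos : eventPos;
}
```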

There are a couple of differences between the HTK InputManager and the MRDL InteractionManager that could be easily reconciled. Namely, InteractionManager lets you subscribe globally to specific events, whereas with InputManager you add a global listener and then implement only the callbacks for the events you want. We've also abstracted an InputShell that can take a mapping to common event endpoints, so that you can support input failover between sources.
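The two subscription models can be sketched minimally as follows. The names here are illustrative only, not the actual MRDL/HTK APIs: the first style exposes a global event per event type that you subscribe to directly; the second registers a global listener object and dispatches to whichever handler interfaces it implements.

```csharp
using System;
using System.Collections.Generic;

// MRDL InteractionManager style (sketch): subscribe globally to a specific event.
static class InteractionManagerSketch
{
    public static event Action<string> OnTapped;
    public static void RaiseTapped(string target) => OnTapped?.Invoke(target);
}

// HTK InputManager style (sketch): add a global listener object, then implement
// only the handler interfaces for the events you care about.
interface ITapHandler { void OnTap(string target); }

class InputManagerSketch
{
    readonly List<object> listeners = new List<object>();
    public void AddGlobalListener(object listener) => listeners.Add(listener);
    public void RaiseTap(string target)
    {
        // Dispatch only to listeners that implement the matching interface.
        foreach (var l in listeners)
            if (l is ITapHandler h) h.OnTap(target);
    }
}

// Example listener: implements only the tap callback.
class TapLogger : ITapHandler
{
    public string Last;
    public void OnTap(string target) => Last = target;
}
```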

I'm currently working on a much more comprehensive comparison and tech design spec for reconciling the differences. As I mentioned, we're actively working towards a more unified future :).

thanks, -pat

StephenHodgson commented 7 years ago

@paseb, what's the advantage to multiple focusers? Can you give an example of when to use one?

paseb commented 7 years ago

@StephenHodgson , yeah certainly.

The primary example is 6DoF controllers. Each controller is a focuser and can have independent interactions with objects. We also find it's important to have a focuser that is considered the gaze focuser. This is helpful for targeting voice interactions at whatever your gaze is on, vs. using a 6DoF controller to point at something and then issuing voice commands.

The designer can use these inputs in conjunction or independently. Also, if I have a right and a left controller and pick something up with one controller, I don't want a button release on the other to drop it.

The other consideration is multi-user scenarios, where the focusers for different individuals are visible and need to have discrete, independent interactions.
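The grab-with-one-controller, don't-drop-on-the-other behavior described above can be sketched by keying the grab state on the focuser. Again, the names are illustrative stand-ins, not MRDL types.

```csharp
using System;

// Stand-in for a focuser identity (gaze, left controller, right controller, ...).
class Focuser
{
    public string Name;
    public Focuser(string name) { Name = name; }
}

// An object that can be held by exactly one focuser at a time.
class Grabbable
{
    Focuser holder;

    public bool IsHeld => holder != null;

    public void OnGrab(Focuser focuser)
    {
        if (holder == null) holder = focuser;  // first focuser to grab wins
    }

    public void OnRelease(Focuser focuser)
    {
        // Only a release from the focuser that grabbed the object drops it;
        // a button release on any other focuser is ignored.
        if (focuser == holder) holder = null;
    }
}
```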

thanks, -pat

paseb commented 7 years ago

@StephenHodgson do you guys already have the MR dev kits?

StephenHodgson commented 7 years ago

I have one, but haven't had the time to set it up with the 2017.2 beta. Also, isn't there a closed Unity beta we have to use? I don't know. A few guys from Valorem should be at the MSFT campus today; I think they're doing the Academy stuff.

Thanks for the explanation, I think that really helps and goes a long way. I know there are lots of upcoming changes in the RS2 branch of the HTK.

paseb commented 7 years ago

Yeah, there's an internal MRTP build that currently seems a lot more stable and functional.

Anyway, the original HUX codebase my team wrote was meant to support prototyping across all inputs and devices. So when you have two Vive, Oculus, or Windows Motion Controllers, as well as your head looking at things, you need multiple focusers for rich interactions. Imagine making Tilt Brush, Fantastic Contraption, or Job Simulator without multiple focusers. We even used Vive controllers over the network on a HoloLens, just like some other awesome people on the interwebs ;).

thanks, -pat