Closed getwilde closed 7 years ago
This would be great -- I've been considering this over the last few days as well, as I'd like to offer both options with my main UI.
So for example, could a small collider be attached to the index finger of the hand model and used to trigger Unity UI on collision? Would that be possible, do you think? I suppose it's like being in permanent click mode; there would be no hover state... which is possible with the laser pointer.
Hover state is more-or-less a freebie in Unity GUI, so it might be worth leveraging? I guess it depends on how we think button presses should work...
My personal thought is that it's probably desirable to have the GUI button press occur on trigger press. I know that's contrary to real world, but it essentially eliminates any chance of accidental GUI button presses. In the real world, you can brush against an elevator button without actually pressing it. But what's an equivalent in VR? I really don't want my user moving her hand and accidentally triggering everything she passes through (and then becoming terrified to move her hands lest she hit something).
So... maybe the user puts her index finger on/in/through the GUI button -- as though she's touching it -- and then presses the trigger to press the button. In my mind's eye, that feels appropriate. Incidentally that's also the same essential action as Grabbing and Using in VRTK... And in fact, once that's working, all of those actions could be set to the Trigger button, even further reducing the barrier to entry for non-gamer folks.
Just thinking out loud....
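One way the "touch the button, then pull the trigger" idea above could be sketched (purely illustrative; `FingertipButtonPresser` and the event wiring are assumptions, not existing VRTK components):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: a small trigger collider on the index fingertip
// remembers which UI Button it is overlapping, and the button is only
// clicked when the controller trigger is pulled.
public class FingertipButtonPresser : MonoBehaviour
{
    private Button touchedButton; // the UI button the fingertip is currently inside

    private void OnTriggerEnter(Collider other)
    {
        touchedButton = other.GetComponent<Button>();
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.GetComponent<Button>() == touchedButton)
        {
            touchedButton = null;
        }
    }

    // Wire this to the controller's trigger-pressed event
    // (e.g. the controller events script on the same controller).
    public void OnTriggerPressed()
    {
        if (touchedButton != null)
        {
            // The press only happens on touch + trigger, so brushing past
            // a button never accidentally activates it.
            touchedButton.onClick.Invoke();
        }
    }
}
```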
@getwilde -- your comments make sense. Agree that it may be better to use the controller trigger press to actually initialize the interaction.
Where things start getting a little confusing (for me at least) is, say, you are holding something in your right hand and need to interact with an object you're holding in your left hand. I have this use case in my project now: the user is spraying on the wall and they want to change the color, for example. In this instance, perhaps the laser pointer is still the best option.
I've been thinking more about this interaction. Today Nathie released a review of London Heist, showing some interactions with buttons (GUIs?). Two things I noticed: 1) The hand model is aware that it's close to a button/GUI element and changes pose to finger point, 2) the user presses the trigger to actually interact with the button... ...same way as to grab an object.
I think this concept is brilliant. It's straightforward and elegant, and requires only one button to point/poke/jab UI elements as well as grab objects. I think we'll start to see this mechanic more often.
https://gfycat.com/ResponsibleUnacceptableAmericanbadger https://gfycat.com/CreativeEmbellishedAstrangiacoral
+1 for this type of interaction system.
Would really allow for some more VR-centric interfaces while still leveraging all the strengths of the Unity UI system.
Just found "Thread Studio" by Shopify. They did such a good job with it. Same idea... mesh hand pose changes based on proximity to type of control (ie "neutral" versus "ready to grab" versus "ready to poke"). There's a "hover" or "highlight" state when controller is colliding, but before trigger is pressed. And then to actually grab or poke, the trigger is used.
https://gfycat.com/BareAnyImperialeagle
(Not shown in this GFY, but you can hold the card deck in one hand while you point at it with the other hand. It all feels very natural.)
So essentially, we can say:

- `Hover` and emit `Hover` event - which could then change the state of something like a hand rig.
- `Click` and emit `Click` event

I think it's like this:

- `Adjacent` event which allows state change of something like a hand rig
- `Hover` and emit `Hover` event
- `Click` and emit `Click` event

Incidentally... for those last two, InteractableObject already behaves similarly:

- `Hover` ~= `Highlight`
- `Click` ~= `Grabbing` or `Using`
Yes, this would be very useful. @getwilde's last comment makes sense to me as far as functionality is concerned. Using a customizable-radius OverlapSphere (with an option to set a source transform) for `Adjacent`, or something similar, would probably work well.

Would probably also make sense to add the `Adjacent` functionality to `VRTK_InteractableObject` as well.
PS: OMG YES DO IT NAO!!!!
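A minimal sketch of that OverlapSphere check (the class name and fields are assumptions for illustration, not VRTK API):

```csharp
using UnityEngine;

// Illustrative sketch of the proposed Adjacent check: a customizable-radius
// OverlapSphere around a source transform (defaulting to the controller).
public class AdjacentDetector : MonoBehaviour
{
    public float radius = 0.05f;      // customizable radius, in metres
    public Transform sourceTransform; // optional override for the sphere origin
    public LayerMask targetLayers;    // layers containing UI/interactable colliders

    public bool IsAdjacent()
    {
        Vector3 origin = (sourceTransform != null ? sourceTransform : transform).position;
        // Any collider on the target layers within the radius counts as "adjacent",
        // which is when a hand rig could switch to its "ready to poke" pose.
        return Physics.OverlapSphere(origin, radius, targetLayers).Length > 0;
    }
}
```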
@thestonefox -- are you planning to tackle this feature, any idea if / when you'll get started? Let me know if you need any support... I will if I can.
I am planning on doing it, but I can't say when. Hopefully soon.
The problem with it being a hobby project is I never can guarantee time working on things.
OK, thanks for confirming -- in terms of difficulty, how hard do you think this feature is? Reason I'm asking is that I'm getting close to needing this feature in my current project as well, so am wondering if / how I can help.
This sounds fantastic and pretty much ideal for my anticipated usage case.
If possible could the trigger and hover features (mentioned by @getwilde) be optional? If the hover isn't required, could just touching an object actually action the click event (rather than pressing the trigger as well)? For push button interfaces (think a calculator) that would be a more natural fit for VR first-timers than having to click the trigger too.
One last request is the ability to be able to drag & drop interface elements onto others (with a snap to position mechanic) that fires events too. Is that do-able in this PR?
@tntfoz please raise a different issue for dragging and dropping, as it's a separate thing.
Okay bud shall try that now...
I'm not entirely sure of the best way of doing it yet. I need to put my thinking cap on.
I'm going to try and put some time aside to look at this next week
Thanks @thestonefox. Can't wait!
I've had a bit of a crazy idea. May work, may not...
Currently the canvases get a Collider added to them so they can stop the pointers going through the canvas.
What if two new trigger colliders were added that just listened for the controller entering them? Upon the controller entering the first one, it would turn on the UI pointer raycast (set it to Always On mode).
Then the direction of the ray would naturally select the button, as if you had pressed the activation button down.
A second collider, closer to the button's z level, would then listen for the collision of the controller, and upon that collision it would act the same as pressing the UI Click button (in fact, the second collider could be the existing collider).
So something like this:
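A rough code sketch of that two-collider setup (the class is hypothetical, and `SetRaycastAlwaysOn`/`SimulateClick` are stand-ins for whatever the UI pointer actually exposes):

```csharp
using UnityEngine;

// Hypothetical sketch: one component placed on both trigger colliders.
// The outer zone switches the UI pointer raycast to Always On; the inner
// zone (closer to the canvas surface) acts as the actual click.
public class CanvasProximityZone : MonoBehaviour
{
    public bool isClickZone; // true on the inner (closer to canvas) collider

    private void OnTriggerEnter(Collider other)
    {
        VRTK_UIPointer pointer = other.GetComponentInParent<VRTK_UIPointer>();
        if (pointer == null)
        {
            return;
        }

        if (isClickZone)
        {
            SimulateClick(pointer);      // same as pressing the UI Click button
        }
        else
        {
            SetRaycastAlwaysOn(pointer); // ray now selects as if activated
        }
    }

    // Placeholders for the real UI pointer hooks.
    private void SetRaycastAlwaysOn(VRTK_UIPointer pointer) { /* placeholder */ }
    private void SimulateClick(VRTK_UIPointer pointer) { /* placeholder */ }
}
```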
Interesting idea @thestonefox -- definitely worth a try! Do you think this could work with more complex interaction on UI components, like sliders for example?
Also perhaps worth noting - I've had some pretty weird behaviors / quirks with object colliders and UI recently... so I'm not sure if adding a stack of them would cause Unity to go mental... guess we'll have to try and see. :)
I'm going to change how the UI canvases register for the UI Pointer as well.
At the moment the UI Pointer searches for all valid UI canvases and converts them. This is really limiting, in that all UI canvases get the same options that are set on the UI Pointer.
I'm going to have it so you have to apply a new `VRTK_UICanvas` script to any canvas that you want to interact with; this will then set up the canvas as is done already.

This way you'll only need to add the `VRTK_UICanvas` script to a canvas to make it compatible (at run time too), and to ignore a canvas, you'll just remove the `VRTK_UICanvas` and it will no longer be a valid canvas.
This will also mean that you can choose which canvases are activated by collision, and the distance at which the pointer turns on can be different per canvas.
It also means the Ignore Canvas with Tag or Class option can be removed, because you're explicitly saying which canvases you want to be on or off.
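With that opt-in approach, enabling or disabling UI interaction per canvas at run time is just adding or removing the component (a sketch, assuming `VRTK_UICanvas` behaves as described above):

```csharp
using UnityEngine;

// Illustrative helpers for toggling a canvas's validity at run time.
public static class CanvasInteractionToggle
{
    public static void Enable(Canvas canvas)
    {
        if (canvas.GetComponent<VRTK_UICanvas>() == null)
        {
            // Adding the script makes the canvas a valid interaction target.
            canvas.gameObject.AddComponent<VRTK_UICanvas>();
        }
    }

    public static void Disable(Canvas canvas)
    {
        VRTK_UICanvas uiCanvas = canvas.GetComponent<VRTK_UICanvas>();
        if (uiCanvas != null)
        {
            // Removing it means the pointer no longer treats this as a valid canvas.
            Object.Destroy(uiCanvas);
        }
    }
}
```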
This sounds great.
A couple of months ago, I ran into issues where a canvas wasn't reacting to the UI Pointer. Turned out to be because it wasn't activated at runtime. So I had to manually register it with the UI Pointer. But then there was a timing issue where it couldn't be done in Start and had to be done in a coroutine.
Anyway... it did work, but this sounds much cleaner. :)
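The old workaround described above looked roughly like this (`RegisterCanvas` is a hypothetical stand-in for however the canvas was handed to the pointer; the exact call isn't named in the thread):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the coroutine-based registration workaround: registering in
// Start() directly was too early, so wait a frame first.
public class LateCanvasRegistration : MonoBehaviour
{
    public VRTK_UIPointer uiPointer;
    public Canvas canvas;

    private IEnumerator Start()
    {
        yield return null; // let the canvas finish activating
        RegisterCanvas(uiPointer, canvas);
    }

    // Placeholder for the actual registration call on the UI pointer.
    private void RegisterCanvas(VRTK_UIPointer pointer, Canvas target) { /* placeholder */ }
}
```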
Yeah it's a much better idea. I've implemented it now, I'm just going to push it up on the PR with all the UI stuff.
Updated PR with new UICanvas script
On a related note... there was also an issue where that script added a collider to the canvas, but it was way too thick. (My canvas scale was 1,1,1 so a 10-unit deep collider was enormous). I got around it by adding my own collider so the script didn't have to. Anyway, I wonder if a better approach would be to use one of those "required component" directives and just throw an exception if the developer hadn't added a collider?
Would this be a good approach generally? There have been a few times where I've been surprised by in-game behavior and had to step through code to discover that components were being added automatically. I dunno... I can see pros and cons to both approaches.
Yeah, using `[RequireComponent]` can work better. The problem with the collider is that it needs specific positioning; the way it works is basically, if you haven't added your own, then it adds one and auto-positions it for you.
If you use `RequireComponent` you can't set defaults that the dev can then override.
Which means you'd set a default in the script that would always override the dev's own values, because you don't know whether the collider was added via `RequireComponent` or by hand.
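The add-if-missing pattern being described could be sketched like this (illustrative only; the sizing values are assumptions, chosen to address the "10-unit deep collider" complaint above):

```csharp
using UnityEngine;

// Sketch of the trade-off: instead of [RequireComponent], add a collider
// only when the developer hasn't supplied one, so a dev's own collider
// (with their own size and position) always wins.
public class CanvasColliderSetup : MonoBehaviour
{
    private void Awake()
    {
        if (GetComponent<BoxCollider>() == null)
        {
            RectTransform rect = GetComponent<RectTransform>();
            BoxCollider box = gameObject.AddComponent<BoxCollider>();
            // Auto-size to the canvas rect, with a thin depth so the
            // collider isn't enormous on a canvas scaled to (1, 1, 1).
            box.size = new Vector3(rect.rect.width, rect.rect.height, 0.01f);
        }
        // If a collider already existed, leave it alone entirely.
    }
}
```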
Yes, good points re Required Component versus AddComponent. Thanks.
So I've been testing this PR. A few things I've hit:
What line did you comment out (line 300 doesn't exist for me anymore).
Also, you can now add a custom transform to the UI pointer (and world pointers e.g. simple pointer) that you can determine the position and rotation of the beams coming from the controller.
I need to probably try a capsule cast but my feeling is it will be very chunky and probably cause many mis-presses.
My current thinking is some way of rotating that custom transform in real time to point in the direction you care about (you can do it; it's just a question of how you do it generically to suit all use cases), but if you wanted to program it yourself then you could.
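Programming it yourself could be as simple as aiming the custom transform each frame (a sketch; the aiming logic here is an assumption, the pointer only needs the transform itself):

```csharp
using UnityEngine;

// Illustrative: rotate the pointer-origin transform at run time so the
// beam favours a particular target.
public class PointerOriginAimer : MonoBehaviour
{
    public Transform pointerOrigin; // assigned as the UI pointer's custom transform
    public Transform target;        // whatever the beam should point towards

    private void Update()
    {
        if (target != null && pointerOrigin != null)
        {
            pointerOrigin.rotation =
                Quaternion.LookRotation(target.position - pointerOrigin.position);
        }
    }
}
```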
Here's what I commented out: `controllerRenderModel = VRTK_SDK_Bridge.GetControllerRenderModel(controller.gameObject);`
Re CapsuleCast: I'm not too concerned about mis-presses, but that's because I was very worried about mis-presses (haha) and have chosen to require a trigger pull to Click. (For the same reason, my controllers pass through InteractableObjects, and only Grab or Use on trigger pull.) But yes, I can definitely understand the concern for devs who don't take that approach.
I'm guessing the reason that line failed is `controller` isn't a thing and therefore can't have a `gameObject`.
Strange, it shouldn't fail, unless you don't have a `ControllerEvents` script on the same controller that the UI Pointer is on.
You can replicate the issue in Scene 32. Just add an `EventSystem` to your hierarchy, and add a `VRTK_UIPointer` to Controller (right).
BTW, that custom transform on the raycast is slick.
I have this working. Overall it's really cool, and feels more immersive.
A couple other observations. I don't know how important they are.
Not sure if there are any other use cases for either of these items but I wanted to throw them out there.
Really nice work, @thestonefox.
Item 1 would be great. Item 2 went over my head but that's because I haven't figured out how you implemented this yet. You mean, it's not just pure magic?
Another gotcha I discovered today: Sometimes InteractableObjects need a Poke pose (ie a 3D mesh button, or small object you want to Use but not grab). Likewise, GUI elements need a Grab pose (ie an image to be dragged and dropped). Suggestions? Maybe a small script on objects that essentially says "I'm an InteractableObject but poke-able" or "I'm a GUI element but grabbable"?
UPDATE: Maybe I just check for (IsUsable==true && IsGrabbable==false)? And leverage that new VRTK_UIDraggableItem you created.
UPDATE 2: On InteractableObjects, I suppose it's not safe to assume IsUsable will always equate to a poke. So perhaps a "InteractableObjectPokeableItem" script is best. (And there's probably a better term than Poke and Pokeable, haha.)
You could find out if a UI element is draggable because the on-pointer-enter event tells you the element you've entered; just check that game object for a draggable component.
Perhaps that info could also be put into the event payload
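That check could be sketched like this (illustrative; it assumes the pointer-enter event hands you the entered `GameObject`, and uses the `VRTK_UIDraggableItem` component mentioned earlier in the thread):

```csharp
using UnityEngine;

// Sketch: given the element reported by the pointer-enter event, decide
// whether it is draggable by looking for a draggable component on it.
public static class DraggableCheck
{
    public static bool IsDraggable(GameObject enteredElement)
    {
        return enteredElement != null
            && enteredElement.GetComponent<VRTK_UIDraggableItem>() != null;
    }
}
```

This boolean could then be put into the event payload, as suggested above.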
I mostly have this working now. As mentioned in the Slack channel, it would be super-helpful if UIPointer exposed a `Click` event, which custom controllers could listen for in order to fire "Poke" or "Grab" animations as appropriate. To support both "Grab" and "Release" animations, it would be ideal if UIPointer also exposed `ClickDown` and `ClickUp` events (which happen regardless of the `ClickMethods` enum value).
Thanks for all your work on this @thestonefox.
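How the requested events could drive a hand rig, as a sketch (the `ClickDown`/`ClickUp` hooks are the proposal above, not existing VRTK API, and the animator trigger names are assumptions):

```csharp
using UnityEngine;

// Illustrative: a custom controller listens for the proposed ClickDown and
// ClickUp events and plays matching hand animations.
public class HandPoseOnClick : MonoBehaviour
{
    public Animator handAnimator;

    // Wire to the proposed UIPointer ClickDown event.
    public void OnClickDown()
    {
        handAnimator.SetTrigger("Grab");
    }

    // Wire to the proposed UIPointer ClickUp event.
    public void OnClickUp()
    {
        handAnimator.SetTrigger("Release");
    }
}
```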
I've been through this. Tried to test most everything. I think it's good?
One gotcha that devs might hit with GUI elements:

- With `ClickMethods.Click_On_Button_Down`, a "Grab" animation can be played when Click is raised
- With `ClickMethods.Click_On_Button_Up`, a "Release" animation can be played when Click is raised

If a dev wants to do both, it will require extra work: listening to `AliasUIClickOn` (or Off), determining if a UI element was beneath it, etc.

Maybe devs can use Draggable instead, which does offer Start and End events. Or maybe `UIClickDown` and `UIClickUp` events will be exposed as part of enhancement 686.
At any rate, good work @thestonefox. 👍
@getwilde Would you say the PR is good enough to merge now?
Yep, I think so!
Wow, a bit late (2 years) to the party; just getting on board with the VR development effort. I'd like to be able to touch, press, and interact with GUI elements (sliders, scrollbars, etc.). Example 34 in the VRTK kit just showcases the UI Pointer in action, but not the scenario you guys discussed in this thread. I was able to get a button working by throwing in the Interactable Object script and adding a collider to the button, but I've had no luck getting the slider/scrollbar to work. I'd appreciate any help with a working sample. Thanks
@nasirrehan Discussions like these are better suited to the Slack channel, because you can instantly get answers instead of waiting on here. It's also way easier for troubleshooting in general. GitHub issues are only useful for proper bug reports with steps to reproduce in an example scene, as the Issue Template requires.
Thanks!
Allow users to touch, press, and interact with GUI elements (such as buttons) via controller meshes... as an alternative to laser pointer.
Background info: As discussed on Slack, the emerging trend seems to be that if elements are within arm's length, the user ought to be able to poke at and otherwise interact with them using their finger/hand (or a similar custom controller mesh), rather than a twitchy laser pointer. (I've seen the laser pointer cause confusion in my own usability testing.) Also, it was mentioned that hover events perhaps ought to be triggered when the controller is within a couple of centimeters, and @thestonefox suggested that a spherecast or capsulecast be used around the controller.