The spec states the definitions of auxiliary and primary input sources, but it does not provide a mechanism for applications to query whether an XRInputSource supports a primary action.
Is there such a mechanism, and if not, what is the recommended approach
for applications to distinguish between auxiliary and primary input sources?
Use case description:
Hand tracking on Quest OS supports select events, so hands are a "primary input source" there.
Hand tracking on visionOS does not support select events, so hands are an "auxiliary input source" there.
We can emit wrapped events on visionOS based on thumb-index distance, but then we risk sending duplicate events on Quest OS (both the wrapped event and the system event).
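For context, the wrapped event described above could be derived per frame from the distance between the thumb-tip and index-finger-tip joints exposed by the WebXR Hand Input module. This is only a sketch: the `checkPinch` helper, the `emitWrappedSelect` callback, and the 0.015 m threshold are assumptions, not part of any spec.

```javascript
// Sketch: emit a wrapped select when thumb and index tips are close together.
// checkPinch, emitWrappedSelect, and the 0.015 m threshold are hypothetical.
function checkPinch(frame, inputSource, referenceSpace, emitWrappedSelect) {
  const hand = inputSource.hand;
  if (!hand) return; // not a hand-tracking input source

  const thumb = frame.getJointPose(hand.get('thumb-tip'), referenceSpace);
  const index = frame.getJointPose(hand.get('index-finger-tip'), referenceSpace);
  if (!thumb || !index) return; // joints may be untracked this frame

  const dx = thumb.transform.position.x - index.transform.position.x;
  const dy = thumb.transform.position.y - index.transform.position.y;
  const dz = thumb.transform.position.z - index.transform.position.z;

  // Treat a thumb-index distance under ~1.5 cm as a pinch.
  if (Math.hypot(dx, dy, dz) < 0.015) {
    emitWrappedSelect(inputSource);
  }
}
```

On Quest OS this same gesture would also fire the system selectstart/select events, which is exactly the duplication problem described above.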
Potential workaround:
Treat all sources as auxiliary; these potentially emit wrapper events.
Once a source has received a selectstart or squeezestart event, mark it as primary and stop emitting wrapper events.
While this would mostly work, it still risks sending duplicate events the first time.
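The workaround above can be sketched as a small per-source tracker. The `PrimaryTracker` class and its method names are hypothetical, not part of the WebXR API; an application would call `onSystemEvent` from its selectstart/squeezestart handlers and consult `shouldEmitWrapper` before emitting a wrapped event.

```javascript
// Sketch of the workaround: treat every XRInputSource as auxiliary until it
// has fired a system primary-action event. PrimaryTracker is a hypothetical
// application-side helper, not a WebXR API.
class PrimaryTracker {
  constructor() {
    // Sources that have fired selectstart or squeezestart at least once.
    this.primarySources = new Set();
  }

  // Call from the session's selectstart/squeezestart event handlers.
  onSystemEvent(inputSource) {
    this.primarySources.add(inputSource);
  }

  // Call before emitting a wrapped (e.g. pinch-derived) event.
  shouldEmitWrapper(inputSource) {
    return !this.primarySources.has(inputSource);
  }
}
```

Note the residual race this sketch shares with the workaround itself: the very first gesture on a primary-capable runtime can emit a wrapper event before the system event arrives and marks the source as primary.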