So far we have been using a fairly complex mechanism to detect the device we are running on and figure out which exact 3D controller models we should render. We are aware that this is not good practice in the OpenXR world, but it's something we inherited from FxR, and since changing it is not trivial, we have been delaying it for quite a while to avoid breaking things.
There are several issues with that approach:
- It goes against the OpenXR philosophy and design: the whole API uses generic concepts that decouple your app from the specific hardware it's running on, so you can focus on the "actions" instead of the exact "buttons" that trigger them (see the sketch after this list).
- It isn't forward compatible: new devices won't be supported by default until the proper detection code is added.
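To make the first point concrete, here is a minimal sketch of OpenXR's action-based input model. The helper name `CreateTriggerAction` and the choice of the `khr/simple_controller` profile are illustrative, not Wolvic code; error handling is omitted for brevity. The app declares an abstract action and suggests per-profile bindings, and the runtime maps it to whatever hardware is actually connected:

```cpp
#include <openxr/openxr.h>
#include <cstring>

// Hypothetical helper: declare an abstract "trigger_click" action and
// suggest a binding for one interaction profile. The runtime resolves
// the action to the connected device; no device detection is needed.
XrAction CreateTriggerAction(XrInstance instance, XrActionSet actionSet) {
    XrActionCreateInfo actionInfo{XR_TYPE_ACTION_CREATE_INFO};
    actionInfo.actionType = XR_ACTION_TYPE_BOOLEAN_INPUT;
    std::strcpy(actionInfo.actionName, "trigger_click");
    std::strcpy(actionInfo.localizedActionName, "Trigger Click");

    XrAction action{XR_NULL_HANDLE};
    xrCreateAction(actionSet, &actionInfo, &action);

    // Suggest where this action lives on one profile; other profiles
    // can be suggested the same way, and the runtime picks the best
    // match for the hardware that is present.
    XrPath profilePath, bindingPath;
    xrStringToPath(instance, "/interaction_profiles/khr/simple_controller",
                   &profilePath);
    xrStringToPath(instance, "/user/hand/right/input/select/click",
                   &bindingPath);

    XrActionSuggestedBinding binding{action, bindingPath};
    XrInteractionProfileSuggestedBinding suggested{
        XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDINGS};
    suggested.interactionProfile = profilePath;
    suggested.countSuggestedBindings = 1;
    suggested.suggestedBindings = &binding;
    xrSuggestInteractionProfileBindings(instance, &suggested);

    return action;
}
```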
Note that FxR, and now Wolvic, was not designed as an OpenXR-only app; it's actually multi-backend. This means that the current code that renders the models is not really tied to OpenXR: it's also used by other legacy backends like WaveVR. Hopefully we'll deprecate them sooner rather than later, but in the meantime we should ensure that the mechanism implemented for OpenXR is generic enough.
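As a rough illustration of "generic enough", here is a hypothetical sketch of a backend-agnostic abstraction. The names (`ControllerModelProvider`, `GetControllerModelId`) are invented for this example and are not actual Wolvic classes; the idea is simply that each backend reports which model to render through a common interface, so the renderer never queries device names directly:

```cpp
#include <string>

// Hypothetical interface: each backend (OpenXR, WaveVR, ...) maps its
// own notion of "current device" to a model identifier the renderer
// understands, keeping the rendering code decoupled from any one API.
struct ControllerModelProvider {
    virtual ~ControllerModelProvider() = default;
    // Returns an asset key for the given hand, with a generic fallback.
    virtual std::string GetControllerModelId(int handIndex) const = 0;
};

struct OpenXRModelProvider : ControllerModelProvider {
    std::string GetControllerModelId(int) const override {
        // An OpenXR backend would derive this from the active
        // interaction profile (xrGetCurrentInteractionProfile) rather
        // than from device detection; fallback shown here.
        return "generic-controller";
    }
};
```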