Closed: bricetebbs closed this issue 4 years ago
I've been thinking about this a bit for PotassiumES (which is still very rough) as I'd like to give app authors unified hit testing and anchor APIs across the sensed environment and their virtual scenes. So far, it's felt OK to do that sort of synthesis in the app layer. Once I have WebXR Viewer exposing anchors I'll have a better feel for the subtleties.
I understand the general idea behind this issue, but I am having a hard time identifying what problems developers will face. App/virtual-level ray casting is usually synchronous and IMO should happen once the async real-world hit test results come back, before deciding how to place objects. Is the objective of this issue to provide an example that shows developers a way to handle the scenarios mentioned?
It may turn out that all we need is a clear explanation or example. I just wanted to raise the point that, as we look at ways to handle the reticle case under async behavior, we need to remember that developers will be doing hit testing of virtual objects that needs to stay in sync with it. If all we need to do is give them guidance to wait for the async hit test to resolve and then look at the current value of the coordinates/frames, that should be fine.
I agree we need an example that does this. I think the right approach is, as judax says, to do your homebrew hit test in the .then() block of the native hit-test promise and then compare the distances. That should provide the right synchronized behavior.
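A minimal sketch of that pattern, assuming the promise-based `requestHitTest(origin, direction, frameOfReference)` shape from the explainer and a three.js scene; `scene`, `placeReticle`, and `hideReticle` are hypothetical app-level names:

```js
import * as THREE from 'three';

// `session` is the XRSession, `scene` a three.js scene, and `placeReticle` /
// `hideReticle` are hypothetical app helpers. `origin` and `direction` are
// the ray for this frame (direction assumed normalized).
function updateReticle(session, frameOfReference, scene, origin, direction,
                       placeReticle, hideReticle) {
  session.requestHitTest(origin, direction, frameOfReference).then((hits) => {
    if (hits.length === 0) {
      hideReticle();
      return;
    }

    // Do the app-level ("homebrew") raycast here, inside the .then() block,
    // so it runs against the same scene state as the real-world results.
    const rayOrigin = new THREE.Vector3().fromArray(origin);
    const rayDirection = new THREE.Vector3().fromArray(direction);
    const raycaster = new THREE.Raycaster(rayOrigin, rayDirection);
    const virtualHits = raycaster.intersectObjects(scene.children, true);

    // Distance from the ray origin to the closest real-world hit, extracted
    // from the hit's 4x4 pose matrix.
    const hitPosition = new THREE.Vector3().setFromMatrixPosition(
        new THREE.Matrix4().fromArray(hits[0].hitMatrix));
    const realDistance = rayOrigin.distanceTo(hitPosition);

    // Compare the distances: only draw the reticle when no virtual object
    // sits closer along the same ray.
    if (virtualHits.length === 0 || virtualHits[0].distance > realDistance) {
      placeReticle(hits[0].hitMatrix);
    } else {
      hideReticle();
    }
  });
}
```

Because the virtual raycast runs inside the .then() block, both tests are compared against the same scene state, which is the synchronization this thread is after.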
Closing - the API already hands out sufficient information for apps to perform hit tests against virtual objects during the rAF callback.
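For reference, a sketch of that rAF-callback flow, assuming the XRHitTestSource API (`session.requestHitTestSource` plus `frame.getHitTestResults`); the scene, camera, and reticle helpers are again hypothetical:

```js
import * as THREE from 'three';

// `scene` and `camera` are the app's three.js objects (camera synced to the
// XR viewer pose); `placeReticle` / `hideReticle` are hypothetical helpers.
async function startReticleLoop(session, scene, camera, placeReticle, hideReticle) {
  const referenceSpace = await session.requestReferenceSpace('local');
  const viewerSpace = await session.requestReferenceSpace('viewer');
  // A hit test source that casts forward from the viewer every frame.
  const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  function onXRFrame(time, frame) {
    session.requestAnimationFrame(onXRFrame);

    // Real-world results for this frame, handed out synchronously.
    const results = frame.getHitTestResults(hitTestSource);
    if (results.length === 0) {
      hideReticle();
      return;
    }
    const pose = results[0].getPose(referenceSpace);
    const hitPosition = new THREE.Vector3(
        pose.transform.position.x,
        pose.transform.position.y,
        pose.transform.position.z);

    // Virtual raycast from the center of the view, run in the same callback
    // so both tests see the same frame's state.
    const raycaster = new THREE.Raycaster();
    raycaster.setFromCamera(new THREE.Vector2(0, 0), camera);
    const virtualHits = raycaster.intersectObjects(scene.children, true);
    const realDistance = raycaster.ray.origin.distanceTo(hitPosition);

    if (virtualHits.length === 0 || virtualHits[0].distance > realDistance) {
      placeReticle(pose.transform.matrix);
    } else {
      hideReticle();
    }
  }
  session.requestAnimationFrame(onXRFrame);
}
```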
Issue #3 is related but not exactly the same. There we (correctly, IMHO) decided that hit-test would not try to interact directly with virtual objects placed in the world, and that hitTest would only return results of hits on real-world objects detected by the system.
Applications will want to do hit tests against their virtual objects that are consistent with the hit testing the API is doing for real-world objects. For example, an application may not want to display a reticle when the screen location the reticle is tracking is covered by a previously placed virtual object.
Given some of the complexity around async behavior indicated in #31, we should make sure there is enough information and explanation for applications to do hit tests on their virtual objects that work the same way as those on real-world objects.