immersive-web / proposals

Initial proposals for future Immersive Web work (see README)

Some sort of local shared space #82

Open AdaRoseCannon opened 1 year ago

AdaRoseCannon commented 1 year ago

Shared Anchors are not happening any time soon; what other method could we use?

/facetoface to start some effort in this direction

AdaRoseCannon commented 1 year ago

An image marker printed on the floor would be the simplest approach but perhaps we can do something smarter?

HyroVitalyProtago commented 1 year ago

For phones, I've tried something like this: show an image on one phone, capture the marker (image) position with the other one, and sync the spaces over WebRTC. I can share my WIP on Glitch if anyone is interested. But that's not compatible with VR/AR headsets... so a marker inside the space is probably the best solution for now!

AdaRoseCannon commented 1 year ago

After discussion, we think it would be really useful to build an example of how to do this, to make it easier for non-experts to build. Maybe it could be an API if usage takes off.

Use 3 tracked anchors from a shared image to maintain a shared reference space.

codynhat commented 1 year ago

Hello! Following along here. I am currently experimenting with building something that sounds very much like what is described here. We are trying to make it easier for people to build AR experiences that use an image as an anchor to place 3D objects around. I have it working using WebXR with the image-tracking feature flag and experimental flag enabled on Chrome for Android.

We are also trying to build a higher-level API to define content and anchors. We call the combination/package of all of these an augmented world. The API would essentially allow each component of the position and rotation to be anchored relative to a plane or image. This could be an image on the floor, with an object placed around it on the floor, or an image hanging on a wall, with an object placed on the floor in front of it. The latter also uses the plane-detection feature.
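To make the "augmented world" idea concrete, here is a purely illustrative sketch of what such a declarative package might look like. None of these field names come from the actual project; they are assumptions based on the description above (anchoring position/rotation components to a plane or image):

```json
{
  "name": "example-augmented-world",
  "anchors": [
    { "id": "floor-image", "type": "image", "image": "marker.png" },
    { "id": "wall-plane", "type": "plane", "orientation": "vertical" }
  ],
  "content": [
    {
      "model": "statue.glb",
      "position": { "relativeTo": "floor-image", "offset": [0.5, 0, 0.5] },
      "rotation": { "relativeTo": "wall-plane" }
    }
  ]
}
```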

The experiment is not in a very shareable form at the moment. But I could clean it up and share it here if anyone is interested? Could someone explain a little more about what the goal is here? We would love to contribute an example if that would be helpful. We would also appreciate any feedback!

AdaRoseCannon commented 1 year ago

It would be very good to see your demo.

I think the main demonstration we would love to show is a mobile device and a headset device working in a shared space, and unfortunately image-tracking doesn't work well on headsets. In addition, on mobile it carries a heavy performance overhead.

Hit Test and Anchors are lowest-common-denominator features shared by both, and would be a robust and performant starting point. Keeping the synthesized space updated using the continuously updated anchor positions would be very powerful, although mathematically a little more difficult.

Pretend there is a piece of paper with "A", "B" and "C" printed on it, like below. I think I would want the space to sit on B, pointing out of the page, aligned with X along AB and Z along BC.
(I may have the right-hand rule mixed up here.)

A B
  C
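Assuming X along A→B, Z along B→C, origin at B, and Y as the out-of-page plane normal (the right-hand-rule caveat above applies; flip signs for your convention), the shared frame can be built from the three selected points with basic vector math. A sketch:

```javascript
// Build a shared reference frame from three user-selected points A, B, C
// (each a [x, y, z] array, e.g. from WebXR hit-test results).
const sub = (p, q) => [p[0] - q[0], p[1] - q[1], p[2] - q[2]];
const cross = (u, v) => [
  u[1] * v[2] - u[2] * v[1],
  u[2] * v[0] - u[0] * v[2],
  u[0] * v[1] - u[1] * v[0],
];
const norm = (v) => {
  const l = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / l, v[1] / l, v[2] / l];
};

function sharedFrame(A, B, C) {
  const x = norm(sub(B, A));   // X axis along AB
  let z = norm(sub(C, B));     // Z axis along BC (should be ⊥ to X)
  const y = norm(cross(z, x)); // plane normal ("out of the page")
  z = cross(x, y);             // re-orthogonalize against selection error
  // Column-major 4x4, same layout as an XRRigidTransform matrix
  return [
    x[0], x[1], x[2], 0,
    y[0], y[1], y[2], 0,
    z[0], z[1], z[2], 0,
    B[0], B[1], B[2], 1,
  ];
}
```

The re-orthogonalization step matters in practice: the user's three taps will never be perfectly perpendicular, so Z is recomputed from X and Y rather than trusted directly.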
codynhat commented 1 year ago

I will make sure to share a demo whenever it's ready.

I may be misunderstanding the goal here. How would a shared space be bootstrapped using only hit testing and anchors? Would each user select the same point in physical space and have an anchor placed there? Also, is the goal to have some dynamic state that is synchronized (like moving objects) or would static objects work?

AdaRoseCannon commented 1 year ago

Yes. Using only hit testing and anchors, the users would each select three points which lie in the same plane, on lines AB and BC which are orthogonal to each other, in the same order (A → B → C) in the real world. This is enough information to generate an offset reference space which is the same for both users.
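A sketch of the remaining math, under the assumption that each user has built the shared frame's pose in their own local space (e.g. from the three anchored points): applying the pose's inverse maps local coordinates into the shared frame, and in WebXR that inverse could be handed to `XRReferenceSpace.getOffsetReferenceSpace()` as an `XRRigidTransform`. Inverting a rigid transform needs no general matrix inverse:

```javascript
// Invert a rigid (rotation + translation) column-major 4x4 matrix:
// R → Rᵀ and t → −Rᵀt. Suitable for poses like the shared A/B/C frame.
function invertRigid(m) {
  const t = [m[12], m[13], m[14]];
  return [
    // transposed 3x3 rotation block
    m[0], m[4], m[8],  0,
    m[1], m[5], m[9],  0,
    m[2], m[6], m[10], 0,
    // −Rᵀt
    -(m[0] * t[0] + m[1] * t[1] + m[2] * t[2]),
    -(m[4] * t[0] + m[5] * t[1] + m[6] * t[2]),
    -(m[8] * t[0] + m[9] * t[1] + m[10] * t[2]),
    1,
  ];
}
```

Because both users derive the same frame from the same physical points, applying each user's own inverse yields coordinates that agree between devices.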

codynhat commented 1 year ago

Ok I think I understand. Thanks! I will try to experiment with this in the next few weeks.

AdaRoseCannon commented 1 year ago

Thank you so much!! It is really appreciated!

kfarr commented 6 months ago

I have done some work in this area and can create a demo video if helpful.

The method is roughly as follows:

While it's not elegant, as it requires some manual steps from the user, it does work and is repeatable across a variety of device vendors (Android / iOS / Quest / AVP).

AdaRoseCannon commented 6 months ago

My understanding, based on the Image Tracking discussions, is that some devices don't track QR code markers well (Android, IIRC, but this needs verification). Otherwise, yes, this seems like a good idea.

klausw commented 6 months ago

The image tracking functionality as currently implemented in Chrome (https://github.com/immersive-web/marker-tracking/blob/main/explainer.md) requires naturalistic images; it can't track QR codes or similar synthetic markers.

I made an experiment tracking ArUco markers using Raw Camera Access on top of Chrome's WebXR AR mode, and that did work: https://storage.googleapis.com/chromium-webxr-test/r1255390/proposals/camera-access-marker.html

klausw commented 6 months ago

Correction, it's possible to make hybrid QR codes that incorporate enough texture to also work as naturalistic images for use with the ARCore-based image tracking. See for example https://antfu.me/posts/ai-qrcode as an extreme case, but this may degrade how well they work for traditional QR code detection.

kfarr commented 6 months ago

Hi all, here is v1 progress on a proof of concept following the steps outlined above:

1) Create a QR code that embeds longitude / latitude / elevation as a querystring, along with the target hostname for the application: https://glitch.com/edit/#!/bollard-buddy-qr-maker
2) Use a mobile device to scan the QR code and localize the WebXR scene based on it
3) Fetch the appropriate content given the long/lat/el. The user may now use the application in mobile-device WebXR AR mode: https://glitch.com/edit/#!/bollard-buddy-ar
4) Send the creation to desktop / VR: https://glitch.com/edit/#!/bollard-buddy-mapper

Videos:

Part 1 - QR Code Marker Generator: https://github.com/immersive-web/proposals/assets/470477/7aba5c66-64f3-4adf-a34d-6e2fecc39c33

Part 2 - Mobile App (WebXR AR mode) using QR for localization: https://github.com/immersive-web/proposals/assets/470477/f202d49c-e84d-464d-a271-242621293ee8

Part 3 - Desktop mode mapper after AR app: https://github.com/immersive-web/proposals/assets/470477/9772a866-78e0-49bc-8534-e88c93648971

Part 4 - Send to VR (WebXR VR mode): https://github.com/immersive-web/proposals/assets/470477/f5c8ad33-515a-4717-859a-afe760a5e1e5
