mrdoob / three.js

JavaScript 3D Library.
https://threejs.org/
MIT License

Adding 'image-tracking' feature support for the ARButton module #22682

Closed · ShirinStar closed this issue 2 years ago

ShirinStar commented 3 years ago

Hi, I was wondering if there is a way/intention to add support for the 'image-tracking' WebXR API feature? 🙌🏾

Mugen87 commented 3 years ago

Are you talking about https://github.com/immersive-web/marker-tracking/blob/main/explainer.md?

ShirinStar commented 3 years ago

Yes! Exactly! I managed to work with hit-test, but for some reason image-tracking isn't working for me; I keep getting "The required feature 'image-tracking' is not supported". I'm relatively new to three.js, so it could definitely be because I don't fully understand how to integrate it correctly, but since hit-test worked smoothly for me, I thought it could be a feature problem. Have you worked with WebXR image-tracking in three.js?

Mugen87 commented 3 years ago

Not yet. But as far as I can see it should be possible to integrate the feature without changing the library. According to the sample code of the draft, the AR button should be created like so:

const button = ARButton.createButton( renderer, {
    requiredFeatures: [ 'image-tracking' ],
    trackedImages: [
        {
            image: imgBitmap,
            widthInMeters: 0.2
        }
    ]
} );
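
For completeness, here is a minimal sketch of how the imgBitmap used above could be prepared; 'marker.png' is just a placeholder file name, and the draft expects an ImageBitmap plus the physical width of the printed image in meters (this assumes an async context, e.g. a module with top-level await):

// Load a marker image and convert it to an ImageBitmap for the session init.
// 'marker.png' is a placeholder; use any same-origin (or CORS-enabled) image.
const response = await fetch( 'marker.png' );
const blob = await response.blob();
const imgBitmap = await createImageBitmap( blob );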

The code from the onSessionStarted function can be placed in an event listener:

// The listener is async so that getTrackedImageScores() can be awaited.
renderer.xr.addEventListener( 'sessionstart', async () => {

    const session = renderer.xr.getSession();

    const scores = await session.getTrackedImageScores();

    let trackableImages = 0;

    for ( let index = 0; index < scores.length; ++ index ) {

        if ( scores[ index ] == 'untrackable' ) {

            MarkImageUntrackable( index );

        } else {

            ++ trackableImages;

        }

    }

    if ( trackableImages == 0 ) {

        WarnUser( "No trackable images" );

    }

} );
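
MarkImageUntrackable() and WarnUser() are placeholders taken from the explainer; a purely illustrative sketch could simply report to the console:

// Illustrative placeholders only; replace with real UI feedback as needed.
function MarkImageUntrackable( index ) {

    console.warn( 'Image ' + index + ' cannot be tracked on this device.' );

}

function WarnUser( message ) {

    console.warn( message );

}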

And the code from the animation loop can be implemented like so:

function render( timestamp, frame ) {

    if ( frame ) {

        // Obtain the reference space from the renderer so poses can be resolved.
        const referenceSpace = renderer.xr.getReferenceSpace();

        const results = frame.getImageTrackingResults();

        for ( const result of results ) {

            // The result's index is the image's position in the trackedImages array specified at session creation
            const imageIndex = result.index;

            // Get the pose of the image relative to a reference space.
            const pose = frame.getPose( result.imageSpace, referenceSpace );

            const state = result.trackingState;

            if ( state == "tracked" ) {

                HighlightImage( imageIndex, pose );

            } else if ( state == "emulated" ) {

                FadeImage( imageIndex, pose );

            }

        }

    }

    renderer.render( scene, camera );

}

Assuming render() is defined as the animation loop like so: renderer.setAnimationLoop( render );.
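
HighlightImage() and FadeImage() are also placeholders from the explainer. As a hedged sketch, HighlightImage() could position a pre-created mesh at the detected pose (trackedMeshes is a hypothetical array of meshes, one per entry in trackedImages):

// Sketch only: applies the image pose to a hypothetical per-image mesh.
function HighlightImage( imageIndex, pose ) {

    const mesh = trackedMeshes[ imageIndex ];

    mesh.visible = true;
    mesh.matrix.fromArray( pose.transform.matrix );
    mesh.matrix.decompose( mesh.position, mesh.quaternion, mesh.scale );

}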

@ShirinStar Do you mind giving this approach a try and report your findings? 😇

ShirinStar commented 2 years ago

Hi @Mugen87! Thanks so much for this! Huge, huge help!! It took me a while to understand that it doesn't work on iOS, so the 'missing feature' error I kept receiving was because I was developing on a Mac. I then added https://github.com/liriliri/eruda as a mobile debugger for Android, and it works!
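
(For reference, eruda can be added with a couple of lines; this is a sketch based on the eruda README, so check there for the current recommended setup.)

// Loads eruda from a CDN and opens an on-device console for debugging.
const script = document.createElement( 'script' );
script.src = 'https://cdn.jsdelivr.net/npm/eruda';
script.onload = () => eruda.init();
document.body.appendChild( script );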

Here is a full code sample: https://github.com/ShirinStar/webAR_experiments/tree/main/16-webxr-image_tracking

(and so sorry for the late response!! work got in the way:) )

fabian-muff commented 2 years ago

Hi @Mugen87 and @ShirinStar, this works quite well on Android smartphones. However, it does not work yet on HoloLens, since it is an experimental feature and Edge does not yet support the image-tracking or camera-access features. Do either of you know whether this is going to be an official part of the WebXR API? Or will there be a specific three.js functionality/module to support image-tracking?

Mugen87 commented 2 years ago

Do either of you know whether this is going to be an official part of the WebXR API?

Not sure, but it seems the spec is still in draft mode and thus marked as unstable. You probably want to ask your question at the respective GitHub repo.

wyliefoxxx commented 10 months ago

It does seem to work great on Android, but sadly, for now at least, only when the experimental flag is enabled:

chrome://flags/#webxr-incubations

I figure this is a good thing to mention in this thread. Please, anyone, correct me if I'm wrong.

I feel like we as a community might need to figure out the best way to nudge it off the experimental flag list; it's a painfully powerful feature to be stuck in a so-close-yet-so-far state. https://chromestatus.com/feature/6548327782940672
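
As a hedged sketch, one way to check at runtime whether the image-tracking methods are exposed (i.e. whether the flag is enabled) is to look for them on XRFrame; this may change as the spec evolves:

// Rough runtime check: the image-tracking API only shows up when the
// incubation flag (or an origin trial) is enabled in the browser.
const hasImageTracking =
    'XRFrame' in window && 'getImageTrackingResults' in XRFrame.prototype;

if ( ! hasImageTracking ) {

    console.warn( 'image-tracking is unavailable; try enabling chrome://flags/#webxr-incubations' );

}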

orioncho45 commented 10 months ago

Hi @ShirinStar, can you please tell us which device this image-tracking project works on? I have had trouble finding a device where it works.

fabian-muff commented 10 months ago

This seems to be a privacy problem on HMDs. Meta Quest does not, and Apple Vision Pro will not, allow access to the camera stream for custom computer vision algorithms. In my opinion this is a crucial feature for XR applications. So far, however, no one has dared to take the first step.