vladmandic / face-api

FaceAPI: AI-powered Face Detection & Rotation Tracking, Face Description & Recognition, Age & Gender & Emotion Prediction for Browser and NodeJS using TensorFlow/JS
https://vladmandic.github.io/face-api/demo/webcam.html
MIT License
824 stars · 149 forks

Context lost and Failed to link vertex and fragment shaders #129

Closed IgorSamer closed 1 year ago

IgorSamer commented 1 year ago

Issue Description

Hi @vladmandic! First of all, thank you for maintaining this awesome library!

I'm using Electron in a project for gyms. It includes an option to turn on the webcam so that facial recognition runs and the person is either allowed or denied access to the facility.

When a gym employee turns on the webcam, it opens in a BrowserWindow (camera.html) that stays visible on screen while it is on, and in it I use face-api.js for facial recognition and for registering new faces.

I initially developed the code on my MacBook and everything worked as expected. Then I tested it on a Windows 11 laptop and everything went fine too.

However, when testing on my Windows 7 PC, I noticed that sometimes the code works and sometimes it doesn't (seemingly at random), and when checking DevTools I could see the following error:

[screenshot of the error]

I also noticed (as in the screenshot) that the following message is shown in the console:

Could not get context for WebGL version 2

The above message is always shown (only on my Windows 7 PC), regardless of whether the code works or fails.

Previously I was using the face-api version from @justadudewhohacks, which is when I first noticed this error. I then found your updated version and tried it, but the application behaves in exactly the same way.

Searching for the error "Failed to link vertex and fragment shaders", I saw that it could be related to hardware (the GPU) or to memory leaks, which left me in doubt: if it were a hardware limitation, the code wouldn't work on any attempt, right? And as I said before, sometimes it works and sometimes it doesn't, which leads me to believe it could be a memory leak.

Even so, if it is a hardware limitation, what is the minimum recommended configuration to run face-api smoothly?

Steps to Reproduce

camera.html:

<!DOCTYPE html>
<html lang="en-US">

<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>face-api</title>
    <script type="text/javascript" src="../scripts/face-api.js"></script>
    <style>
        body {
            position: relative;
            margin: 0;
            padding: 0;
            display: flex;
            align-items: center;
            justify-content: center;
            overflow: hidden;
        }

        video,
        canvas {
            width: 100%;
            height: 100%;
        }

        video {
            object-fit: cover;
            border-radius: 10px;
            border: 2px solid #009688;
            box-shadow: 0 0 10px 0 #009688;
        }

        canvas {
            position: absolute;
        }
    </style>
</head>

<body>
    <video id="preview" autoplay muted></video>

    <script type="text/javascript">
        // importing ipcRenderer for communication between the renderer (this file) and the main process (Node.js)
        const { ipcRenderer } = require('electron');

        document.addEventListener('DOMContentLoaded', async () => {
            faceapi.env.monkeyPatch({
                Canvas: HTMLCanvasElement,
                Image: HTMLImageElement,
                ImageData: ImageData,
                Video: HTMLVideoElement,
                createCanvasElement: () => document.createElement('canvas'),
                createImageElement: () => document.createElement('img')
            });

            const video = document.getElementById('preview');
            const weightsDir = '../weights';
            let canvas;
            let context;
            let canvasSize;
            let detectorOptions;
            let labels;
            let labeledFaceDescriptors;
            let detections = [];

            const handleLabels = () => labels = labeledFaceDescriptors.map((faceDescriptor) => faceapi.LabeledFaceDescriptors.fromJSON(faceDescriptor));

            video.addEventListener('play', () => {
                canvas = faceapi.createCanvasFromMedia(video);
                context = canvas.getContext('2d');
                canvasSize = { width: video.videoWidth, height: video.videoHeight };
                // I was using TinyFaceDetector before, but I changed to SSD because I saw you comment in another issue that it is better
                // detectorOptions = new faceapi.TinyFaceDetectorOptions({ inputSize: 160, scoreThreshold: 0.6 });
                detectorOptions = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.2, maxResults: 5 });

                handleLabels();

                faceapi.matchDimensions(canvas, canvasSize);
                document.body.appendChild(canvas);

                detectFaces();
            });

            const detectFaces = async () => {
                detections = await faceapi.detectAllFaces(video, detectorOptions).withFaceLandmarks().withFaceDescriptors();
                // I removed resizeResults because I didn't see any difference when using it
                // const resizedDetections = faceapi.resizeResults(detections, canvasSize);
                const faceMatcher = new faceapi.FaceMatcher(labels, 0.5);
                const detectionResults = detections.map((detection) => faceMatcher.findBestMatch(detection.descriptor));

                context.clearRect(0, 0, canvasSize.width, canvasSize.height);

                detectionResults.forEach((result, index) => {
                    const { box } = detections[index].detection;
                    let { label } = result;

                    // JSON.parse below because I pass an object { label: 'firstname', id: 1 } as the label in labeledFaceDescriptors
                    label = label === 'unknown' ? { label: 'Not registered' } : JSON.parse(label);

                    new faceapi.draw.DrawBox(box, {
                        label: label.label,
                        boxColor: label.label === 'Not registered' ? '#eb445a' : '#009688',
                        lineWidth: 5,
                        drawLabelOptions: {
                            fontSize: 30
                        }
                    }).draw(canvas);
                });

                requestAnimationFrame(() => detectFaces());
            };

            const initCamera = async () => {
                try {
                    const stream = await navigator.mediaDevices.getUserMedia({ audio: false, video: true });

                    video.srcObject = stream;
                } catch (error) {
                    alert('error when turning on the camera');
                }
            };

            const initModels = async () => {
                try {
                    await Promise.all([
                        faceapi.nets.ssdMobilenetv1.loadFromUri(weightsDir),
                        // faceapi.nets.tinyFaceDetector.loadFromUri(weightsDir),
                        faceapi.nets.faceLandmark68Net.loadFromUri(weightsDir),
                        faceapi.nets.faceRecognitionNet.loadFromUri(weightsDir)
                    ]);

                    // Requesting the already registered faces that I keep saved in a faces.json file
                    sendToMain('loadFaces');
                } catch (error) {
                    alert('error while loading weights');
                }
            };

            // Detecting the request coming from another page to register a new face
            ipcRenderer.on('registerFace', (event, data) => {
                if (!detections.length) {
                    alert('no face was detected');

                    return;
                }

                if (detections.length > 1) {
                    alert('more than one face was detected');

                    return;
                }

                const labeledFaceDescriptor = new faceapi.LabeledFaceDescriptors(JSON.stringify({ label: data.firstname, id: data.id }), [detections[0].descriptor]);

                // Sending the face to main.js to save it in the faces.json file via fs.writeFile()
                ipcRenderer.send('addFace', labeledFaceDescriptor.toJSON());
            });

            const sendToMain = (event, data = null) => {
                ipcRenderer.send(event, data);
            }

            initModels();

            // Receiving the already registered faces that I keep saved in a faces.json file
            ipcRenderer.on('facesLoaded', (event, faces) => {
                labeledFaceDescriptors = faces;

                initCamera();
            });

            // Adding the newly registered face to the array (this event is only called if fs.writeFile() is successful)
            ipcRenderer.on('faceAdded', (event, face) => {
                labeledFaceDescriptors.push(face);

                handleLabels();
            });
        });
    </script>
</body>

</html>
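As an aside, the label round-trip used in the page above (JSON.stringify of { label, id } when registering a face, and JSON.parse when drawing results) can be isolated into two small helpers. This is just a sketch - the helper names are mine, and 'unknown' is the label FaceMatcher returns when no match clears the distance threshold:

```javascript
// Sketch of the label round-trip from camera.html above.
// Helper names are hypothetical; 'unknown' is FaceMatcher's no-match label.

// Used when registering a face: pack name and id into the descriptor label.
function encodeLabel(firstname, id) {
  return JSON.stringify({ label: firstname, id });
}

// Used when drawing results: unpack the label, mapping 'unknown' to a display value.
function decodeLabel(label) {
  return label === 'unknown' ? { label: 'Not registered' } : JSON.parse(label);
}
```

Keeping these in one place avoids the encode/decode logic drifting apart between the registration and drawing paths.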

Expected Behavior

Environment

OS: Windows 7 Home Basic (64-bit)
CPU: Intel(R) Core(TM) i3-2100 @ 3.10GHz
RAM: 8.00 GB
GPU: NVIDIA GeForce 9800 GT
Electron version: 17.4.11
navigator.appVersion (inside an Electron BrowserWindow): '5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Urkout/1.0.0 Chrome/98.0.4758.141 Electron/17.4.11 Safari/537.36'
chrome://gpu (inside an Electron BrowserWindow): gist

Additional

IgorSamer commented 1 year ago

P.S.: I just discovered your project @vladmandic/human and I'm already looking to migrate to it, as I saw you comment that the face-api has an old architecture. From the demo I could already see your excellent work and I'm already looking forward to testing it in my project! Congratulations!

vladmandic commented 1 year ago

The first error is:

Could not get context for WebGL version 2

Since you're running in an Electron environment, and Electron internally packages the Chrome browser, that basically means Chrome is running in a somewhat degraded state. Now, if it couldn't get a WebGL context at all, it would say that. But the fact is that it fails on WebGL 2 and continues working with WebGL 1, and for Chrome that is already really bad.
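The degradation described here can be sketched as a small probing helper. This is a hypothetical illustration - getContext is injected so it can run outside a browser; in a page you would pass kind => document.createElement('canvas').getContext(kind):

```javascript
// Sketch: probe for the best available WebGL version, mirroring the
// WebGL 2 -> WebGL 1 fallback described above. getContext(kind) should
// return a context object or null, like canvas.getContext() does.
function getBestWebGLVersion(getContext) {
  if (getContext('webgl2')) return 2; // full WebGL 2 - what TFJS prefers
  if (getContext('webgl')) return 1;  // degraded: "Could not get context for WebGL version 2"
  return 0;                           // no WebGL at all
}
```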

Now onto the second error:

Failed to link vertex and fragment shaders

That can happen for a number of reasons, but the most common (and I mean 99% of cases) is that browser-allocated GPU memory gets exhausted. That can be due to a memory leak, or simply because the browser's access to GPU memory is very limited - which would make sense if it's running in degraded WebGL 1 mode to start with.

Btw, the same issue can happen if you're running on a mobile device with very limited memory (e.g. a phone with 1-2 GB total) - those really low-end devices are not suitable for ML tasks to start with. But that's not really the case here.

Anyhow, first, I would test the browser within Electron by navigating to https://webglreport.com/?v=2.

And after that, it's really a matter of troubleshooting the Electron configuration to get WebGL 2 working - once it's working, I'm pretty sure FaceAPI will work as well.

And to test from JS without any libraries, it's really simple:

   const canvas = document.createElement('canvas');
   const ctx = canvas.getContext('webgl2'); // null if WebGL 2 is unavailable
   console.log(ctx);

Alternatively, instead of WebGL you can use the WASM backend for ML computations - it's more portable, but definitely slower.
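A backend choice based on that context test might look like the following sketch ('webgl' and 'wasm' are the backend names TFJS registers; the helper itself and the injected getContext are my own illustration):

```javascript
// Sketch: pick a TFJS backend name based on WebGL 2 availability.
// In a browser you would pass kind => document.createElement('canvas').getContext(kind).
function pickBackend(getContext) {
  return getContext('webgl2') ? 'webgl' : 'wasm';
}
```

Applied with TFJS (assuming @tensorflow/tfjs and @tensorflow/tfjs-backend-wasm are loaded), this would then be roughly `await tf.setBackend(pickBackend(...)); await tf.ready();`.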

Re: I just discovered your project @vladmandic/human and I'm already looking to migrate to it

Thanks!

IgorSamer commented 1 year ago

Navigating to https://webglreport.com/?v=2 I got the following response:

This browser supports WebGL 2, but it is disabled or unavailable.

After numerous unsuccessful attempts, I found that what was blocking WebGL 2 was the "Use hardware acceleration when available" option (chrome://settings -> System tab), so I disabled that option in Electron using app.disableHardwareAcceleration() and it worked. But now I get this message in the console:

[.WebGL-58ABD600]GL Driver Message (OpenGL, Performance, GL_CLOSE_PATH_NV, High): GPU stall due to ReadPixels

And the face detection drawings that face-api makes on the canvas became incredibly slow :/

I believe it really has to do with my GPU (NVIDIA GeForce 9800 GT) being an older model.

As I said, I really intend to migrate from face-api to @vladmandic/human, but since this WebGL 2 error occurs with TensorFlow, I think the same behavior will happen with Human.

In this case, can you tell me the recommended minimum (GPU and/or other hardware) that I should pass on as a requirement to my customers who want real-time facial recognition?

And thanks for the quick response!

vladmandic commented 1 year ago

This browser supports WebGL 2, but it is disabled or unavailable.

Are you getting this only when running inside Electron or using normal Chrome as well?

I believe it really has to do with my GPU (NVIDIA GeForce 9800 GT) being an older model.

That is quite an old card. And it's not about minimum HW requirements in the case of desktops; it's more about compatibility. I can only guess, but one possibility is that old nVidia drivers run in DX10 mode, so Chrome refuses to enable WebGL 2. I'd definitely try upgrading the nVidia drivers to the latest version that is still compatible with that card.

I found that what was blocking WebGL 2 was the "Use hardware acceleration when available" option (chrome://settings -> System tab), so I disabled that option in Electron using app.disableHardwareAcceleration()

If you disable HW acceleration, then the entire WebGL stack is 100% pointless - it cannot work.
If you want to run without a GPU, you should use the WASM backend instead.

As I said, I really intend to migrate from face-api to @vladmandic/human, but since this WebGL 2 error occurs with TensorFlow, I think the same behavior will happen with Human.

Yes, it will be the same.

So, to summarize, there are two options:

a) Try to upgrade your drivers so that Chrome detects sufficient system capabilities and enables WebGL 2. This is recommended, as a GPU will always be more efficient than a CPU.

b) Use the WASM backend instead of WebGL - that way TFJS will use the CPU instead of the GPU. The same applies to FaceAPI and Human; it's just easier to do in Human, where it's a single config flag. Anyhow, check demo/index.js for an example, as it sets either the WebGL or WASM backend depending on a param.
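For reference, the Human config flag mentioned in (b) is roughly this fragment (a hedged sketch based on Human's config options, not verified here):

```javascript
// Config fragment sketch (assumption: backend is selectable via Human's config).
import Human from '@vladmandic/human';
const human = new Human({ backend: 'wasm' }); // or 'webgl' when WebGL 2 is available
```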

IgorSamer commented 1 year ago

Are you getting this only when running inside Electron or using normal Chrome as well?

In both cases. Unfortunately, I updated to the latest available drivers and the problem remains the same.

Running OpenGL Extensions Viewer, the report told me that the OpenGL version is 3.3, and at https://get.webgl.org/get-a-webgl-implementation/ the information is that WebGL 2 requires OpenGL ES 3.0 support, so I really don't understand why it doesn't work.

I just tested the Human demo with the WASM backend and it worked as expected, just slower. So I believe I will do a compatibility check for WebGL 2 and, if it is not available, fall back to the WASM backend.

Anyway, I still have one big question: with WASM, is there a chance of significant slowdown on low-end/mid-range computers because it uses the CPU, or would the difference be mostly imperceptible?

vladmandic commented 1 year ago

Running OpenGL Extensions Viewer, the report told me that the OpenGL version is 3.3, and at https://get.webgl.org/get-a-webgl-implementation/ the information is that WebGL 2 requires OpenGL ES 3.0 support, so I really don't understand why it doesn't work.

WebGL 2 as a standard requires at minimum OpenGL ES 3, but there are a bunch of optional GL extensions that Chrome likely wants that are not present, so it auto-disables WebGL 2.

Purely out of curiosity, what would Firefox say (if you want to test)?

Anyway, I still have one big question: with WASM, is there a chance of significant slowdown on low-end/mid-range computers because it uses the CPU, or would the difference be mostly imperceptible?

Quite the opposite - I'd expect that low-end systems don't have a dedicated GPU anyhow, so there is no big penalty in using WASM. But if you have a good GPU, then the difference swings heavily in favor of WebGL, as no CPU can compete with a modern GPU.

Also, take a look at some of my notes at https://github.com/vladmandic/human/wiki/Backends

IgorSamer commented 1 year ago

Purely out of curiosity, what would Firefox say (if you want to test)?

Just the same result (This browser supports WebGL 2, but it is disabled or unavailable.)

Quite the opposite - I'd expect that low-end systems don't have a dedicated GPU anyhow, so there is no big penalty in using WASM. But if you have a good GPU, then the difference swings heavily in favor of WebGL, as no CPU can compete with a modern GPU.

Nice! I will migrate to Human and run my tests. If necessary, I'll see you over in the correct repository :)

Thanks again for your attention!

IgorSamer commented 1 year ago

I just updated Electron to the latest version (21.1.1) and now the "Failed to link vertex and fragment shaders" error doesn't appear anymore. Only "Could not get context for WebGL version 2" (console) and "This browser supports WebGL 2, but it is disabled or unavailable." (WebGL Report) remain, and face recognition works (but slower, as you explained before). Anyway, in these cases I will always use WASM.

vladmandic commented 1 year ago

Sounds good.
I'll close the issue since it's not a code problem, but feel free to continue on this thread if you have any questions.

vladmandic commented 1 year ago

A bit more research: it seems the 9800 GT supports DX10, but Chrome's rendering engine (ANGLE) uses DX11 by default, which causes it to disable WebGL 2.
You can try forcing the backend for Chrome's renderer and see how that changes the situation - go to chrome://flags/#use-angle

IgorSamer commented 1 year ago

I changed the ANGLE setting from Default to OpenGL, and when I restarted Chrome I got a black screen. To revert to the default configuration I had to navigate the settings tab blindly, using my laptop as a reference. After a few attempts I managed to revert the setting, and when I restarted Chrome it returned to normal.

I also tested with the options D3D11, D3D9 and D3D11on12 and this was the final result:

Default: 'This browser supports WebGL 2, but it is disabled'
OpenGL: black screen
D3D11: same as Default
D3D9: same as Default
D3D11on12: 'This browser supports WebGL 2'

Only D3D11on12 seems to work, but going to https://vladmandic.github.io/human/demo/typescript/index.html and looking at the console, I got:

Could not get context for WebGL version 2
Could not get context for WebGL version 1
Initialization of webgl backend failed

And several other errors, in addition to the page content not rendering.

vladmandic commented 1 year ago

Ouch.

FYI, D3D11on12 exists only because the new Intel Arc GPUs don't have native DX11 support, so it's emulated using DX12 - it should never be used by anyone else. But I was hoping that D3D9 might work for you.