matiasdelellis / facerecognition

Nextcloud app that implements a basic facial recognition system.
GNU Affero General Public License v3.0

export dependencies to separate container #690

Closed KoMa1012 closed 1 year ago

KoMa1012 commented 1 year ago

Every once in a while there is a breaking update to the Nextcloud Docker container that requires new dependency versions, which may not even be available. This could be prevented by a separate container for the face recognition: the app in Nextcloud would just hand over pictures to this container and get back all the information it needs (faces etc.). It would also make it possible to run the face recognition on, e.g., your gaming rig at home, which would speed up recognition dramatically. Requiring the dependencies on the same machine or container as Nextcloud doesn't seem like the optimal choice to me.

Create a Docker container which can be used to run all the "AI jobs", something like this: https://github.com/matiasdelellis/facerecognition-external-model

Create an app which does not require all the external dependencies (pdlib etc.) and can only run with model 5 (aka the external model).

And whoever doesn't like Docker can still install this locally.
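The handoff described above boils down to a small HTTP contract: the Nextcloud app posts an image to the external service and gets back a JSON list of detected faces. As a minimal sketch, here is how a client might parse such a response. The response shape (`faces` with bounding-box keys and a `descriptor`) is an assumption for illustration only; the real facerecognition-external-model API may use different field names.

```python
import json

# Hypothetical response from an external face-recognition service.
# Field names are assumptions, not the documented API of
# facerecognition-external-model.
EXAMPLE_RESPONSE = json.dumps({
    "filename": "photo.jpg",
    "faces": [
        {
            "top": 12, "left": 30, "right": 98, "bottom": 80,
            # Embedding vector used later for clustering (truncated here).
            "descriptor": [0.12, -0.04, 0.33],
        },
    ],
})


def parse_faces(body: str) -> list:
    """Extract the list of detected faces from a detection response."""
    payload = json.loads(body)
    return payload.get("faces", [])


faces = parse_faces(EXAMPLE_RESPONSE)
print(f"{len(faces)} face(s) found")
for face in faces:
    print("box:", face["left"], face["top"], face["right"], face["bottom"])
```

Because the contract is just JSON over HTTP, the heavy dlib/pdlib dependencies live only in the service container, and the Nextcloud side needs nothing beyond an HTTP client.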

muebau commented 1 year ago

Well, this idea has come up before:

https://github.com/matiasdelellis/facerecognition/issues/210
https://github.com/exadel-inc/CompreFace/issues/554
https://github.com/nextcloud/recognize/issues/680

So the main thing is to offload the work and dependencies to some other entity (which could be local or remote).

The app recognize (https://github.com/nextcloud/recognize) does a smart thing: it falls back to a JavaScript (WebAssembly) version if the underlying platform does not support the native execution format.

I think it should be possible to use an interface like this to run the code in JavaScript (WebAssembly) in the browser with GPU (WebGL) acceleration (https://github.com/justadudewhohacks/face-api.js).

This could be a variant where an explicit "web runner" window is opened to support the process with the GPU of some client. The "external model" interface would then be used by multiple helpers (e.g. a Docker container, a WebGL client).

matiasdelellis commented 1 year ago

Hi, I invite you to try the external model, using the latest released version.

Open a new issue so we can see how to continue. 😬

KoMa1012 commented 1 year ago

@matiasdelellis, thank you! This is freaking awesome! Finally I can do the initial recognition on my gaming setup with a proper GPU.

I'm running the first tests with your Docker container, and I'm starting to set up my gaming PC so that it can run the clustering; thanks to your documentation, this shouldn't be too hard.

Do you want positive feedback as well, or only a report if something fails in the process?