jakowenko / double-take

Unified UI and API for processing and training images for facial recognition.
https://hub.docker.com/r/jakowenko/double-take
MIT License

[FEAT] Coral AI as a detector #121

Open tonyhardcode opened 2 years ago

tonyhardcode commented 2 years ago

Would it be possible to get Coral AI as a compatible detector on Double Take? Coral is trainable and this project seems to be able to do it https://github.com/robmarkcole/coral-pi-rest-server

jonwilliams84 commented 2 years ago

+1 for this... however, I assume this would need support from the detector (e.g. DeepStack/CompreFace) as opposed to Double Take, because unless my understanding is incorrect, Double Take is purely an interface to the API of the given detectors.

jakowenko commented 2 years ago

Would it be possible to get Coral AI as a compatible detector on Double Take? Coral is trainable and this project seems to be able to do it https://github.com/robmarkcole/coral-pi-rest-server

Love this idea! I wasn't aware of that REST server project. It uses the Python package pycoral to talk to the Coral. I'd have to think a bit about how to make this work, since there would be no way to integrate it directly into the code base with Double Take being in Node.js. I don't think there are any Node packages for the Google Coral either.

What if I ran the coral-pi-rest-server project in the same container as Double Take? Then Double Take could talk directly to the Pi REST server, which in turn talks to the Coral. Seems like it should be possible.

+1 for this... however, I assume this would need support from the detector (e.g. DeepStack/CompreFace) as opposed to Double Take, because unless my understanding is incorrect, Double Take is purely an interface to the API of the given detectors.

As long as Double Take gets some sort of MQTT or API trigger, I could just process the results through the Coral and display the predictions on the matches page. Maybe add another option to then pass it to facial recognition if the prediction returns a person?
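
For anyone following along, here is a rough sketch of that flow, assuming the coral-pi-rest-server is reachable over HTTP and a DeepStack-style face endpoint handles the recognition step. The URLs, ports, and the process_snapshot helper are placeholders for illustration only; Double Take itself is Node.js, so this is not its actual code.

import requests

# Placeholder endpoints: coral-pi-rest-server for object detection and a
# DeepStack/CompreFace-style service for face recognition (ports are assumptions).
CORAL_URL = "http://localhost:5000/v1/vision/detection"
FACE_URL = "http://localhost:5001/v1/vision/face/recognize"

def process_snapshot(path, min_confidence=0.6):
    """Run a snapshot through the Coral first, then hand it to face recognition
    only if a person was detected (hypothetical helper, not Double Take code)."""
    with open(path, "rb") as f:
        image = f.read()

    # 1. Ask the Coral REST server for object predictions.
    coral = requests.post(CORAL_URL, files={"image": image}).json()
    predictions = coral.get("predictions", [])

    # 2. Only pass the frame on to facial recognition if a person was found.
    has_person = any(
        p.get("label") == "person" and p.get("confidence", 0) >= min_confidence
        for p in predictions
    )
    if not has_person:
        return predictions, None

    faces = requests.post(FACE_URL, files={"image": image}).json()
    return predictions, faces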

jakowenko commented 2 years ago

I spent a few hours trying to get this to run on my Pi and haven't had any luck yet. Have either of you had success running it with a Coral USB Accelerator on a Pi?

tonyhardcode commented 2 years ago

I spent a few hours trying to get this to run on my Pi and haven't had any luck yet. Have either of you had success running it with a Coral USB Accelerator on a Pi?

I do not have a Pi device. I've been using the Coral Accelerator in an M.2 slot on a mITX board. I did come across this blog post about getting a Coral USB Accelerator working on a Pi. Hopefully it helps: https://blogs.sap.com/2020/02/11/containerizing-a-tensorflow-lite-edge-tpu-ml-application-with-hardware-access-on-raspbian/

jakowenko commented 2 years ago

[Screenshot: Screen Shot 2021-10-13 at 11.09.40 PM]

Progress! I was able to at least get the Coral REST API running on its own and then pass an image to it.

I'm thinking that to support this, it would probably make sense for it to be an external detector that you would have to set up just like any of the others. That will also keep all of the dependencies it needs out of the base Docker image, since not everybody will be using it.

Then in the config you could define it like so:

detectors:
  coral:
    url: http://localhost:4000
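
To give a feel for what that detector would return, here's a small sketch of normalizing the coral-pi-rest-server's DeepStack-style detection response (the x_min/x_max/y_min/y_max boxes shown further down in this thread) into width/height boxes. Double Take itself is Node.js, so this Python is illustration only, and the normalize_predictions helper and output field names are hypothetical.

import requests

def normalize_predictions(url, image_path):
    """Post an image to the configured coral detector URL and return
    label/confidence/box dicts (hypothetical output shape)."""
    with open(image_path, "rb") as f:
        response = requests.post(f"{url}/v1/vision/detection", files={"image": f}).json()

    boxes = []
    for p in response.get("predictions", []):
        boxes.append({
            "label": p["label"],
            "confidence": p["confidence"],
            "box": {
                "left": p["x_min"],
                "top": p["y_min"],
                "width": p["x_max"] - p["x_min"],
                "height": p["y_max"] - p["y_min"],
            },
        })
    return boxes

# e.g. normalize_predictions("http://localhost:4000", "face.jpg") with the URL from the config above.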

rsteckler commented 2 years ago

This will be awesome. The box already has a Coral on it for Frigate, so leveraging it for recognition would save the CPU.

ozett commented 2 years ago

Why not use the Coral resources on Frigate? Frigate explicitly uses the Coral, and the more, the better. Why not use a little bit of the already existing Corals from there?

ping-localhost commented 2 years ago

I use a Coral for Frigate as well, so having it usable for Double Take too would be amazing.

jakowenko commented 2 years ago

I plan to prioritize this feature soon and include it in one of the next few releases. I'm also working on #145, which will lay some of the groundwork for this: basically the ability to pull in other labels/objects, not just faces, and display them on the UI.

I don't believe a single Coral can be used by both Frigate and the project referenced in this feature request though. You would probably need one for each.

ozett commented 2 years ago

I don't believe a single Coral can be used by both Frigate and the project referenced in this feature request though. You would probably need one for each.

You could be right that each Coral is tied up by the loaded model. https://github.com/blakeblackshear/frigate/blob/master/frigate/edgetpu.py#L47 I can't see how all of Frigate's Corals are detected and used (round-robin?).

But with Frigate's Corals occupied, I would install the few Corals needed directly in the device where Double Take is running. Building up a remote Double Take edge device as a Coral farm seems like a rare use case? Also, it's not that easy to get many Corals running in parallel on one piece of hardware.

Only the new Mac mini with M1 seems to handle 16x Corals... https://github.com/blakeblackshear/frigate/issues/2078#issuecomment-950875092

ozett commented 2 years ago

Maybe I misunderstood, but Double Take uses "detectors" like CompreFace for face detection, so the detector must have AI hardware for face recognition, right? But CompreFace doesn't need it, as mentioned in the docs.

I found it interesting that it is possible to use a GPU (as AI hardware) and more precise face-recognition models with CompreFace. For that you have to start a "custom build" Docker image of CompreFace. Has anybody tried this and has information on getting it going?

ozett commented 2 years ago

Maybe you want to check whether other models from the CompreFace engine "insightface" could run on a Coral (or other AI hardware like a Jetson Nano) as an AI REST server?

-> https://github.com/deepinsight/insightface/tree/master/model_zoo

ozett commented 2 years ago

I looked into the Pi REST server. It seems like a good option to offload TensorFlow detection to networked devices; at a minimum they need to run TensorFlow, maybe on a GPU or on a small board with Coral(s).
It's difficult to find compatible, cheap hardware for Corals in a small footprint: https://github.com/google-coral/edgetpu/issues/256#issuecomment-948127807

tonyhardcode commented 2 years ago

Has there been any more development on this feature?

isaacolsen94 commented 2 years ago

Also curious if there have been any updates on adding the Coral API? Would love to get this going with a spare Coral device!

maxstaHA commented 2 years ago

Hi @jakowenko, I'm also very interested in the "Frigate/Double-take/Coral"-combination for a Home Assistant setup :-)

Daniel-dev22 commented 1 year ago

Hi @jakowenko, I'm also very interested in the "Frigate/Double-take/Coral"-combination for a Home Assistant setup :-)

Also interested in any progress on this? @jakowenko

zac9 commented 1 year ago

Last night I stumbled across this thread and a drop-in replacement for DeepStack. My Home Assistant and Frigate/Coral are on different boxes, so I tried to build the Dockerfile but failed. It's probably missing something from Home Assistant, but it's my first Dockerfile.

HASS addon implementing Google Coral TPU access through REST API, compatible with Deepstack Object detection https://github.com/grinco/HASS-coral-rest-api/

https://www.reddit.com/r/homeassistant/comments/sftgif/ive_created_a_hassio_addon_for_using_google_coral/

strasharo commented 1 year ago

So I built a Docker container for the Coral REST server using this Dockerfile: https://github.com/sstratoti/docker-coral-rest-server. But when I tried it with Double Take, I'm getting this in the logs of the REST server:

INFO:werkzeug:172.17.0.1 - - [02/Mar/2023 11:14:01] "POST /v1/vision/face/recognize HTTP/1.1" 404 -
2023-03-02 11:14:32,816 INFO werkzeug Thread-107 : 172.17.0.1 - - [02/Mar/2023 11:14:32] "POST /v1/vision/face/recognize HTTP/1.1" 404 -
INFO:werkzeug:172.17.0.1 - - [02/Mar/2023 11:14:32] "POST /v1/vision/face/recognize HTTP/1.1" 404 -
2023-03-02 11:15:02,840 INFO werkzeug Thread-108 : 172.17.0.1 - - [02/Mar/2023 11:15:02] "POST /v1/vision/face/recognize HTTP/1.1" 404 -
INFO:werkzeug:172.17.0.1 - - [02/Mar/2023 11:15:02] "POST /v1/vision/face/recognize HTTP/1.1" 404 -
2023-03-02 11:15:32,886 INFO werkzeug Thread-109 : 172.17.0.1 - - [02/Mar/2023 11:15:32] "POST /v1/vision/face/recognize HTTP/1.1" 404 -
INFO:werkzeug:172.17.0.1 - - [02/Mar/2023 11:15:32] "POST /v1/vision/face/recognize HTTP/1.1" 404 -

When I manually posted a file using curl I got this response:

curl -X POST -F image=@face.jpg 'http://172.17.0.1:5001/v1/vision/detection'
`{"predictions":[{"confidence":0.93359375,"label":"person","x_max":247,"x_min":13,"y_max":349,"y_min":69},{"confidence":0.9140625,"label":"person","x_max":509,"x_min":223,"y_max":352,"y_min":15},{"confidence":0.671875,"label":"person","x_max":507,"x_min":425,"y_max":228,"y_min":95},{"confidence":0.5,"label":"person","x_max":122,"x_min":2,"y_max":292,"y_min":88},{"confidence":0.44140625,"label":"person","x_max":512,"x_min":423,"y_max":340,"y_min":96}],"success":true}`

Also when I tried to register:

INFO:werkzeug:172.17.0.1 - - [02/Mar/2023 11:22:00] "POST /v1/vision/face/recognize HTTP/1.1" 404 -
2023-03-02 11:23:01,716 INFO werkzeug Thread-112 : 172.17.0.1 - - [02/Mar/2023 11:23:01] "POST /v1/vision/face/register HTTP/1.1" 404 -
INFO:werkzeug:172.17.0.1 - - [02/Mar/2023 11:23:01] "POST /v1/vision/face/register HTTP/1.1" 404 -
2023-03-02 11:23:11,560 INFO werkzeug Thread-113 : 172.17.0.1 - - [02/Mar/2023 11:23:11] "POST /v1/vision/face/delete HTTP/1.1" 404 -
INFO:werkzeug:172.17.0.1 - - [02/Mar/2023 11:23:11] "POST /v1/vision/face/delete HTTP/1.1" 404 -
2023-03-02 11:23:11,618 INFO werkzeug Thread-114 : 172.17.0.1 - - [02/Mar/2023 11:23:11] "POST /v1/vision/face/register HTTP/1.1" 404 -
INFO:werkzeug:172.17.0.1 - - [02/Mar/2023 11:23:11] "POST /v1/vision/face/register HTTP/1.1" 404 -
2023-03-02 11:23:42,642 INFO werkzeug Thread-115 : 172.17.0.1 - - [02/Mar/2023 11:23:42] "POST /v1/vision/face/recognize HTTP/1.1" 404 -

Did I miss something from the setup?

strasharo commented 1 year ago

Nah, just realized most of it is not implemented in the rest server yet. :(
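
For anyone hitting the same 404s: those paths are the DeepStack-style face routes Double Take calls (recognize/register/delete). Below is a bare-bones sketch of that missing surface, assuming Flask like the existing server. It's only a placeholder that answers with the expected response shapes and does no actual recognition, and field names like userid are assumptions based on the DeepStack API.

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/vision/face/register", methods=["POST"])
def register():
    userid = request.form.get("userid")   # DeepStack-style field name (assumed)
    image = request.files.get("image")    # training image for that user
    # TODO: extract an embedding on the Coral and persist it for `userid`.
    return jsonify({"success": True, "message": f"stub: would register {userid}"})

@app.route("/v1/vision/face/recognize", methods=["POST"])
def recognize():
    image = request.files.get("image")
    # TODO: run face detection/embedding on the Coral and match against stored users.
    return jsonify({"success": True, "predictions": []})

@app.route("/v1/vision/face/delete", methods=["POST"])
def delete():
    userid = request.form.get("userid")
    # TODO: remove stored embeddings for `userid`.
    return jsonify({"success": True})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)  # arbitrary port for the sketch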

arne182 commented 1 year ago

Is there any progress on this feature?

Twanislas commented 9 months ago

I'm interested too, especially coupled with an M.2 Coral that has two accelerator chips, so I could dedicate one to Frigate and the other to face matching. This would be super awesome!

keyaertc commented 7 months ago

Hi All,

Any news about this feature? Thank you.