jakowenko / double-take

Unified UI and API for processing and training images for facial recognition.
https://hub.docker.com/r/jakowenko/double-take
MIT License

[FEAT] Multiple Deepstack Detectors #184

Open dannytsang opened 2 years ago

dannytsang commented 2 years ago

Is your feature request related to a problem? Please describe.
I would like to add additional DeepStack instances and allow Double-Take to use them.

Describe the solution you'd like
The ability to use multiple instances of DeepStack (as well as other detectors). The benefits include:

  1. Fail-over scenario should a detector not be available (e.g. maintenance or a timeout).
  2. Test out different custom models which reside on different instances.
  3. Compare different hardware, e.g. when each instance runs on a different device.

Training unknown images should offer an option to select which DeepStack instances to train the new images on. A more advanced option would be to train each instance in turn to maintain maximum availability.

Additional context
I am running DeepStack on a Jetson Nano and training images takes forever. At the moment (2022), it would be more affordable to get 4 Jetson Nanos vs. 1 GTX 1650. I'm not sure whether a GTX 1650 would outperform a Jetson Nano, but I find the performance on the Nano acceptable; the only issue I have is when I train models with new images.

The concept would hopefully allow multiple Coral AI chips as well #121.
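
To make the idea concrete, a hypothetical config could look something like the sketch below; the instances, failover, and train keys don't exist in Double Take today and are purely illustrative:

```yaml
# Hypothetical Double Take config sketch — not a real feature today.
# The instances/failover/train keys are invented here purely for illustration.
detectors:
  deepstack:
    instances:
      - url: http://jetson-nano-1:5000   # primary instance
      - url: http://jetson-nano-2:5000   # fallback / extra model instance
    failover: true     # try the next instance if one is down or times out
    train: all         # train new faces on every instance (or a named subset)
  compreface:
    url: http://compreface:8000
    key: your-api-key
```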

eldadh commented 2 years ago

I am running on DeepStack on a Jetson Nano and training images takes forever. At the moment (2022), it be more affordable to get 4 Jetson Nano's v.s. 1 GTX 1650. I'm not sure if a GTX 1650 would out perform a Jetson Nano however I find the performance to be acceptable on the Nano and the only issue I have is when I train models with new images.

Can you please explain how you got DeepStack to run on a Jetson Nano with 2 GB of RAM with face recognition?

dannytsang commented 2 years ago

@eldadh I'm using the 4gb version :smile:

LordNex commented 2 years ago

I'm doing the same thing with the 4 GB Nano, but even then the only way I could get half-decent performance was to put Frigate on an 8 GB RPi4 with a Coral TPU. I just set up CompreFace inside Home Assistant as an add-on to have a second detector, but it's still pretty slow.

You could stack several Nanos together in a Kubernetes cluster and spread the DeepStack container load across them. But one of the other issues is that the developer of DeepStack has stated that since the Nano is near EOL/EOS, he doesn't plan to continue supporting it past where it is now.

Honestly, I'd like to see a different NVR than Frigate. Even with a TPU I have a hard time with the decoding and being able to display the feeds in real time in Home Assistant, even using the Frigate card. I like the features of having the thumbnails and everything, but it's usually pretty laggy. I've tried messing with the input and output args, but it seems hit or miss. One issue may be that some of the cameras are WiFi, but this also happens with gigabit-connected PoE cameras. I'll be VLANing out my network in a day or so when I get my Firewalla Gold. But any chance of supporting a different NVR, like Blue Iris or something better than Frigate?

ozett commented 2 years ago

I'm not sure if a GTX 1650 would outperform a Jetson Nano

I am running DeepStack with a GTX 1660 and the NVIDIA GPU container. DeepStack still takes almost forever... I don't think that's a hardware problem; the problem seems to be the suite itself.

CompreFace had better results, but both together slow down the whole Double Take pipeline.

LordNex commented 2 years ago

My Nano is pretty quick. Currently I have Frigate on an RPi4 8 GB with a 64-bit version of Raspbian, Docker, and docker-compose, and a Coral TPU assigned to it. I then have the 4 GB Jetson Nano running DeepStack in docker-compose. Then I have CompreFace along with Double Take installed inside my Home Assistant install.

Using 2 detectors helps, but it just runs the detection twice. I wish DT could parse the info between the 2 detectors for a combined threshold value that we can base automations from. To do it now takes some templating.
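
Conceptually it would be something like the sketch below — a standalone illustration, not anything Double Take does today; the weighting and threshold values are made up:

```python
# Standalone illustration of a combined-threshold check across two detectors.
# The 50/50 weighting and the 75 threshold are made-up values, not DT options.

def combined_confidence(deepstack_conf: float, compreface_conf: float,
                        deepstack_weight: float = 0.5) -> float:
    """Weighted average of two detector confidences on a 0-100 scale."""
    return (deepstack_conf * deepstack_weight
            + compreface_conf * (1.0 - deepstack_weight))

def is_match(deepstack_conf: float, compreface_conf: float,
             threshold: float = 75.0) -> bool:
    """True when the combined confidence clears a single automation threshold."""
    return combined_confidence(deepstack_conf, compreface_conf) >= threshold

# Example: DeepStack reports 68, CompreFace reports 88 -> combined 78 >= 75.
print(is_match(68.0, 88.0))  # True
```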


ozett commented 2 years ago

@LordNex Is the Coral TPU only for Frigate? Is any TPU support configured for DeepStack or CompreFace?

(I'm having trouble getting DeepStack-GPU running with the NVIDIA container; it runs fine on CPU. I will test some things today.)

LordNex commented 2 years ago

A TPU wouldn't really help as much as a GPU for recognition. The TPU is for fast object detection, and that is then very quickly passed to the Jetson Nano for DeepStack and an i5 laptop running Home Assistant, DoubleTake, and CompreFace along with a slew of other things.


jakowenko commented 2 years ago

Hey @dannytsang thanks for the request. I'm getting back into the swing of things and will start to consider how to best handle this. Hope things are still working well for you.

guarddog13 commented 2 years ago

My Nano is pretty quick. Currently I have Frigate on an RPi4 8 GB with a 64-bit version of Raspbian, Docker, and docker-compose, with a Coral TPU assigned to it. I then have the 4 GB Jetson Nano running DeepStack in docker-compose, and CompreFace along with Double Take installed inside my Home Assistant install. Using 2 detectors helps, but it just runs the detection twice. I wish DT could parse the info between the 2 detectors for a combined threshold value that we can base automations from. To do it now takes some templating.

Are you able to get DeepStack GPU to work with Double Take? If I try to train with deepstack:gpu I'm getting a 500 error code... it works fine with the regular DeepStack image.

guarddog13 commented 2 years ago

I did it as a docker run command. I have both Frigate and CompreFace running on my GPU. The CPU version of DeepStack periodically crashes by eating up my RAM, and the GPU version gives a 500 error with Double Take.

I currently have Double Take in Home Assistant; CompreFace and Frigate are running on my laptop with Ubuntu, through the GPU. Works nicely, but I really do like DeepStack when it's working right.
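
For reference, the DeepStack GPU image is normally launched with something along these lines (port mapping and volume name are examples; it needs the NVIDIA Container Toolkit for --gpus all to work):

```bash
# Typical deepstack:gpu invocation (port/volume are examples, adjust as needed).
docker run -d --gpus all \
  -e VISION-FACE=True \
  -v localstorage:/datastore \
  -p 8010:5000 \
  deepquestai/deepstack:gpu
```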

guarddog13 commented 2 years ago

Frigate GPU with CompreFace GPU is near instant as we walk up to the door. I don't think I need a TPU while using my GPU. Running it this way takes a lot of load off my CPU and is speedy to process.

guarddog13 commented 2 years ago

Now I wonder what I'm doing wrong for deepstack:gpu to fail. I like CompreFace but miss running DeepStack and CF together.

guarddog13 commented 2 years ago

I always struggle with docker-compose unless I have a template, lol. It took forever to get Frigate set up with compose, and I ended up collapsing it all into a docker run command because docker-compose wasn't recognizing the NVIDIA Container Toolkit, yet docker run was... go figure... they must be two different programs?

I had the same issue with CompreFace, which caused me to reinstall Docker, and that finally got CompreFace up. I'm sure I could get Frigate up on compose now too, but it's been running flawlessly, and if I want to change anything I adjust the config and restart, lol. I wouldn't know where to start with a DeepStack compose file as I've never built one. I'm still learning and use a lot of tutorials and failures to get to where I want to be, and the only way I've found to run DeepStack is with a run command.

LordNex commented 2 years ago

Yes, there's a bit of a learning curve there. When I dig mine out I'll see if I still have my docker-compose files for you, although mine will obviously be for different hardware. Also, make sure you're at least running the base of Ubuntu 20; 18 has some issues with Docker and docker-compose, which might be what you're bumping into. But there are plenty of resources online that help. I'm pretty frequent on Reddit.
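
In the meantime, a DeepStack GPU service in compose usually looks roughly like the sketch below (image tag, ports, and volume are examples; the GPU reservation block needs a newer docker-compose plus the NVIDIA Container Toolkit):

```yaml
# Rough compose equivalent of the usual deepstack:gpu docker run command.
# Ports, volume name, and service name are examples only.
version: "3.8"
services:
  deepstack:
    image: deepquestai/deepstack:gpu
    restart: unless-stopped
    environment:
      - VISION-FACE=True
    volumes:
      - localstorage:/datastore
    ports:
      - "8010:5000"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  localstorage:
```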


guarddog13 commented 2 years ago

Thank you so much! It would be greatly appreciated, especially if I can get it running on the GPU with Frigate and CompreFace. Even with just those two on, my machine eats up RAM, so I have a 16 GB stick coming today to add to the empty slot alongside the 8 GB already there. 24 GB should give my machine a lot more headroom so I can actually use the laptop for other things.

I have a question that's sort of off topic. If I want to take my laptop with me, is there a way to keep it visible on the network with the same IP? I know I can use a VPN to get into my network, but obviously the VPN machine is on a virtual network with an IP address that doesn't match my DHCP server. I want to keep this computer doing the NVR/facial recognition work but still have the freedom of a portable laptop... I thought about seeing if mDNS would work, but I can't even get to homeassistant.local on my network, and my router is multicast capable.

guarddog13 commented 2 years ago

I see you respond via email and I'm not sure if you see edits, so please check: I added a paragraph with a question for you. Thank you!

guarddog13 commented 2 years ago

I thought I figured it out, and I'm glad I have more RAM coming, lol. I increased my swap space and it quit crashing. I also now have a working docker-compose.

Now I have a new problem: when I try to train files in Double Take I'm getting a 400 "no face detected" error. I have progress, at least.
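
(For anyone else hitting the RAM crashes: adding swap on Ubuntu is just the usual few commands; the 4 GB size and /swapfile path are examples.)

```bash
# Create and enable a 4 GB swap file on Ubuntu (size and path are examples).
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```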

guarddog13 commented 2 years ago

I have tried both altering my detect settings in DT and rebuilding/restarting DT. I can get DT to train fine with the CPU version, but once I switch to GPU I'm getting error 400 with no face detected.

guarddog13 commented 2 years ago

I'm running it directly on my own Linux laptop. No matter where I try to post to DeepStack, I'm getting the same error. The errors say that it's timing out.

guarddog13 commented 2 years ago

DeepstackException: Timeout connecting to Deepstack, the current timeout is 20 seconds, try increasing this value

Traceback:

File "/usr/local/lib/python3.8/site-packages/streamlit/script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "/app/deepstack-ui.py", line 90, in <module>
    predictions = process_image_face(pil_image, dsface)
File "/usr/local/lib/python3.8/site-packages/streamlit/caching.py", line 604, in wrapped_func
    return get_or_create_cached_value()
File "/usr/local/lib/python3.8/site-packages/streamlit/caching.py", line 588, in get_or_create_cached_value
    return_value = func(*args, **kwargs)
File "/app/deepstack-ui.py", line 57, in process_image_face
    predictions = dsface.recognize(image_bytes)
File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 297, in recognize
    response = process_image(
File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 124, in process_image
    response = post_image(url=url, image_bytes=image_bytes, timeout=timeout, data=data)
File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 104, in post_image
    raise DeepstackException(

Any time I try to cat the log, cat won't open it. But I was able to post images through deepstack-ui, and the above, I believe, is from its log. Hope it helps.

guarddog13 commented 2 years ago

So now I've gotten past the above errors by restarting Docker after bringing the compose file up; I found that tip from someone on the DeepStack forums. Now, though, I'm back to getting the 400 error, but I'm at least able to send some of these logs to see if anyone spots anything I don't, because this all looks Greek to me.

DeepstackException: Error from Deepstack request, status code: 400

Traceback:

File "/app/deepstack-ui.py", line 139, in <module>
    response = dsface.register(face_name, image_bytes_register)
File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 277, in register
    response = process_image(
File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 130, in process_image
    raise DeepstackException(

LordNex commented 2 years ago

OK, finally got back into the swing of things and got everything working again. Has there been any more work done on using multiple DeepStack detectors? I prefer my Jetson Nano but wouldn't mind using the CPU version in Home Assistant as a backup.

@guarddog13 Where are you still having issues? Double Take, DeepStack, or Frigate? Or is it getting them all to talk well together?

LordNex commented 2 years ago

So now I've gotten past the above errors by restarting Docker after bringing the compose file up; I found that tip from someone on the DeepStack forums. Now, though, I'm back to getting the 400 error, but I'm at least able to send some of these logs to see if anyone spots anything I don't, because this all looks Greek to me.

DeepstackException: Error from Deepstack request, status code: 400

Traceback:

File "/app/deepstack-ui.py", line 139, in <module>

response = dsface.register(face_name, image_bytes_register)

File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 277, in register

response = process_image(

File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 130, in process_image

raise DeepstackException(

Maybe this was already answered, but what are you running DeepStack on? Just a bare-metal install of Linux with a GPU, or something custom like the Jetson?

guarddog13 commented 2 years ago

It's DeepStack with my GPU. I run it on my laptop with an NVIDIA GTX 1660 Ti mobile. I removed Windows from the PC and put Ubuntu Desktop on it.

I thought I had it figured out because I couldn't get anything on the GPU to work at one point. I uninstalled and reloaded all drivers. This allowed me to properly get the GPU working again but I can't get past deepstack not working on the GPU.

It runs beautifully on the CPU, with 6 cores (12 threads). If I put it on the GPU, nvidia-smi claims it is up and running, and I can reach it at localhost:8010 and get a success message. But if I send a picture to it, Python, the deepstack-ui Docker app, and Double Take all give "no face" errors. The UI app doesn't show the typical photos that come up when you log in.

I've come to the conclusion that my GPU and DeepStack just don't play nice together.

(Agent DVR works quite well on the GPU, so currently I'm running it that way: Agent DVR sits on the GPU, and DeepStack is in High mode with the CPU thread count set in the docker-compose file.) DeepStack's facial recognition isn't as accurate as I would expect (because it's on the CPU, or not?), but the person detection is about 98-99% perfect.
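
The DeepStack part of that compose file is roughly the excerpt below; MODE is DeepStack's documented Low/Medium/High setting, while the THREADCOUNT variable name is from memory, so verify it against your DeepStack version:

```yaml
# Illustrative DeepStack CPU service excerpt. MODE=High is DeepStack's
# documented speed/accuracy setting; THREADCOUNT is my best recollection of
# the thread-count variable and should be verified before relying on it.
services:
  deepstack:
    image: deepquestai/deepstack
    environment:
      - VISION-FACE=True
      - VISION-DETECTION=True
      - MODE=High
      - THREADCOUNT=12
    ports:
      - "8010:5000"
```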

I see some Corals are starting to ship. Using my laptop for this was always a bridge until I could get my hands on one for less than eBay prices, lol, as I by no means want my laptop stuck in the house.

I have a VPN but haven't figured out a way for Home Assistant on a separate RPi to see Agent DVR/DeepStack when I'm using the VPN off the local network.

LordNex commented 2 years ago

Finally got mine up and pretty much running properly again. Still more cameras to install, so I might have to upgrade hardware. But I went and redid my entire network and picked up Starlink. Also picked up a 40-core, 256 GB Dell PowerEdge R620, so that's now running Home Assistant, Double Take, CompreFace, and Facebox. I still have DeepStack running on my 4 GB Nano, and Frigate 0.11.0RC1 on an 8 GB RPi4 with a Coral TPU in docker-compose. Everything seems to be working OK, but I'm still working out getting DT to play nice with 3 detectors. Still can't use 2 DeepStacks, but you can use one of each at least. Most of the time CompreFace wins as having the correct response, but there are other times it shits the bed totally and somehow DeepStack pulls out a win. Facebox isn't very good unless the image is super clear and obviously a face; it doesn't do well with side views, look-aways, or long distance. But it's slowly getting better. Funniest part is I have my rack all lit up with LEDs and toys and crap and a Guy Fawkes mask. I keep the TPU behind the mask, and that bright little sucker lights up that entire mask every time it goes to process. Looks pretty wicked at night.

guarddog13 commented 2 years ago

It was the recent version of DeepStack. Apparently the upgrade broke GPU support on certain cards, so I rolled it back and it's working well now.

LordNex commented 1 year ago

It was the recent version of DeepStack. Apparently the upgrade broke GPU support on certain cards, so I rolled it back and it's working well now.

I quit using DeepStack as I believe the maintainer is finished with it. Check out CompreFace. Works way better with just CPU and also has plugins for age, gender, landmarks, and masks

guarddog13 commented 1 year ago

It was the recent version of DeepStack. Apparently the upgrade broke GPU support on certain cards, so I rolled it back and it's working well now.

I quit using DeepStack as I believe the maintainer is finished with it. Check out CompreFace. Works way better with just CPU and also has plugins for age, gender, landmarks, and masks

I think you're right that the dev is done. My issue is that I'm using AgentDVR with DeepStack version 2201.9.1, and it's quick (50-80 ms) for people detection and very accurate.

AgentDVR only integrates with DeepStack and SenseAI... I tried SenseAI, but it's slow and not very accurate. I've tried CompreFace before and really do like it, but I don't want to switch from AgentDVR now, lol.

LordNex commented 1 year ago

I prefer using Frigate with a Coral TPU for person detection; it sends MQTT data to Double Take, which then runs the facial detection. I usually get well under 100 ms for the entire process. Basically I have Double Take and CompreFace running inside Home Assistant in one VM, and a separate copy of Ubuntu Server running in another VM with Frigate in Docker Compose. Lastly I have OpenMediaVault acting as the NAS for video storage and PhotoPrism. All of this is on a Dell PowerEdge R620 with 2 x Xeon 3 GHz 10-core (20-thread) CPUs, 256 GB of RAM, and 4 x 1.2 TB 10k SAS drives in RAID 5. It has the Coral TPU on it, and I pass the USB controller hardware through VMware ESXi 7 to the Ubuntu Server. Inference times for 4 x 2K streams are about 43 ms in Frigate.

Not sure what I'm going to use my Jetson Nano and RPi4 8 GB for now.
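
For anyone wiring up the same pipeline, the Double Take side of it is basically the standard config — roughly the sketch below, where hosts, ports, and keys are placeholders:

```yaml
# Sketch of the Double Take config for this pipeline (placeholder hosts/keys;
# option names follow Double Take's documented layout: mqtt, frigate, detectors).
mqtt:
  host: 192.168.1.10              # broker Frigate publishes events to
frigate:
  url: http://192.168.1.11:5000   # Frigate instance Double Take pulls snapshots from
detectors:
  compreface:
    url: http://192.168.1.12:8000
    key: your-recognition-api-key
  deepstack:
    url: http://192.168.1.13:5000
```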