VorlonCD / bi-aidetection

Alarm system for Blue Iris based on Artificial Intelligence.
https://ipcamtalk.com/threads/tool-tutorial-free-ai-person-detection-for-blue-iris.37330/
GNU General Public License v2.0

Future of DeepQuest and DeepStack? #65

Open AndrewHoover opened 3 years ago

AndrewHoover commented 3 years ago

@VorlonCD Just curious if anyone here has thought about the ominous absence of the deepquest folks and the future of DeepStack viability. They haven't had a release in a long time, their website hasn't had much revision in a long time and from this post, they don't seem too interested in sales requests. https://forum.deepstack.cc/t/is-deepquest-dead/431

I'm just wondering with all of these wonderful changes, how difficult would it be to pivot this work over to another AI engine?

VorlonCD commented 3 years ago

Most likely it wouldn't be that hard to use a different engine, and I do want to provide that ability at some point.

If anyone wants to research to find the best ones that would be great! Or even implement!

I took a quick look a while back.... Don't remember if any of these can run fully locally or not, but I think there is a free or low cost option for most.

Sighthound, ImageAI, AWS Rekognition, Google AutoML

aesterling commented 3 years ago

Andrew, I agree it doesn't look very promising, but I'm still hopeful they'll release it open-sourced as promised.

From the information on their website, the two DeepStack developers, Moses and John, are brothers from Nigeria and since early 2020 appear as "Software Engineers at Microsoft." It seems like their priorities have shifted, but they're both still active online and provide their contact info on the website. Not sure if there's any point in reaching out to ask for an update (I've tagged them below), but they both seem very nice. :)

Developed and Maintained by Moses Olafenwa and John Olafenwa, brothers, creators of TorchFusion, Authors of Introduction to Deep Computer Vision and creators of DeepStack AI Server.

Moses Olafenwa Email: guymodscientist@gmail.com Website: http://olafenwamoses.me Twitter: @OlafenwaMoses Medium: @guymodscientist Facebook: moses.olafenwa Github: @OlafenwaMoses

John Olafenwa Email: johnolafenwa@gmail.com Website: https://john.aicommons.science Twitter: @johnolafenwa Medium: @johnolafenwa Facebook: olafenwajohn Github: @johnolafenwa

johnolafenwa commented 3 years ago

Hello @AndrewHoover @aesterling @VorlonCD

Thanks for bringing this up; it is exciting to see how DeepStack is being used. We truly haven't done as much development as we would have wanted to recently. It took us a while to settle into our new jobs. Apologies for the pause.

We are planning to stabilise development and will have new releases in the coming weeks with significant improvements. The project is not abandoned, and a lot more is coming soon.

Please bear with us. Thanks

aesterling commented 3 years ago

@johnolafenwa That's great news and can't wait to hear more. Hope you're doing well and thanks again for the incredible tools. Appreciate the fast response!

AndrewHoover commented 3 years ago

Excellent!!! Thank you so much for the response @johnolafenwa! I'm not sure how much you anticipated that DeepStack would be used in the hobbyist/maker community, but projects like this one that @GentlePumpkin and @VorlonCD have developed have enabled a huge step forward in automation and functionality by bridging DeepStack to our projects.

OlafenwaMoses commented 3 years ago

@VorlonCD @AndrewHoover @aesterling To add to what @johnolafenwa said, the maker and open source community has really inspired us to keep developing DeepStack to serve the community in new ways, considering the wealth of tools and the ecosystem built on top of its capabilities.

That is why we will open-source the project: to accelerate further development and to open the door to more applications and impact.

githubDiversity commented 3 years ago

Out of curiosity, can you guys please tell us why the delay?

Sure, we all have day jobs, and that is a really valid reason, but I am not quite sure why uploading the source code to GitHub should be a challenge.

Please illuminate me as I am here to learn, not b*tch at you.

johnolafenwa commented 3 years ago

@githubDiversity That's a good question. Sure, we could just open-source the current codebase; however, we have been developing a new version of DeepStack that is significantly faster and more stable than the existing one, and it is the source for this new version that we will be releasing. FYI, before this weekend ends we will release the CPU version, as it is complete, with the GPU version following next week. The codebase will be made public before November ends. In the meantime, these upcoming releases will require no activation purchase, with all features free perpetually.

johnolafenwa commented 3 years ago

Also, we are planning a lot more than just releasing the source code. We are making significant efforts to set up a stable dev community, proper documentation, and a strong ecosystem, similar to what exists for projects like Kubernetes.

Tinbum1 commented 3 years ago

Hi, any idea when you will be releasing the beta for the Pi? I just can't get the alpha to work; I must have installed it over 20 times on different machines with new SD cards. It reports the positions of objects back in the wrong place: if the object is near the top it shifts it down, and if it's near the bottom it shifts it up. Left and right are OK.

[Attached images: 1DriveHouse.20200928_004514662.DriveHouse]

johnolafenwa commented 3 years ago

Hello @Tinbum1, the current Pi version has a number of issues. The beta will be Docker-based, as we are switching to supporting DeepStack only on Docker. I can't give a definite time for when this will be released, but it will be out before the end of the year, likely as a November release.

Tinbum1 commented 3 years ago

@johnolafenwa Many thanks for the prompt reply. That's great news; I can't wait to get my power consumption down! Love the work you've done. Thank you.

johnolafenwa commented 3 years ago

Hello @VorlonCD @githubDiversity @aesterling @AndrewHoover @Tinbum1 @classObject

We have just released an update to the CPU version and will follow up soon with the GPU version. Run

deepquestai/deepstack:latest

or

deepquestai/deepstack:cpu-x4-beta

The new update is so much faster and a lot more accurate.

Would love to hear your feedback on this new release, here or on the forum: https://forum.deepstack.cc

I can't say enough how much this conversation has contributed to our energy. Thank you all.

balucanb commented 3 years ago

@OlafenwaMoses Would deepquestai/deepstack:latest be for running this on Docker? I am using the Windows version, which is why I am asking. Thanks.

johnolafenwa commented 3 years ago

@balucanb This is for running on Docker. Note that we are switching to Docker only for all DeepStack editions; the Windows version has been discontinued. The Docker version runs on Windows as well. Should you have any challenges running the Docker version, please let us know.

balucanb commented 3 years ago

Thanks! I assumed that was the answer. No clue how to use Docker; it looks very confusing to me! I am sure I will have questions. Will the current Windows version stop working completely, or is it just not being updated anymore? Thanks again.

Tinbum1 commented 3 years ago

@balucanb all the instructions are on the first page of the AITool forum thread.

classObject commented 3 years ago

@johnolafenwa Thanks for the update! I'm seeing a dramatic speed increase. My response times have gone from an average of 630ms down to an average of 230ms.

Tinbum1 commented 3 years ago

Mine's gone from about 1000ms to 210ms, but requests are coming back as Bad Request in AITool, so I will have to investigate that.

Got http status code '400' in 241ms: Bad Request|81414|1||24
Empty string returned from HTTP post.|81415|1||24

johnolafenwa commented 3 years ago

@Tinbum1 In previous versions of DeepStack, all requests returned 200 | Success. In this new version we have improved error reporting: images that cannot be processed, for example because a corrupt file or a non-image file was sent, will return 400 | Bad Request. I recognize this is a breaking change, and existing integrations will need to take it into account.

Can you share the input you sent that returned this?

I am excited to hear about the speed increases being experienced. This has been a top priority for us.
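The status-code semantics John describes can be handled client-side. Below is a minimal Python sketch; the helper name is hypothetical (this is not code from AITool or DeepStack), and it assumes the usual DeepStack JSON shape with a "predictions" list in the 200 body:

```python
def parse_detection_response(status_code, body):
    """Interpret a DeepStack detection response per the semantics above.

    status_code: HTTP status returned by /v1/vision/detection
    body: the parsed JSON body (a dict), assumed to hold a "predictions" list
    """
    if status_code == 200:
        # Success: return the predictions, which may legitimately be empty.
        return body.get("predictions", [])
    if status_code == 400:
        # New behavior: the image could not be processed (corrupt or not an image).
        raise ValueError("DeepStack could not process the image")
    raise RuntimeError(f"Unexpected HTTP status from DeepStack: {status_code}")
```

Existing integrations that assumed every request returns 200 would need a branch like this to survive the breaking change.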

Tinbum1 commented 3 years ago

@johnolafenwa Thanks for the reply. I'm afraid I only use AITool and know just a bit about computing, so I will have to find out how to do that.

Tinbum1 commented 3 years ago

This was one of the images.

[Attached image: GranaryAI.20201021_195624951.Granary]

GranaryAI.20201021_195624951.Granary.zip

classObject commented 3 years ago

@Tinbum1 @johnolafenwa DeepStack seems to be returning a 400 Bad Request if it does not detect any objects in the image. I changed the exposure on an image that worked until it was so dark there were no detections. It returned a Bad Request. Your image returns a bad request for me as well.

Tinbum1 commented 3 years ago

@classObject Well figured out! I've just put some images from this morning in my input folder and they were processed without any problem.

aesterling commented 3 years ago

[Attached screenshot: Screen Shot 2020-10-21 at 04.27.23 PM]

The speed increase is excellent, so thank you @johnolafenwa!

But I too am getting 400 Bad Request errors from DeepStack. Is that something @VorlonCD can adjust AI Tool to handle, or is there something else that needs to change?

VorlonCD commented 3 years ago

Ok, this version of AITOOL will ignore BadRequest error 400 and ASSUME it means 'false alert' for NOW, but this really should be fixed on the DeepStack side, since we DO want to know when an actual error happens rather than ignore it.

@OlafenwaMoses the JSON response should be returned rather than an error, just with no 'prediction' objects when no predictions are found, like it did before. Error 400 is fine for a bad image or another unexpected error.

THANK YOU for your hard work on this project; it's amazing for preventing false security camera alerts!

AITOOL-VORLONCD.zip
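The temporary workaround described in this comment (treat 400 as a false alert) can be sketched as follows. AITool itself is C#; this Python function is an illustrative stand-in for the logic, not the actual implementation:

```python
def predictions_or_empty(status_code, body):
    """Temporary workaround: swallow HTTP 400 as 'no detections'.

    The beta returned 400 both for unreadable images and for images with
    zero predictions, so for now both cases map to an empty result. That
    also hides genuine bad-image errors, which is why the fix belongs on
    the DeepStack side.
    """
    if status_code == 200:
        return body.get("predictions", [])
    if status_code == 400:
        return []  # assume a false alert (no objects found)
    raise RuntimeError(f"DeepStack error: HTTP {status_code}")
```

Once the server-side bug is fixed, the 400 branch can go back to being treated as a real error.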

johnolafenwa commented 3 years ago

Thanks @classObject @Tinbum1 @VorlonCD

Returning an error on no detections is not by design; this is a bug. I have confirmed it on our side too. It will be fixed and a new update released as soon as possible.

johnolafenwa commented 3 years ago

Hello everyone, the issue with no detections has been fixed. Run

deepquestai/deepstack:latest

or

deepquestai/deepstack:cpu-x5-beta

If any further bugs are encountered, we would love to know, and we will address them as soon as possible.

Tinbum1 commented 3 years ago

@johnolafenwa

Thank you, that's great.

Tinbum1 commented 3 years ago

It also seems to be using a lot less CPU.

AndrewHoover commented 3 years ago

The latest version of DS appears to have corrected my errors!

johnjoemorgan commented 3 years ago

I am running the Windows version of DeepStack and have not seen problems. However, given that DeepStack will be Docker only, I'll move to Docker.

And, thanks to you all! Basically gobsmacked at the time and effort you guys are putting in and the real results you are achieving. I know outpourings of thanks are not the norm here, but I'm sitting in the back of this rental car enjoying one hell of a ride.


johnolafenwa commented 3 years ago

Thanks @johnjoemorgan, this is great to know. Based on feedback, and due to issues with using GPUs in Docker on Windows, we will release a native Windows version sometime in November. When possible, though, I advise using Docker, as it makes it simple to run both on the edge and in the cloud.

NicholasBoccio commented 3 years ago

I am still running the GentlePumpkin AITool but WOW look at these improvements!

My previous MODE=HIGH times were 300-500ms; MODE=LOW times were sometimes under 200ms but mostly around 250ms...

Blue Iris is running on RAID0 NVMe, Win10 Pro, i7-8500, 32GB RAM, and I have a separate Ubuntu box running Docker with deepstack:latest on the same processor/NVMe/RAM:

High: https://imgur.com/9TS6Yrm Low: https://imgur.com/EjFD6q0

There seems to be no difference between HIGH/LOW, so I am just going to remove the ENV entry completely until we learn whether it is still supported. Either way, these are INSANE times and, as mentioned, will allow for practically instant triggers. I have 7 cameras that face a busyish street (13 in total) and could now theoretically get about 10fps of vision detection on CPU! I cannot wait for the GPU version to be finished. Each machine has a Quadro 620, which isn't super powerful, but I am happy to keep the CPU as free as possible for Blue Iris.
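As a rough sanity check on that "~10fps" figure, assuming about 100ms per detection request processed sequentially by one DeepStack instance:

```python
# Back-of-envelope throughput estimate (assumed ~100 ms per request).
latency_s = 0.100
total_fps = 1 / latency_s          # detections per second across all cameras
per_camera_fps = total_fps / 7     # if shared evenly across the 7 street cameras
```

So the ~10 detections/second is a shared budget: with all 7 street cameras triggering at once, each would get only about 1.4 fps of analysis.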

Great work guys - THANK YOU for giving this more of your precious time! https://tenor.com/baakT.gif

NicholasBoccio commented 3 years ago

I should add that I actually have the GPU version (currently stopped) on that Ubuntu box; it would run for a few seconds and then stop. Since I am new to Linux/Portainer/DeepStack I don't know how to provide useful information to you, but I am really excited to let the GPU get some work in and hopefully improve results with better accuracy (assuming it will be more accurate).

johnolafenwa commented 3 years ago

This is great to know @NicholasBoccio. Thanks a lot. We are excited about the great things you will build with DeepStack. Low mode is meant to be faster; we shall investigate why the speeds are the same.

In the meantime, earlier than promised, we are happy to share the new GPU version is available on docker hub.

run

deepquestai/deepstack:gpu

or

deepquestai/deepstack:gpu-x4-beta

The accuracy is the same as the CPU version, but the speeds are much higher.

Would love to know your experience using this.

We have a lot planned for the next few weeks, in both the short term and the long term.

Tinbum1 commented 3 years ago

@johnolafenwa Thank you, I'll be giving it a try as well, as I had similar problems to @NicholasBoccio with it running for a while and then stopping.

Tinbum1 commented 3 years ago

@johnolafenwa Can I check: should this GPU version run on Windows Docker Desktop?

NicholasBoccio commented 3 years ago

Update on the CPU version: The High/Low seem to be working now: High: https://imgur.com/ZL9ctb6 Low: https://imgur.com/QWE1RLP

These are with me sending as many triggers as I can in BlueIris, the times are obviously better when the triggers are more natural.

Regarding the new GPU version... I cannot get the server to start after the install or upgrade, so now I no longer have a semi-working GPU version.

Ubuntu 20.04 LTS, Portainer 19.03.13, NVIDIA Quadro 620, NVIDIA-SMI 450.80.02, Driver Version 450.80.02, CUDA Version 11.0

I will now try to go around Portainer, see if I can make it work, and report back.

Tinbum1 commented 3 years ago

I've tried installing the GPU version on 3 different computers in Windows Docker Desktop. I can see that DeepStack is activated in a web browser, but in AITool there is this error:

2020-10-23 20:43:13.623174|Debug|AITOOLS.EXE|IsValidImage|127.0.0.1:82|Gate Day|None| Image file is valid: 1Car1.20201023_204307855.Gate.jpg|239|2||28
2020-10-23 20:43:13.624171|Debug|AITOOLS.EXE|DetectObjects|127.0.0.1:82|Gate Day|1Car1.20201023_204307855.Gate.jpg| (1/6) Uploading a 1359798 byte image to DeepQuestAI Server at http://127.0.0.1:82/v1/vision/detection|240|1||28
2020-10-23 20:43:23.631247|Error|AITOOLS.EXE|DetectObjects|127.0.0.1:82|Gate Day|1Car1.20201023_204307855.Gate.jpg| A task was canceled. [TaskCanceledException] Mod: d__30 Line:990:48|241|1||29

johnolafenwa commented 3 years ago

@Tinbum1, NVIDIA GPU access is not supported on Windows Docker Desktop, except via WSL.

We have plans on bringing the GPU version to docker via a native windows version or a DirectML based docker approach.

@NicholasBoccio How did you run the GPU version? Typically you would start it with the command

docker run --gpus all -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:gpu

Note also that you need to have installed the NVIDIA Container Toolkit; see https://python.deepstack.cc/using-deepstack-with-nvidia-gpus

Tinbum1 commented 3 years ago

I have WSL 2 enabled in Docker Desktop

[Attached screenshot]

johnolafenwa commented 3 years ago

@Tinbum1 Cool. The issue is that GPU support, even with WSL2, is a little complicated. Basically, you have to do the following.

This is quite a long process and it is a preview feature.

Tinbum1 commented 3 years ago

@johnolafenwa Thank you for the instructions, I will give it a go this afternoon.

NicholasBoccio commented 3 years ago

Wow - The GPU version finally VWERKS!

The MODE=High/Low also works fine: High: https://imgur.com/6fKKQaw

Low: https://imgur.com/jGva8VP

I am still running this on a separate box, but I am confused by the times: they are similar to the CPU times.

Here is what nvtop shows with this running for about 30 minutes at High: https://imgur.com/iqTMeT5

BTW, I am happy with either the GPU or CPU being at or under 100ms, which exceeds my needs, but I was expecting an order-of-magnitude improvement based on what the forum said about the GPU being 5-20x faster.

Thank you for all of this work! I am now going to get this working on the same machine that AITool is running on Windows.

NicholasBoccio commented 3 years ago

I have tried installing with both the Ubuntu 18 and 20 Windows apps and the WSL 2.0 setup. I keep getting this error:

nicholasboccio@System:~$ sudo docker run --gpus all -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:gpu
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
ERRO[0001] error waiting for container: context canceled

johnolafenwa commented 3 years ago

We shall investigate the speed difference on gpu. Thanks for the details.

It is good to know you got the GPU version working. For the error in WSL, follow this guide to install the NVIDIA Container Toolkit: https://docs.nvidia.com/cuda/wsl-user-guide/index.html#installing-nvidia-docker

NicholasBoccio commented 3 years ago

@johnolafenwa Thank you for the instructions, I will give it a go this afternoon.

I think I found our problem (assuming you were also trying to use the Docker Windows version): https://imgur.com/a/hNScb31 from https://docs.nvidia.com/cuda/wsl-user-guide/index.html#installing-nvidia-docker (about halfway down).

It's almost 4am; I will finish this with their recommendations when I get up. Feel free to jump ahead of me @Tinbum1.

johnolafenwa commented 3 years ago

Hello @VorlonCD @githubDiversity @classObject @aesterling @NicholasBoccio @Tinbum1 @johnjoemorgan @balucanb @AndrewHoover

Thank you all for the feedback over the past days. We are excited to share the latest builds, with massive improvements in speed for the face APIs. The prior updates applied only to object detection; we have now extended that to face detection, face recognition, and face match.

Run

For CPU

deepquestai/deepstack:cpu-x6-beta

or

deepquestai/deepstack:latest

For GPU

deepquestai/deepstack:gpu-x5-beta

or

deepquestai/deepstack:gpu

Would love to know your thoughts and feedback on this. Note that the new face APIs are not only faster but also more accurate than the previous ones. Thank you all.

balucanb commented 3 years ago

Thanks! Can't wait to try them out!