Open AndrewHoover opened 4 years ago
@VorlonCD Do the face API speed improvements that @johnolafenwa mentioned in this latest version of deepstack benefit us AI Tool users at all? I'm not aware of any features specifically for "faces." Thanks!
@johnolafenwa I've just downloaded the latest version and I think it has increased my processing times significantly, probably nearly doubled. I don't think AITool uses face detection, face recognition and face match so I'm surprised it would have an effect. If this is the case perhaps different releases would be an idea. I shall check my findings on another computer and pay more attention.
Would be interesting to see face detection (and maybe even license plate detection?) on AITool.
License plate detection and reading (OCR) of the plate would be an incredible addition. There are other tools that offer it, but having it here as an "all in one" solution would be great.
@johnolafenwa is that feature outside the scope of Deepstack, or is it something you would consider?
This morning I spun up a brand new Photon VM on my ESXi host.

`docker run -d -p 80:80 vmwarecna/nginx` - the webserver was installed and started
`docker run deepquestai/deepstack:latest` - got DeepStack installed
`docker run -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:cpu-x6-beta`

I found out that if the first line of output after this isn't "/v1/vision/detection" then I need to stop and restart the above command 3 to 5 times until that line shows up. Once it finally shows up that way and AI Tool can talk to it, I'm getting pretty horrible times. The Photon VM has 4 logical CPUs and 2048 MB of RAM; is there a way to speed up the processing?
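Side note: rather than blindly restarting, a small wait loop like the following sketch can poll the mapped port until the container answers (port 80 here is just the mapping from the command above):

```shell
# Poll the DeepStack port until it accepts connections; adjust host/port as needed.
until curl -s -o /dev/null http://localhost:80/; do
  echo "Waiting for DeepStack to come up..."
  sleep 5
done
echo "DeepStack is responding."
```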
Add `-e MODE=low` and see if that helps.
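For example, with the container from earlier in this thread (image tag, volume, and ports are just the ones already used above):

```shell
# Same run command as before, with MODE=low to trade some accuracy for speed.
docker run -e VISION-DETECTION=True -e MODE=low \
  -v localstorage:/datastore -p 80:5000 \
  deepquestai/deepstack:cpu-x6-beta
```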
Thanks @NicholasBoccio; that seems to have helped significantly!
I know this isn't a deepstack support thread, but...does anybody know how to get back to the above view? I added the --restart always flag to the docker container last time I ran it, and now my vm just sits at a command prompt with no deepstack verbosity showing. :-)
I am new to Ubuntu/Linux, so I would also like to know this!
Run the command below to find the DeepStack container name:

`sudo docker ps`

Then run the command below with the container name (assuming "deepstack"):

`sudo docker logs -f deepstack`
@aesterling OCR is well within the scope of DeepStack. DeepStack is currently focused on computer vision, and OCR is within that target. The long-term goal for DeepStack is to support vision, language, and speech. We don't have a timeline for the OCR feature, but it is on our mind.
Hi @johnolafenwa - I am one of the AI Tool users as well, and I am noticing that in ideal lighting conditions the photos identify people 90% of the time; however, at night or in darker conditions DeepStack is hardly able to identify people. (I can identify people myself by looking at the photos, but DeepStack cannot.) Is there some way we can help improve the program's ability to identify these? I have also heard from some other folks about night-time issues.
@ncrispi @johnolafenwa - agree, night image detection has been bad for all my cameras. I pretty much have to stand in front of the camera at night with a sign saying "I am a person" :) One camera is 4K with infrared and the other is 1080p with reasonable ambient light, like from street lights. I'm assuming the dataset used was trained mostly on daylight images.
@classobject - I think you were experimenting with tweaking brightness/contrast on the images? Does that help much? Like, could we apply a set value to camera images from dusk till dawn? (great movie)
Hello @ncrispi @VorlonCD Thanks for reporting this. Can you share any sample images so we can reproduce this on our end and work on a solution?
@johnolafenwa - I deleted most of the older ones, but here are a few:
https://imgur.com/YLsz2Dp https://imgur.com/PqE39kq https://imgur.com/GvwWAg2 https://imgur.com/rMWl6un
I'll keep reviewing the logs and find any additional night time photos that do not pick up people.
@ncrispi - the detection would be a little better if you turned off motion highlighting in BlueIris (the last three you posted).
Is there any way we could use images from our cameras and somehow tag them in our local deepstack instances to help deepstack perform better on our specific cameras?
Here are some of my images of false detections at night. A lot of the good examples I had got deleted, but one of my cameras developed some spiderwebs yesterday which also caused false detections. I sometimes see DeepStack detecting a tree or pole in the distance as a person.
https://user-images.githubusercontent.com/28712950/98568653-f82b3a80-2276-11eb-89a3-3df0a91f4b9e.jpg https://user-images.githubusercontent.com/28712950/98568657-f8c3d100-2276-11eb-9bb3-1faaf8e8c01d.jpg https://user-images.githubusercontent.com/28712950/98568659-f8c3d100-2276-11eb-88a0-91e72f7a9ca7.jpg https://user-images.githubusercontent.com/28712950/98568662-f8c3d100-2276-11eb-9d75-55692a0b6e59.jpg https://user-images.githubusercontent.com/28712950/98571657-6291aa00-227a-11eb-8f24-dc4244d34d3f.jpg
@petermai6655 - I think we are looking for cases where it is very obvious that an object should be detected, but it's not. I haven't had time to go through mine for the last few nights yet, but there is always a cat or fox it misses for sure. False detections may not be possible to fully prevent with this tech, but when it misses a human walking around your house at night, that's an issue. For @johnolafenwa to correctly analyze the image there should not be any annotation, rectangles, text, etc.
@johnolafenwa
Here are a few more images from tonight where people were not detected. https://imgur.com/WLyO4Bw https://imgur.com/ivbpVDs https://imgur.com/YfbdJ5R
I'm wondering if it's currently possible to train DeepStack on our own as it might allow for better detection during darker conditions? How about training faces?
Thanks, guys, for these details. We'll share details from our end towards resolving this before the week ends. @petermai6655, support for custom training is scheduled for release this November. I believe this will open up a lot more possibilities. Thanks for the patience.
@johnolafenwa That is great news and one thing that I would love to take advantage of.
File attached where it failed to spot a person at night.
@johnolafenwa - I have a couple of false detections here if it helps. Mostly it's working great!
This is with the deepstack:latest - any updates on your progress? Thanks for your awesome contribution!
You will notice that most of the time, the confidence of the false positives is much lower (< 70%) than that of accurate detections.
Setting a threshold/minimum confidence for detections is a way of dealing with false positives.
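As a rough sketch of that idea (the field names follow DeepStack's detection response; the values and the 70% cutoff are just examples):

```python
# Hypothetical "predictions" list as returned by /v1/vision/detection;
# the values here are made up for illustration.
predictions = [
    {"label": "person", "confidence": 0.91, "x_min": 10, "y_min": 20, "x_max": 80, "y_max": 200},
    {"label": "person", "confidence": 0.42, "x_min": 300, "y_min": 15, "x_max": 340, "y_max": 90},
    {"label": "dog", "confidence": 0.66, "x_min": 120, "y_min": 150, "x_max": 200, "y_max": 210},
]

def filter_detections(preds, threshold=0.70):
    """Drop detections whose confidence falls below the threshold."""
    return [p for p in preds if p["confidence"] >= threshold]

for p in filter_detections(predictions):
    print(p["label"], p["confidence"])
```

Lowering the threshold trades fewer misses for more false positives, so the right value depends on your cameras.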
I agree. However with the latest deepstack, most of my correct detections fall in the range of 50%-75% so I've had to lower the threshold to 50%.
I've been creating static masks when this happens. Are you using the fork of the AI tool?
Yes, I'm able to workaround the issue, just providing information to further the AI development.
Yes, just for 3 days now, and I'm still playing around with the settings as I'm trying to capture the dog and cat also. I haven't had much success at night yet, where the neighbour's cat jumps over the wall in the same place to do its business. The camera is quite far, so I've got the BI motion settings on max sensitivity and keep tweaking them. BI triggers on the cat, but the AI Tool hasn't picked it up in the dark yet. We'll see again tonight. (Node-RED triggers a sprinkler on it! lol. Trying to stop the cat using our nice lawn as a dumping ground.)
Reduced the pixel movement to 25 now. Constant tweaking for your specific environment. Still on the trial BI version; highly likely I'll pay for it. I've exhausted all other free options.
Hello everyone. As @petermai6655 suggested, training on your own images can greatly improve accuracy. To this end, I am excited to share today that we have added support for training and deploying object detection on your own images with DeepStack.
End to End instructions for doing this is documented here https://docs.deepstack.cc/custom-models/index.html
Note that this feature requires running the latest DeepStack. Supported DeepStack versions are:
deepquestai/deepstack:cpu-2020.12
deepquestai/deepstack:gpu-2020.12
And yes, NVIDIA Jetson is supported too, just use
deepquestai/deepstack:jetpack
Do give this a try - we would love to help with any issues and to see how this improves your ability to customize DeepStack to your needs.
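Once a custom model is deployed, calling it looks roughly like the sketch below (the host, port, and model name "my-model" are placeholders; the response shape mirrors the built-in detection endpoints):

```python
import json

def custom_model_url(host, model_name):
    # Custom models are served under /v1/vision/custom/<model-name>
    return f"{host}/v1/vision/custom/{model_name}"

def parse_predictions(body):
    """Pull (label, confidence) pairs out of a DeepStack JSON response."""
    data = json.loads(body)
    return [(p["label"], p["confidence"]) for p in data.get("predictions", [])]

# Example response body, mirroring the detection endpoints' shape:
sample = ('{"success": true, "predictions": '
          '[{"label": "cat", "confidence": 0.85, '
          '"x_min": 1, "y_min": 2, "x_max": 3, "y_max": 4}]}')
print(custom_model_url("http://localhost:80", "my-model"))
print(parse_predictions(sample))
```

In practice you would POST the image as multipart form data (field name "image") to that URL and parse the JSON that comes back.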
And DeepStack is now open source on GitHub: https://github.com/johnolafenwa/DeepStack
@johnolafenwa that's some impressive documentation! I can't wait to try it out.
Thank you for your hard work!
Awesome, definitely going to go through this custom model option. Possibly to train it to recognize my neighbour's cats. Although changing mode to high and retesting did help.
Hello everyone, if you are trying out the custom model feature and running into any issues. Please watch this video we made to demonstrate the whole process. https://www.youtube.com/watch?v=wQKUQ6Y2n3Q
You can also check the updated docs https://docs.deepstack.cc/custom-models/ for guidance
The Jetson version is AWESOME!!! I went from 700 ms with CPU (x3) to 450 ms with GPU (x4), down to 178-200 ms with the Jetson Nano. At one point I got a few microsecond process times for very clear images. (I use 4K cameras, and yes, I send 4K resolution to the AI because I want to give it as much resolution as possible; however, with the Jetson Nano I did lower the res to 2K, which still yielded a 200-300 ms process time.) Since AI Tool supports multiple "DeepStack URL(s)", I am going to play with running multiple Jetson Nanos for processing, just out of curiosity how it will process.
There are times I get 600 images sent to DeepStack within a 20-minute period (heavy traffic in the backyard + 22+ cameras).
I have secretly been testing and researching the best method of running DeepStack. I even purchased a physical server with 32 cores and will be installing a few Quadro GPUs to see how it processes; if I can get processing down to nanoseconds, that would be ideal. I have not yet tried Intel; I'm sticking with NVIDIA and GPU support via Docker/Ubuntu installations.
Sorry for the long reply, but like I said, I have literally spent the past 2 months working on different aspects of BI, AI, DS, and HA integrations and processing to create a solution as a whole. Hearing that DS is open source is amazing. After the new year I definitely plan on contributing as much as I can in R&D, time, and even some investment if necessary.
Thanks. Don't know who all to @ yet, but I will soon. I plan to be around more often and more active to contribute as I can. Thanks to all for everything!!
@johnolafenwa thank you for making this open source. I have been using this for a long time and like the support it's getting. Also waiting on the release for NCS2.
@ipeterski I'm using an Intel NCS2. Did you see better results on the Jetson Nano 2GB or better? Hoping to get one of those.
@sankeerthb I have the newer 4GB version. Do not quote me, but I think I read they are going to move more towards the Jetson Nano. I want to see if I can potentially cluster 2 Nanos together to speed up the process even faster; my goal is realistically in the microseconds processing time frame.
In my experience with research and testing, running DeepStack off a machine with just CPU support versus the same machine with a GPU and CUDA, the GPU was quite a bit faster, so the Jetson Nano makes a lot of sense to me.
@johnolafenwa or anyone! Having issues with trying to get the custom model working - I'm following the video but it's not working for me; see attached screenshot. Now, my Docker experience is a whopping week, so that is most likely the issue LOL. I had/have a Docker image running on Docker Desktop; I assume I didn't need to stop/delete that before I started, if that helps at all. TIA for any help.
Trying to use Colab and go through the steps. Getting stuck at this - it looks like the zip is missing classes.txt, but I just checked and it is there:

`!python3 train.py --dataset-path "/content/Dataset"`

```
Traceback (most recent call last):
  File "train.py", line 466, in <module>
    with open(classes_file,"r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/content/Dataset/classes.txt'
```

Got it - the zip had extracted into a nested folder, so the path needed to be:

`!python3 train.py --dataset-path "/content/Dataset/Dataset"`
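In case anyone else hits this: a tiny helper like the following (hypothetical; train.py just expects classes.txt directly under --dataset-path) can find the real dataset root when the zip extracts into a nested folder:

```python
import os

def find_dataset_root(path):
    """Return the directory that actually contains classes.txt,
    checking one level down in case the zip extracted into a
    nested folder."""
    if os.path.isfile(os.path.join(path, "classes.txt")):
        return path
    for entry in sorted(os.listdir(path)):
        nested = os.path.join(path, entry)
        if os.path.isdir(nested) and os.path.isfile(os.path.join(nested, "classes.txt")):
            return nested
    raise FileNotFoundError(f"classes.txt not found under {path}")
```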
Now, how do I update my container with Portainer? Add a new volume or env? The documentation and video are not clear.
`sudo docker run -v /path-to/my-models:/modelstore/detection -p 80:5000 deepquestai/deepstack`
Does this container replace the Vision detection deepstack? Sorry it's not too clear.
Hello @balucanb, I see you ran deepquestai/deepstack:cpu 2020.12; note that there should be a dash between cpu and 2020, so it should be cpu-2020.12.
I believe the space in between is causing the error.
@Yonny24, you need to add a new volume mapping to map your model directory to the /modelstore/detection directory in Docker; you can enable both your custom model and the vision detection in DeepStack.
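Concretely, that combination would look something like this (a sketch; substitute your own model directory, port, and image tag):

```shell
# Built-in detection plus a mounted custom-model directory in one container.
sudo docker run -e VISION-DETECTION=True \
  -v /path-to/my-models:/modelstore/detection \
  -p 80:5000 deepquestai/deepstack:cpu-2020.12
```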
Perfect, thanks. So I can use my original DeepStack container and just redeploy it once I've added the new volume in settings? Also, I was rather confused watching the video, as it uses Google Colab's CPU. I'll be using my local CPU and not the Colab cloud service, I understand - is this feasible? Maybe I got the wrong end of the stick? I used the labelling tool to tag a specific animal that the AI often misses at night, so the idea was to train it to catch that object's movement more efficiently.
@johnolafenwa WOW. This is why I never learned to code! LOL. Thanks so much, John, I will try that. Question: understanding that my first (and only) intro to Docker, coding, etc. has been with this project - I have a vision-detection model running in Docker Desktop right now on port 8383 (working normal and fine). Can I run that one and the custom detection model at the same time on the same port, do I need to use a different port for the new one, or can I only run one model at a time? I just read your reply to @Yonny24; I think they are describing the same thing I am asking... TIA!
@Yonny24 you appear to have the same question as me in reference to running 2 different models at the same time. I am running this with Docker Desktop - can you explain how to do what @johnolafenwa is telling us to do? Sorry if it is a simple task, but I am brand new to using Docker. TIA
Yep I have deepstack (vision detection) running windows for docker right now. I do not want to break this. I think I understand that vision detection and custom model detection can work on the same deepstack port in tandem (same container etc just with an extra element to customize more objects that are not standard)?
Just need to add an additional volume to the current container? I can probably achieve this with portainer under Volumes and redeploy but am cautious. Also bit confused by the instructions using colab google still. Thought we were using our own local cpu. Maybe the colab (new to me) was just an example.
Confused is the key word for me! Basically I am in the same position - I have a working copy and do not want to mess that up; it took me too long to get it working because of my lack of knowledge/skill with this. Like you, I am stuck on getting the custom model I have trained to work. Not understanding how to make this new volume, I am trying to read the Docker docs now, but it is all Greek to me. You mentioned Portainer - I have heard of it but don't really know what it is. Is it part of Docker or some add-on where you can put/make new containers (I think that is the correct wording)? I assume they are run locally?
Portainer is just another container, but it's a nice, user-friendly interface to manage all your other containers without having to use the command line.
Would the volume be created like this in Portainer on the DeepStack container? Bind to the D drive where the training images were created using the labelling tool?
What is this step also?
Appear to be making some progress running the training after labelling various snapshots. Not entirely sure what it's doing. :)
@Yonny24 are you deploying Portainer as standalone or swarm? Also, the last screenshot is exactly what it should be doing when it trains. Just FYI, I had around 300 images in my train folder and I let it do all 300 epochs; it took me about 6 1/2 hours to get that done. The next step is where I am stuck, ergo my standalone/swarm question.
@VorlonCD Just curious if anyone here has thought about the ominous absence of the DeepQuest folks and the future of DeepStack's viability. They haven't had a release in a long time, their website hasn't had much revision in a long time, and, judging from this post, they don't seem too interested in sales requests. https://forum.deepstack.cc/t/is-deepquest-dead/431
I'm just wondering with all of these wonderful changes, how difficult would it be to pivot this work over to another AI engine?