Open VorlonCD opened 4 years ago
I actually forced myself not to check on AI Tool because I have zero time right now, but reading THIS, I couldn't resist logging in to state my appreciation! That's a lot of amazing work and I'll check it out as soon as my holiday starts in 4 weeks. Thank you for your contribution, and I'll get back to you soon :)
Hell, that's super amazing stuff!
And the comment is real comedy gold 😂
Ha, thanks! Old camera text files will be auto-imported and saved to JSON soon. Also noticed that you don't really need to do masking on your end, since in BI you can use the 'blackout masked areas' option on the 'AI' camera. Unless we think it's worth it (or doable) to implement some kind of auto/temp masking to handle repeated detections of something we don't care about, like a car in the driveway all night. And a quick look suggests danecreekphotography's app doesn't need a second camera configured in BI? Downsides? Hmm, I wonder if the BI API is good enough that I could auto-configure our version of the camera?
1.67 already implemented some means to avoid duplicate cameras; I'm running without duplicate cameras myself, but I did not update the guide accordingly. It works by flagging confirmed alerts. We can discuss more details as soon as my holiday starts.
FYI, I'm going to do another pull request when I can get it stable in a few days. I hope to make it so you can trigger a full list of customizable events/posts/etc. on detection. Maybe even play a sound based on detection type.
If you don't use a second camera stream, the blacked-out areas end up blocked in the saved images and recordings too. I would think you'd want to block the area from detection without deleting it from the recording entirely.
@VorlonCD & @gentlepumpkin Wondering if either of you have considered adding an 'other' section to the relevant objects list. It's possible someone would want to detect something more specific with a custom DeepStack model, such as a mailman, mail truck, etc. I guess you'd have to have a few options: {custom_model_name: "", relevant_labels: []}
Also, one last thing: would it be possible to eventually add confidence limits per object type? Maybe you want 50% confidence on a person to catch all the people, but 90+ on cars for the same camera.
Thank you for all the work that's been done on this. Setting it up was super easy.
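For illustration, per-object-type confidence limits might look something like this. This is a hedged Python sketch (the actual tool is C#), and every name here (`camera_config`, `is_relevant`, the threshold values) is hypothetical, not the tool's real API:

```python
# Sketch: per-label confidence thresholds (all names hypothetical).
# Each camera could carry a map of label -> minimum confidence,
# falling back to a camera-wide default for unlisted labels.

DEFAULT_MIN_CONFIDENCE = 0.5

camera_config = {
    "relevant_labels": ["person", "car"],
    "min_confidence": {"person": 0.5, "car": 0.9},  # per-label overrides
}

def is_relevant(detection, config):
    """Keep a DeepStack detection only if its label is relevant and
    its confidence meets that label's threshold."""
    label = detection["label"]
    if label not in config["relevant_labels"]:
        return False
    threshold = config["min_confidence"].get(label, DEFAULT_MIN_CONFIDENCE)
    return detection["confidence"] >= threshold

detections = [
    {"label": "person", "confidence": 0.55},
    {"label": "car", "confidence": 0.80},   # below the 0.9 car threshold
    {"label": "dog", "confidence": 0.99},   # not a relevant label
]
kept = [d for d in detections if is_relevant(d, camera_config)]
```

The nice property of a per-label map with a default fallback is that existing configs keep working unchanged: any label without an override just uses the camera-wide confidence.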
@VorlonCD @gentlepumpkin
I have a solution to your todo about creating temporary masks. https://github.com/classObject/bi-aidetection/tree/dynamic_masking
My neighbors park in random places along the street, so creating fixed mask files was all but impossible without masking out most of the image. I created a process to dynamically mask objects that are seen in the same location after a configurable number of detections. It's still a work in progress, but it has been up and running for about a week, and I'm no longer getting endless alerts every time my neighbors have friends over, park like they've been out binge drinking, or decide hey, wouldn't it be cool to park an RV here? I think it answers most of your questions.
How do we determine “roughly”?
After each detection, create a history object that stores the detected object's coordinates and a count of how many times it has been detected. A coordinate variance number accounts for slight changes in the position returned by DeepStack: if the current position falls within (xmin, xmax, ymin, ymax ± variance) of an entry in history, it counts as a match.
How many repeats before we create?
The counter in the history object determines when to create a mask. Each time an object is found in the history list, the counter increases. When the counter exceeds a user-defined max history value, the object is removed from history and moved to a masked list. The history is cleared after a user-defined number of minutes to prevent false positives. The max history value and history save minutes are set up on the Camera UI tab in the AITool.
How long do we hold the temp mask in place?
After each DeepStack execution, search the masked list for matches. If the object is not found, decrease its counter. When the counter hits zero, remove the mask. The masked object counter is also user-defined on the Camera UI tab in the AITool.
How to create temp mask image?
All that is required is a list of the coordinates returned from DeepStack, and those are already stored in the masked object list.
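The whole lifecycle described above (match roughly, count repeats, promote to a temporary mask, age the mask out) can be sketched like this. This is an illustrative Python toy, not the actual C# branch; the constants (`VARIANCE`, `MAX_HISTORY`, `MASK_COUNTER`) stand in for the user-configurable values on the Camera UI tab:

```python
# Sketch of the dynamic-mask lifecycle (illustrative, not the real code).

VARIANCE = 35        # allowed per-coordinate jitter from DeepStack
MAX_HISTORY = 3      # repeats before an object becomes a mask
MASK_COUNTER = 5     # consecutive "misses" before a mask is removed

def matches(a, b, variance=VARIANCE):
    """True if two (xmin, ymin, xmax, ymax) boxes are roughly the same."""
    return all(abs(ca - cb) <= variance for ca, cb in zip(a, b))

history = {}   # box -> times seen in roughly the same spot
masks = {}     # box -> remaining misses budget

def on_detection(box):
    """Count repeats of a box; promote it to a temporary mask when it
    exceeds the max history value."""
    for seen in list(history):
        if matches(seen, box):
            history[seen] += 1
            if history[seen] > MAX_HISTORY:
                del history[seen]
                masks[seen] = MASK_COUNTER   # promote to temporary mask
            return
    history[box] = 1

def is_masked(box):
    """True if a new detection falls inside an active temporary mask."""
    return any(matches(m, box) for m in masks)

def after_run(boxes):
    """After each DeepStack run, age out masks that were not seen;
    remove a mask once its counter hits zero."""
    for m in list(masks):
        if not any(matches(m, b) for b in boxes):
            masks[m] -= 1
            if masks[m] <= 0:
                del masks[m]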
f**king spiderwebs trigger motion all night
Maybe try some spider bait from Lowes :)
@classObject - Oh wow, VERY cool! Unless you started from my fork in the first place, I will integrate it with mine and see how it works!! I was assuming we would have to modify each image with a mask before giving it to DeepStack, but this is much easier.
@classObject - A quick look so far: I wonder if MaskManager.last_positions_history and masked_positions would be better off as Dictionary<int, ObjectPosition>, using the Objectposition.key as the key in the dictionary. The lookups/ContainsKey calls should be really fast that way. Of course, it doesn't really matter unless the list count grows pretty high.
@VorlonCD Agreed. I started off using a dictionary but ran into issues because there wasn't a unique key for lookups, due to variations in the returned object positions. What do we search for if there's a range of matching values for the dictionary key? Instead, it uses ranges in the Equals method to determine whether the objects are in similar positions. I'm sure there's another solution, but I didn't have a chance to research alternatives... then again, like you said, if the list isn't that large it doesn't matter much. 20-30 items will probably be the max; mine never exceeds 10.
@VorlonCD Thank you for reviewing the code, by the way! I forgot to mention above that the Objectposition.key is created using the exact coordinates returned from DeepStack, so (xmin ymin xmax * ymax) = key. The problem is this doesn't take into account even slight coordinate variations: if there is any variation in the returned values, there is no match in the dictionary. For my setup I've found DeepStack varies by a max of ±35 on each coordinate for each detection. Currently the key is just used as a quick reference point for debugging.
@classObject - Ahh, I see, that would be tricky. I'm sure a list is fine, since it seems unlikely it will have many items. Since cameras may have different resolutions, maybe we need to expose thresholdMaxRange = 35 in the camera UI? Or I wonder if we could base that number on a percentage of the X or Y pixels in the image?
@VorlonCD Good suggestion. I like the idea of making the variance percentage-based. Will make the update soon.
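For the record, the percentage-based variance idea might look something like the following. Again a hedged Python sketch, not the actual commit; `VARIANCE_PCT = 2.0` is a made-up number, the point is only that the pixel tolerance scales with image resolution so a 1080p and a 4K camera behave alike:

```python
# Sketch: resolution-independent variance (hypothetical numbers).
# Instead of a fixed +-35 px jitter allowance, express it as a
# percentage of the image dimensions.

VARIANCE_PCT = 2.0  # percent of the relevant image dimension

def pixel_variance(image_width, image_height, pct=VARIANCE_PCT):
    """Convert a percentage threshold into per-axis pixel thresholds."""
    return image_width * pct / 100.0, image_height * pct / 100.0

def roughly_equal(a, b, image_width, image_height):
    """Compare two (xmin, ymin, xmax, ymax) boxes with jitter scaled
    to the image resolution."""
    vx, vy = pixel_variance(image_width, image_height)
    ax_min, ay_min, ax_max, ay_max = a
    bx_min, by_min, bx_max, by_max = b
    return (abs(ax_min - bx_min) <= vx and abs(ax_max - bx_max) <= vx
            and abs(ay_min - by_min) <= vy and abs(ay_max - by_max) <= vy)
```

At 2%, a 1920x1080 camera tolerates about ±38 px horizontally and ±22 px vertically, while a 4K camera tolerates roughly double that, which matches the "±35 at a distance on a 4K camera" problem discussed above.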
Great work by everybody! I'd like to point out that I can greatly reduce my server load by offloading all motion detection to the cameras and having them upload snapshots to the BI server, which then runs AI detection on the images. This gets rid of the need for the workaround of using secondary streams in BI. The largest issue I have (and it would be a hurdle for most users) is that Dahua cameras insist on putting their snapshots in the folder CameraName/Year/Month/Day/Hour/Minute/randomNumber.jpg when they upload, so I have to use a secondary script from bp2008 (bp2008/ImageOrganizer) to sort out that mess before processing.
@classObject FYI, I got your code integrated into my fork. Very cool stuff! It seems to work; I think I just need to tweak some of the settings. I noticed the history list contained a few detections that were really close in size and width, so could it be the Equals function isn't correctly merging a similar item into the history? That camera is 4K and the detection rectangle was at a distance, so maybe it was still outside the configured percentage?

FYI, I built a nice little viewer of the history and active masks for troubleshooting - see Frm_DynamicMaskDetails.cs. Also, I moved the settings from the settings tab to its own dialog - the settings tab was getting too congested. I will eventually move more (or everything) to the separate settings dialog. You will need ObjectListView and a few of my own routines, unless you want to start fresh from my fork; it's got 99% of your stuff integrated, I think. I do use my own custom logging routine so I can output to the RTF log window in the app rather than NLog.
@VorlonCD make sure to pull the latest version. The calculation for the change in the object's location has been updated.
@VorlonCD The viewer looks awesome! Great idea. I will pull your code later and check it out. Your issue with detections should be solved by my latest update to the calculation; I had accidentally checked in an incomplete version of the code. You might also want to check out my changes to the camera config. It reads and writes the camera settings to JSON now, and auto-migrates older .txt configs to JSON as well.
@classObject - Ahh, I might have missed the check-in from yesterday, thanks. I saw the JSON, but I had already converted ALL settings to a single settings class that writes everything, including cameras, to one JSON file. It has a very safe method of reading and writing the file - I had so many cases of corruption with BSODs on one machine that I now check for nulls, etc. before trusting the file. If it looks bad, I read a backup copy instead. I'm currently working on a better way of queuing files to be processed in another thread, so that an overload of newly generated files will not be missed as easily.
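The "safe read with backup fallback" idea can be sketched like this. Illustrative Python only (the real code is C#), and the file names, the `"cameras"` sanity check, and the helper names are all assumptions for the example:

```python
# Sketch of "validate before trusting, fall back to backup" settings I/O
# (hypothetical file names and sanity check; not the actual C# code).

import json
import os
import shutil

SETTINGS = "aitool.settings.json"
BACKUP = SETTINGS + ".bak"

def load_settings():
    """Try the main file first; if it is missing, unparsable, or fails
    a basic sanity check, fall back to the backup copy."""
    for path in (SETTINGS, BACKUP):
        try:
            with open(path) as f:
                data = json.load(f)
            if isinstance(data, dict) and "cameras" in data:  # sanity check
                return data
        except (OSError, ValueError):  # missing file or corrupt JSON
            continue
    return {"cameras": []}  # last resort: defaults

def save_settings(data):
    """Keep a backup of the previous good file, then write via a temp
    file and atomic rename so a crash mid-write cannot corrupt it."""
    tmp = SETTINGS + ".tmp"
    with open(tmp, "w") as f:
        json.dump(data, f, indent=2)
    if os.path.exists(SETTINGS):
        shutil.copyfile(SETTINGS, BACKUP)
    os.replace(tmp, SETTINGS)  # atomic on the same filesystem
```

The atomic `os.replace` is what protects against the BSOD-mid-write case: the old file stays intact until the new one is fully on disk.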
@VorlonCD ok, cool. Haven't looked into all your changes yet. You've done a lot! Guess we both went for JSON :) Just pulled the code and setting it up now.
@VorlonCD I love the details view for masks! I've been tailing the log to get the data for debugging, but this is so much better. Being able to visually see the locations of the masks on the image is awesome. I would recommend hiding the isVisible flag: it's used to keep track of objects that are visible until DeepStack has finished processing the image. After that, the counters are updated and the flag is reset to false for the next run. That's why it always shows false.
@classObject - The columns are in the order of the class, so if you move the isVisible property near the bottom, most people won't see it. Do you think it would be helpful to show the max-variance rectangle in a different color? I'm not sure what calculation from Equals to use without digging in a bit.
@VorlonCD It might be helpful to see the max rectangle size on the screen. It depends on how busy the image becomes and the colors or shading. Maybe an option to toggle it on/off?
Calculations for the rectangle's max variance size. EDIT: this requires some more thought.
@classObject - have at it. Go ahead. You can DOOO iiit.
@doudar - If we changed the prefix so it works with a wildcard, would that help for images uploaded by your camera? This fork already lets you monitor subfolders so I think that would be all that is needed.
@VorlonCD All these changes are awesome! Can't wait to try them out. Would it be possible to configure different DeepStack IPs for different cameras?
I'm having an issue when multiple cameras send photos to the same DeepStack server: it causes a pileup, and by the time the trigger is sent to BI, the object could be out of frame. Having the ability to override the default DeepStack IP per camera would be much better than trying to run multiple copies of the AI Tool program.
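A per-camera override like this is usually just a fallback lookup. A minimal sketch, assuming a hypothetical `deepstack_url` field on each camera config (Python for illustration; the tool itself is C#):

```python
# Sketch: per-camera DeepStack endpoint override (names hypothetical).
# An empty/missing per-camera URL means "use the global default", so
# existing configs keep working unchanged.

DEFAULT_DEEPSTACK_URL = "http://127.0.0.1:5000"

cameras = [
    {"name": "driveway", "deepstack_url": ""},                     # default
    {"name": "backyard", "deepstack_url": "http://10.0.0.42:5000"},  # override
]

def deepstack_url_for(camera):
    """Resolve which DeepStack server a camera's images should go to."""
    return camera.get("deepstack_url") or DEFAULT_DEEPSTACK_URL
```

Spreading busy cameras across two servers this way avoids the pileup without running multiple copies of the whole tool.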
@classObject - Thank you for implementing this change. This was the main thing I was waiting for :) :+1:
@doudar - This may work without your extra script in the latest version of my fork. (/CameraName/Year/Month/Day/Hour/Minute/randomNumber.jpg) If I can't get CAMNAME. from the name of the jpg, I fall back on looking for the INPUT FOLDER assigned to each camera. I think you would just have to give it a root path unique for each camera and enable the scan subfolders checkbox.
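The two-step lookup described there (file-name prefix first, then the camera's input folder including subfolders) might look roughly like this. A Python sketch with made-up camera names and forward-slash paths for brevity; the real implementation is C# and the exact matching rules may differ:

```python
# Sketch of the two-step camera lookup (illustrative names and paths):
# step 1 tries the BI-style "CAMNAME.timestamp.jpg" prefix, step 2
# falls back to whichever camera's input folder contains the file.

import os

cameras = [  # hypothetical config
    {"name": "FrontDoor", "input_folder": "/aiinput/FrontDoor"},
    {"name": "Driveway", "input_folder": "/aiinput/Driveway"},
]

def camera_for_image(image_path):
    """Return the camera a new .jpg belongs to, or None."""
    filename = os.path.basename(image_path)
    prefix = filename.split(".", 1)[0].lower()
    # Step 1: match the CAMNAME. prefix in the file name.
    for cam in cameras:
        if cam["name"].lower() == prefix:
            return cam
    # Step 2: match against each camera's input folder, so files in
    # subfolders (e.g. Dahua's Year/Month/Day tree) still resolve.
    norm = os.path.normcase(os.path.normpath(image_path))
    for cam in cameras:
        folder = os.path.normcase(os.path.normpath(cam["input_folder"]))
        if norm.startswith(folder + os.sep):
            return cam
    return None
```

With the "scan subfolders" checkbox enabled, a Dahua snapshot with a random numeric name would fail step 1 but still land on the right camera via step 2.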
Wow! Looks good - I'll check it out and let you know!
@VorlonCD Thanks, you rock! I'll compile and test it out later tonight. :+1:
Wow, very impressive list of changes. I was about to start developing a few features, but I'll check out your fork first. BTW, @gentlepumpkin, do you need help with the review? Or were you not planning on maintaining this any longer? Looks like @VorlonCD went on an update spree with it; I'm so happy this is progressing.
WEEEEeeeee? :)
Yay, the feature I was planning to add was already implemented by @VorlonCD: using several DeepStack servers with some kind of circuit breaker (since I have a much more powerful desktop, but it's not always turned on like my low-powered Blue Iris server). I really like the improvements you made!
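The "circuit breaker" part can be as simple as a per-server cooldown. A minimal sketch under stated assumptions (hypothetical server list and a made-up 60-second cooldown; not the fork's actual implementation):

```python
# Sketch: failover across several DeepStack servers with a cooldown
# acting as a crude circuit breaker (all details hypothetical).

import time

COOLDOWN_SECONDS = 60  # how long a failed server sits out

servers = [
    {"url": "http://desktop:5000", "down_until": 0.0},    # fast, not always on
    {"url": "http://bi-server:5000", "down_until": 0.0},  # slow but always up
]

def pick_server(now=None):
    """Return the first server not currently in its cooldown window."""
    now = time.time() if now is None else now
    for s in servers:
        if s["down_until"] <= now:
            return s
    return None  # every server is cooling down

def mark_failed(server, now=None):
    """Open the breaker: skip this server for COOLDOWN_SECONDS."""
    now = time.time() if now is None else now
    server["down_until"] = now + COOLDOWN_SECONDS
```

Ordering the list fastest-first means the powered-off desktop is tried, fails once, and then quietly sits out while the always-on BI server handles requests until the cooldown expires.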
Any chance you can build a new release? My Visual Studio trial expired, so I can no longer compile from src. Thanks.
@SHerms - I believe the Visual Studio Community edition is free and lets you compile: https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community&rel=16
I'll probably be doing an official release in a few hours anyway.
GentlePumpkin, I LOVE that with this app BlueIris is better than Sentry in many ways and free! Thank you for all the work on this.
Got a bunch of stuff for ye... Hope you can integrate it without too much trouble. I believe you have not updated anything since I created my fork a few weeks ago.
Blueiris info class - Created the dropdowns for the "new camera" name and "input path" that contain actual Blue Iris settings from the registry (assuming Blue Iris is installed on the same machine).
DeepStack class and tab - If you run the WINDOWS version of DeepStack on the same machine, it will detect it, stop/start it, and let you totally replace the stupid deepstack.exe control that comes with it. The AITOOL log window will output DeepStack messages if AITOOL actually starts DeepStack. Auto-starts DeepStack when AITOOL starts. Will auto-change the port if it's different than configured in AITOOL. Perhaps Docker integration in the future; I see there is a NuGet package for it. I might have accidentally decompiled their exe to see exactly what it did :)
Updated Inputbox to support dropdowns and perrrty-er-er
New logging class for writing the log and history CSV file. Queues up writes and writes in another thread so the UI doesn't have to wait, and also waits for the log file to become available to avoid the file-sharing exceptions that I still see. (Multiple instances or threads cause much trouble.)
Better FileSystemWatcher events - Wait ONLY as long as needed for the file to become available, and wait in the thread before continuing, since DeepStack can only process one file at a time. (Unless I'm wrong? It's been known to happen. Don't tell my wife.)
New LOG tab with color to highlight errors, shows the calling function, etc. Must enable "Log everything" to see everything.
Better error checking, timing and logging in DetectObjects and other various functions. Trying very hard to avoid unexpected NULL object related errors.
Start with Windows option (non-service - that would take a lot more work)
SETTINGS class, save all settings to aitool.settings.json file. (But STILL need to integrate CAMERAS to this json file)
Detected history list items are green now
Object label color set to transparent rather than solid so you can see what is under it
Fixed tab order on cameras tab
Changed to a 64-bit process by default. This allows the DeepStack class to correctly get the command-line parameters of its running processes, like port, mode, etc. And why not? Back in my day we had a 3GB memory limit and we LIKED IT :)
Obligatory "Misc fixes and enhancements"
TODO: Create a true service that avoids the use of a 3rd-party tool and lets the tray icon/UI communicate with the service running in the background. The service will do all the actual work, so I need to separate all UI-related stuff into distinct classes. Still need to research.
TODO: I would really like to figure out a way to avoid triggers in cases like this: car parked in the driveway all the time, fucking spiderwebs trigger motion all night, and we end up with just as many alerts as normal. Thinking something along the lines of temporary masking based on repeated detection of the same object in roughly the same area. How to determine "roughly" may be tricky, and how long do we hold the temp mask in place? How many repeats before we create one? How to create the temp mask image? We can doo0 it. Rick and Morty forever. Need input!
TODO: Use "FastObjectListView" for the history. Should be MUCH faster, since it can hold 1000s of items without slowdown, but it may take a bit of work converting everything to classes - maybe we can directly use the camera class?
TODO: Store Cameras in json settings file.
Wish list? Suggestions?
Vorlon