Closed: jiiins closed this issue 3 years ago
Seems like frigate should just do that post processing itself since it already has the decoded image in memory. Are you still using the Coral with YOLO downstream?
Reducing false positives is my personal highest priority. For my own house, I can't remember the last time I got a false positive. If you can provide me with some examples, I can look at a way to avoid them. A big benefit of object tracking is that I now have a bunch of heuristics I can use to filter out false positives.
Are you still using the Coral with YOLO downstream?
No, I'm using this Deepstack integration.
If you can provide me with some examples
Here are a few. I can't increase the threshold too much as it starts missing true positives. [deleted for privacy reasons]
Would it be possible to send some video clips with 30 seconds of footage before? I can run them through locally and see what adjustments I can make.
That is strange. I have my threshold at 75% and have never had a false negative. Could the algorithm be thrown off by the ‘top down’ view of your camera? I wonder if the human shape looks different from above. Most of my cameras are lower down. Still above 7 feet but they tend to view objects relatively straight on.
Would it be possible to send some video clips with 30 seconds of footage before?
Unfortunately I'm not sure how at the moment, as I'd need to sift through the recordings and I don't have the time!
I have my threshold at 75% and have never had a false negative
Out of curiosity... how do you know you have zero false negatives?
I'm checking against the cameras' embedded motion detection results.
It would be simple enough to make the bounding box optional in the best.jpg image. I do expect to solve this issue directly in frigate in the future by using some heuristics to filter out false positives or by improving the model with custom training.
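A minimal sketch of what an optional bounding box could look like: keep a clean copy of the frame and draw the box only on a separate annotated copy. The `annotate` helper and its parameters are hypothetical, not Frigate's actual code:

```python
import numpy as np

def annotate(frame, box, draw_box=True):
    """Return (clean, annotated) copies of a frame.

    frame: HxWx3 uint8 array; box: (x1, y1, x2, y2).
    Hypothetical helper -- not Frigate's actual implementation.
    """
    clean = frame.copy()
    annotated = frame.copy()
    if draw_box:
        x1, y1, x2, y2 = box
        # Draw a 2 px green rectangle outline on the annotated copy only.
        annotated[y1:y1 + 2, x1:x2] = (0, 255, 0)
        annotated[y2 - 2:y2, x1:x2] = (0, 255, 0)
        annotated[y1:y2, x1:x1 + 2] = (0, 255, 0)
        annotated[y1:y2, x2 - 2:x2] = (0, 255, 0)
    return clean, annotated

frame = np.zeros((100, 100, 3), dtype=np.uint8)
clean, annotated = annotate(frame, (10, 10, 50, 50))
assert (clean == 0).all()       # clean copy untouched
assert (annotated != 0).any()   # box drawn on the other copy
```

With something like this, best.jpg could serve either copy depending on a config flag.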
Can I remove the pics above?
Yes
Reducing false positives is my personal highest priority. For my own house, I can't remember the last time I got a false positive. If you can provide me with some examples, I can look at a way to avoid them. A big benefit of object tracking is that I now have a bunch of heuristics I can use to filter out false positives.
As someone who gets quite a few false positives (probably because my cameras are mounted high and looking down, and because I've been lazy and only set up 2 big regions per camera), I can fairly comfortably say that nearly all of them would be eliminated by spawning dynamic regions from motion detection, similar to how analysis was started in the CPU version.
In every case where I had a false positive, it was something just sitting there doing nothing, but under certain lighting and conditions the image recognition suddenly decided there was a person there.
If analysis hadn't started because there was no motion, it never would have found anything. The dynamic regions would also be better sized for the objects (I just use 2 big ones now), which would further improve recognition.
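The motion-gating idea above can be sketched with simple frame differencing; the function name and thresholds here are illustrative, not what Frigate actually does internally:

```python
import numpy as np

def motion_region(prev, curr, diff_thresh=25, min_pixels=50):
    """Return a bounding box around changed pixels, or None if no motion.

    prev/curr: grayscale uint8 frames. Thresholds are illustrative;
    a real pipeline would also blur, dilate, and debounce over frames.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > diff_thresh
    if mask.sum() < min_pixels:
        return None  # no motion: skip object detection entirely
    ys, xs = np.nonzero(mask)
    # Region sized to the moving object, ready to hand to the detector.
    return (int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1)

prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 50:80] = 200            # a "moving object"
print(motion_region(prev, curr))    # -> (50, 40, 80, 60)
print(motion_region(prev, prev))    # -> None
```

A stationary scene returns None, so a misbehaving model never even gets asked about the parked car or the shadow.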
I second this request. I would like to train a custom model, and saving the images without bounding boxes would save me from going through all the raw footage and extracting the frames myself. My main camera over-sharpens the image, so I think a custom model will help increase accuracy and reduce false positives. Either option would be great: an additional image or a config flag to turn off bounding boxes.
I'm also interested in the option of having clean snapshots (as a new endpoint, maybe something like best_clean.jpg), and combined with the bounding box info added by #161, it's all that's missing for me right now in frigate. It would be really useful for extracting the object instead of having to work with the whole frame.
I'm also getting a lot of false positives, mostly because of cats and their shadows. Having access to the 300x300 px images without bounding boxes would definitely be very useful.
You should have everything you need to do this in 0.8.0
Would love to hear from anyone doing this with YOLO or another heavier, more accurate engine.
It would be great to have a clean snapshot (without boxes) and one for each detected object for post-processing.
For example, now I send the detected object to another model (YOLO) to filter out false positives.
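That two-stage filtering can be sketched like this. The second model is just a stand-in callable here; in the real setup it would be an HTTP call sending the crop to YOLO/Deepstack, and `min_conf` is an illustrative value:

```python
import numpy as np

def verify_detection(frame, box, second_model, min_conf=0.6):
    """Crop the first detector's box and re-check it with a second model.

    second_model: callable taking a crop and returning a confidence in 0..1.
    Stand-in for posting the crop to YOLO/Deepstack; min_conf is illustrative.
    """
    x1, y1, x2, y2 = box
    crop = frame[y1:y2, x1:x2]  # only the detected object, not the whole frame
    return second_model(crop) >= min_conf

frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Stub models: the real ones would run inference on the crop.
assert verify_detection(frame, (10, 10, 110, 210), lambda c: 0.9) is True
assert verify_detection(frame, (10, 10, 110, 210), lambda c: 0.2) is False
```

This is exactly why a clean per-object crop is useful: the second model shouldn't have to see the drawn bounding box or the rest of the frame.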