Closed: pliablepixels closed this issue 4 years ago
A few humble observations:
Maybe comment out all of the monitor-specific settings in the example objectconfig.ini
file that you provide in the repo? I have run into issues before where, after deleting and recreating a couple of monitors, I end up with a monitor whose ID matches one of the examples in that file. That can be tricky to debug.
So, do I understand correctly that switching between tiny YOLO and full YOLO is just a matter of swapping the weights, so the example file might look something like this?
# use these weights for full yolo v3
object_config={{base_data_path}}/models/yolov3/yolov3.cfg
object_weights={{base_data_path}}/models/yolov3/yolov3.weights
object_labels={{base_data_path}}/models/yolov3/coco.names
# use these weights (and comment out the above) for tiny yolo v3
#object_config={{base_data_path}}/models/tinyyolov3/yolov3-tiny.cfg
#object_weights={{base_data_path}}/models/tinyyolov3/yolov3-tiny.weights
#object_labels={{base_data_path}}/models/tinyyolov3/yolov3-tiny.txt
As long as the sample is clear, I think that works.
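As an aside on the sample above: the `{{base_data_path}}` tokens are template placeholders that get expanded to a real filesystem path before the config values are used. A minimal sketch of how such substitution could work (the actual ES expansion logic may differ; `resolve` and the example path are purely illustrative):

```python
import re

def resolve(value, variables):
    # Replace each {{name}} token with the matching entry from
    # `variables`; unknown tokens are left untouched.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        value,
    )

paths = {"base_data_path": "/var/lib/zmeventnotification"}
print(resolve("{{base_data_path}}/models/yolov3/yolov3.weights", paths))
# -> /var/lib/zmeventnotification/models/yolov3/yolov3.weights
```

With this kind of expansion, switching between full and tiny YOLO really is just a matter of which `object_config`/`object_weights`/`object_labels` lines are uncommented.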
I'll be adding TPU (#283) support on a Google Coral USB stick, but before I do, I want to make some fundamental changes to how the config is set up (objectconfig.ini) to make it more intuitive going forward. Obviously these are breaking changes, so I wanted to share them in case anyone has an opinion. I've been making too many breaking changes in every release and want to reduce that to "one big breaking change":
- Remove hog as a model for person detection - does anyone use it? Note that hog will still be available as a face training and detection algorithm.
- Rename the zmes helpers to a proper pip package, upload it to PyPI, and have mlapi use it. That way I don't need to keep updating mlapi, as it always falls behind.
I also want to put in a construct to limit the number of parallel detections launched by the ES, since they take up memory and, on a GPU, can segfault. As of today, the ES launches a "fork" for each event, so I can obviously cap the number of forks and add a queue to manage pending detections. The problem is that each "fork" lasts for the full duration of the event (until the alarm closes), so limiting forks may not help. The other option is to keep a fork "waiting" until the number of running detections drops below the maximum. Any other ideas?
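One way to implement the "keep a fork waiting" option is a counting semaphore: each event handler blocks on acquire until a detection slot frees up, rather than being killed or rejected. A minimal sketch, using threads to simulate event handlers and a hypothetical `MAX_CONCURRENT` cap (the names are illustrative, not the ES's actual API; with real forked processes, a `multiprocessing.Semaphore` inherited across the fork would behave the same way):

```python
import threading
import time

MAX_CONCURRENT = 2  # hypothetical cap on parallel detections
gate = threading.Semaphore(MAX_CONCURRENT)

active = 0  # detections currently running
peak = 0    # highest concurrency observed
lock = threading.Lock()

def handle_event(event_id):
    # Block here until a detection slot is free; the handler
    # simply waits out the backlog instead of failing.
    with gate:
        global active, peak
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)  # stand-in for the actual detection work
        with lock:
            active -= 1

threads = [threading.Thread(target=handle_event, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds MAX_CONCURRENT
```

Since at most `MAX_CONCURRENT` detections run inference at once, this also bounds peak GPU memory use, which should help with the segfaults.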
Here is a sample config I have in mind: