cczarnuch opened this issue 4 years ago
The accuracy to which we'll need to report the time frame will determine how frequently we need to capture images, or whether we need video at all: capturing a photo every n seconds or minutes and analyzing it is a very different workload from running a video source at 24 FPS or higher. If we need dwell time accurate to the second, we'll probably have to go with video.
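Rough back-of-the-envelope sketch of the trade-off (the function name and the sample intervals here are just my illustration, nothing in the project yet): if we sample a frame every `interval_s` seconds, the dwell time of an object is uncertain by up to one interval at entry plus one at exit.

```python
def dwell_time_error_s(interval_s: float) -> float:
    """Worst-case combined entry+exit dwell-time error for a given capture interval."""
    return 2 * interval_s

# Compare occasional stills against 24 FPS video.
for interval_s in (60.0, 10.0, 1.0, 1 / 24):
    print(f"capture every {interval_s:g}s -> dwell error up to {dwell_time_error_s(interval_s):g}s")
```

By this logic, second-accurate dwell time needs roughly sub-second sampling, which points at video.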
I was thinking that we should use video, as this would give the most accurate results when tracking objects across the screen; however, we may run into problems with processing power. I am going to look into ways we can have this running 24/7. I wonder if a Jetson Nano would be powerful enough to do these computations on-device instead of having to stream the video to a server.
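One way to keep on-device compute bounded would be to read the full frame rate but only run inference on every Nth frame. A minimal capture-loop sketch, assuming OpenCV is available on the Nano (`run_detector` is a hypothetical stand-in for whatever model we pick):

```python
import cv2

def run_detector(frame):
    """Placeholder: real inference (e.g. a TensorRT-optimized model) would go here."""
    return []

cap = cv2.VideoCapture(0)   # default CSI/USB camera
process_every_n = 4         # process ~6 FPS out of a 24 FPS source
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % process_every_n == 0:
        detections = run_detector(frame)
    frame_idx += 1
cap.release()
```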
Yeah, I agree; processing power or data streaming would be the bottleneck. We could try compressing the video data, since our model is likely going to downsample frames to a smaller resolution during training anyway.
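If we do end up streaming, something like this could shrink frames before they leave the device. Purely a sketch: the 640 px width and JPEG quality of 70 are my guesses at reasonable values, not settled numbers.

```python
import cv2

def compress_frame(frame, width=640, jpeg_quality=70):
    """Downscale a BGR frame and JPEG-encode it for streaming."""
    h, w = frame.shape[:2]
    scale = width / w
    small = cv2.resize(frame, (width, int(h * scale)))
    ok, buf = cv2.imencode(".jpg", small, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    return buf.tobytes() if ok else None
```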
To better understand engagement with a space, the project should also record the dwell time of each tracked object (trucks, cars, buses, pedestrians, cyclists, etc.), showing how long it engages with the space.
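The bookkeeping for this could be fairly light once a tracker hands us stable IDs per frame (SORT or similar; the `update` / `dwell_seconds` names below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Dwell:
    cls: str          # "car", "pedestrian", ...
    first_seen: float  # timestamp of first detection
    last_seen: float   # timestamp of most recent detection

tracks: dict[int, Dwell] = {}

def update(track_id: int, cls: str, t: float) -> None:
    """Record a detection of `track_id` at time `t`."""
    if track_id not in tracks:
        tracks[track_id] = Dwell(cls, t, t)
    else:
        tracks[track_id].last_seen = t

def dwell_seconds(track_id: int) -> float:
    d = tracks[track_id]
    return d.last_seen - d.first_seen
```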
Add anomaly detection for objects that spend an unusually long time on the street.
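One possible rule, again just a sketch on top of the dwell-time bookkeeping above: flag any track whose dwell time exceeds a per-class threshold (the thresholds here are made up and would need tuning).

```python
LIMITS_S = {"car": 2 * 3600, "truck": 3600, "pedestrian": 1800}

def is_anomalous(cls: str, dwell_s: float, default_s: float = 3600) -> bool:
    """Flag dwell times above a per-class limit; unknown classes fall back to a default."""
    return dwell_s > LIMITS_S.get(cls, default_s)
```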