gsvidaurre opened 2 years ago
I think there are two ways of doing this:

1. Specify a square region of interest (ROI) that encompasses the circular entrance hole, but don't limit motion detection to a particular part of the frame. Instead, compare the spatial bounds of each motion contour against the ROI in a conditional statement, and save a field to the .csv indicating whether motion occurred in that area.
2. Specify a region of interest that doesn't include the circular entrance hole, and run motion detection on that area only.
I like the first option better because it means videos will be recorded for all detected motion (important for validation and for capturing relevant behaviors), and we can filter out recording events after data collection using metadata.
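A minimal sketch of the conditional check in option 1. It assumes the bounding rectangles have already been extracted with `cv2.boundingRect()`, so only the ROI comparison is shown; the ROI coordinates here are hypothetical placeholders.

```python
def rect_in_roi(rect, roi):
    """Return True if a contour's bounding rect overlaps the ROI.

    Both rect and roi are (x, y, w, h) tuples, the format returned
    by cv2.boundingRect() for a contour.
    """
    x, y, w, h = rect
    rx, ry, rw, rh = roi
    # Rectangles overlap unless one lies entirely to one side of the other
    return not (x + w < rx or rx + rw < x or y + h < ry or ry + rh < y)

# Hypothetical ROI around the entrance hole (pixel coordinates)
ENTRANCE_ROI = (200, 150, 100, 100)

print(rect_in_roi((220, 160, 30, 30), ENTRANCE_ROI))  # True (inside ROI)
print(rect_in_roi((10, 10, 20, 20), ENTRANCE_ROI))    # False (outside ROI)
```

The boolean could then be written as an extra column in the per-event .csv, so recording events outside the ROI can be filtered out later.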
Install OpenCV on an RPi with the tracking system set up, then start testing option 1
If it isn't too computationally intensive, I'd like to perform this contour identification over the full duration of each video and return the number of frames in which motion occurred inside vs. outside the region of interest.
This should probably be a separate video-processing function or module that runs about once a day and processes videos that have already been recorded.
But contour detection doesn't need to run on full videos for my purposes. I just want to know whether the bird was inside the nest or at the entrance when motion was detected. So maybe it's possible to run contour detection on the pre- and post-motion images within a region of interest on the fly (asking whether the bounds of each contour fall inside or outside the ROI, or both, and recording the size of each contour), or to save those images as files and process them overnight for each video (when fewer videos should be recorded overall).
I want to return to a .csv the coordinates of the 5 or 10 contours with the largest area. Then, in R, figure out whether these contours cross the pixels marking the midpoint of the frame, or maybe the thirds.
Maybe we should consider this enhancement for after publication?
Videos are recorded through motion detection across all 3 color channels. I'd really like to be able to detect the contours of groups of pixels that changed within the video frame. If we can detect contours of movement inside the frame, then we can use general spatial locations to determine whether the movement picked up by the camera captured activities related to parental care (e.g. movement at the nest container entrance or inside the container) versus other activities (e.g. the parents feeding on the floor of the cage, which the wide-angle camera can pick up through the entrance hole).
Contour detection should be possible using OpenCV tools that are already available in Python. I found some tutorials on how to do this:
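The usual OpenCV recipe in those tutorials is `cv2.absdiff` between consecutive frames, `cv2.threshold` to get a binary mask, then `cv2.findContours` on the mask. As a dependency-free sketch of the same idea, here is the differencing step in NumPy on toy frames (the threshold value and frame size are arbitrary assumptions):

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, thresh=25):
    """Binary mask of pixels that changed between two grayscale frames.
    In the OpenCV pipeline this would be cv2.absdiff followed by
    cv2.threshold; cv2.findContours would then trace the blobs in
    this mask."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# Toy 8x8 frames: a 3x3 bright blob (a "bird") appears between frames
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:5, 2:5] = 255

mask = motion_mask(prev, curr)
ys, xs = np.nonzero(mask)
# Bounding box of the changed region, analogous to cv2.boundingRect
print(xs.min(), ys.min(), xs.max(), ys.max())  # 2 2 4 4
```

The bounding box printed at the end is exactly the quantity compared against the ROI in option 1 above.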