Closed aryarm closed 5 years ago
While reading back over some old issues, I realized that this one in particular is completely unreadable.
Basically, I meant that we could try to identify the bridge of an ant using its outline rather than manually selecting it using MATLAB. We briefly discussed coloring the edges of each bridge so that the pipeline could recognize it by color.
I just committed code to detect the red lines used to mark bridges in d32b1670bd412dfab26315108ea893bcf33f90ad. However, it needs to be tested on more images than just the one (for one thing, the video I pulled it from has lighting changes over its length, and other videos would also be good sources), and the detected red rectangles still need to be turned into ROIs.
Change in goals for this: the new objective is to take a picture of some region and find red polygons in the region to mark out the ROIs.
Beyond just updating the ROI detection method, this also requires supporting rotated ROIs and complex ROI boundaries, so that the pipeline can detect when ants cross them.
As of e7b57d7, I have code for handling the red polygon ROIs. I have not yet gotten a chance to test it, but it handles most of the functionality that I want.
It automatically detects a red polygon placed inside an image, finds the smallest bounding rectangle that fits the detected polygon (rotated to any angle), and saves the rectangle to a file, along with the coordinates of the detected polygon's vertices. From this, croprotate.py rotates and crops the video so that only the bounding rectangle ends up in the ROI output.
If you're far in the future and there are a bunch more comments after this point, then you can probably stop reading this comment here.
However, track.py has not yet been changed from its initial form; its output is unchanged from when I inherited the code. My present thinking is to have the MATLAB code output the first X,Y location at which the ant is seen and the last X,Y location, and then to turn that into which side of the polygon the ant entered and exited through, but I'm not yet entirely sure that this is where I want to go with it.
I'm also not yet entirely sure how I want to load the polygons into the rest of the code. My present inclination is to modify bridgedetect.py to store the polygon vertices relative to the ROI (where they will be after rotating and cropping), and then to pass them off to a new Python file that maps the Xi,Yi,Xf,Yf 4-tuples to the lines that were crossed. However, this is the end of a week of work, so I'll cross that bridge and decide what exactly to do next week.
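The 4-tuple-to-crossed-edges mapping could be done in pure Python with a standard segment-intersection test. Everything below is a hypothetical sketch, not code from the repo; it treats the track as a straight line from first to last sighting and ignores tracks that start or end exactly on an edge:

```python
def _ccw(a, b, c):
    """Signed cross product: positive if a -> b -> c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2."""
    d1 = _ccw(q1, q2, p1)
    d2 = _ccw(q1, q2, p2)
    d3 = _ccw(p1, p2, q1)
    d4 = _ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def crossed_edges(track, polygon):
    """Indices of polygon edges crossed by the straight line from the
    track's first point to its last.

    track: an (xi, yi, xf, yf) 4-tuple; polygon: list of (x, y) vertices.
    Edge i runs from vertex i to vertex (i+1) % n."""
    xi, yi, xf, yf = track
    n = len(polygon)
    return [i for i in range(n)
            if segments_cross((xi, yi), (xf, yf),
                              polygon[i], polygon[(i + 1) % n])]
```

A track that enters and exits would cross two edges; a track that only enters (or is lost inside the ROI) would cross one or none, which might be a useful sanity check on the tracking output.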
I modified track.py as per my thoughts in the comment above, and also added the first and last frames during which the ant is tracked (though that range includes frames after the ant leaves and before the tracker gives up on it). I still haven't loaded the polygons into the rest of the code, however.
Also, an idea I thought of but don't have time to execute: the ROI detection should only look at contours which have other contours inside of them. All of the random noise blobs are continuous, solid shapes, while the bridges appear as hollow polygons, so excluding contours without interior contours will remove the noise.
This will bypass the issue sometimes found in real data where images are too noisy. It will also let me be much less aggressive with the erosion (or maybe not erode at all?) and therefore more sensitive to real polygons, since smaller polygons are sometimes erased entirely.
I modified the roi detection code to only look at contours which have contours inside of them. I also removed the opening and dilating steps, because opening is no longer needed to remove random noise, and dilating would fill in the entire internal area of some polygons.
However, the polygon detection is very inaccurate with vertex positioning. I need to tweak parameters to correct the erroneous polygon detection, which currently often results in images like these:
As of merge 94760c, this has been improved a lot. The algorithm is now getting a majority of polygons correct.
The worst failure case, and the one which needs the most attention, is that it gets pentagon vertices wrong. Here's an example of a failure: I suspect the failure is caused by the pentagon being partially occluded in the source video, which may lead to inaccuracy in the algorithm. I'm not sure how to address this, especially because some of our nest setups seem to have partially occluded pentagons while others don't. This issue is the most important one to tackle, because these erroneous ROIs will not suffice for tracking.
The other, more common failure case is the insertion of extra vertices, as happened here: and here: Some polygons also have vertices slightly shifted from their original positions, as here: These failures are less important, as they probably won't affect the pipeline's functionality, though it'd be nice to have them resolved.
With the creation of a script that allows manual specification of ROIs, and the use of important edges to restrict tracks to entering and exiting on those edges, I'm going to call the occasional inaccuracy in the pipeline acceptable.
We can use OpenCV and Python to find pixels that match a given hue, and then find clusters of points that represent bridges. The extreme x and y values of each cluster will indicate the coordinates of its rectangular area.
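The extreme-value idea from this original comment amounts to taking the min/max coordinates of the matching pixels. A minimal sketch, assuming a single cluster per binary mask (a real image with several bridges would first need per-cluster labeling, e.g. with connected components); `cluster_bounds` is a hypothetical name:

```python
import numpy as np

def cluster_bounds(mask):
    """Corners of the rectangular area covering all nonzero pixels:
    (x_min, y_min, x_max, y_max), from the extreme x and y values."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no matching pixels
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```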