Open eellison opened 5 years ago
I can work on this. I will iterate on this until we're happy with where we're at.
Moved this to lower priority while we push on the other features necessary to start the evaluation phase at MTC.
Per @eellison - the best way to go about this is to create a set of test images (with a mix of clear, blurry, dark, light, pixelated, etc.) that we can use to compare iterative changes to our image recognition algorithm. The ~115 records we capture as part of our initial evaluation can serve as the basis for the set of test images, and we can add some of our own as we figure out the more specific weaknesses of our alignment/processing steps.
The backend Python code receives a stream of static images from the camera and tries to align every image it receives to the chosen paper record. It only "decides" that it has a good frame after 5 consecutive frames all align well to the paper record. The issue is that the 5th frame it takes isn't necessarily the best one for annotation (e.g. it could be blurry, too dark, or too light). Ideas for how to improve this:
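One idea along these lines (just a sketch, not what the backend currently does — the `sharpness` and `best_frame` helpers are hypothetical, and it assumes grayscale frames as NumPy arrays): instead of annotating the 5th frame, keep all 5 aligned frames and pick the sharpest one, scored by the variance of a simple Laplacian response (blurry frames have weak edges and therefore low variance):

```python
import numpy as np

def sharpness(gray):
    """Score focus by the variance of a 4-neighbor Laplacian response.

    Blurry or featureless frames produce a flat Laplacian (low variance);
    sharp frames with strong edges produce a high variance.
    """
    g = np.asarray(gray, dtype=np.float64)
    lap = (
        -4.0 * g[1:-1, 1:-1]
        + g[:-2, 1:-1] + g[2:, 1:-1]
        + g[1:-1, :-2] + g[1:-1, 2:]
    )
    return lap.var()

def best_frame(frames):
    """Return the sharpest frame from the run of aligned frames."""
    return max(frames, key=sharpness)
```

The same buffer-and-score approach would also let us reject frames whose mean brightness falls outside an acceptable range before they ever count toward the 5-frame streak.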