Here is how I believe it can be done:

1. Manually extract the moto out of each photo `webcam_i.png` from the database into `moto_i.png`.
2. Because `moto_i.png` is a sub-image of `webcam_i.png`, we know an exact match of it exists in the original photo, and its expected coordinates can be recovered with OpenCV's template matching functions (see the first sketch after this list).
3. This means we can use supervised training methods.
4. Pick an arbitrary subset of the moto images as the training set.
5. Pick the remaining photos (that is, excluding `webcam_i.png` if and only if `moto_i.png` is part of the training set) as the testing set.
6. Run the moto detection on each photo of the testing set and compare its expected coordinates against the actual ones (found as described above).
7. Set an arbitrary distance between the two points as the cutoff that distinguishes a match from a mismatch (see the second sketch after this list).
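Since each template is an exact crop, the ground-truth step is mostly mechanical. Below is a minimal sketch of it in Python, assuming OpenCV (`cv2`) is installed; the file paths and the helper name `expected_coordinates` are illustrative, not part of any existing code.

```python
import cv2

def expected_coordinates(webcam_path, moto_path):
    """Locate the cropped moto inside the original webcam photo."""
    scene = cv2.imread(webcam_path)
    template = cv2.imread(moto_path)

    # The template is an exact sub-image, so normalized correlation is enough.
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    # max_loc is the top-left corner of the best match; with an exact crop,
    # max_val should be very close to 1.0.
    return max_loc
```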
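And here is a sketch of the split and the evaluation, assuming the ground-truth pairs `(webcam_path, expected_xy)` were built as above and that the detector under test exposes a hypothetical `detect_moto(image_path) -> (x, y) or None` function; the 20-pixel threshold is only a placeholder for the arbitrary distance mentioned in step 7.

```python
import math
import random

MAX_DISTANCE = 20  # pixels; placeholder for the arbitrary match/mismatch cutoff

def split(pairs, train_fraction=0.5, seed=0):
    """Pick an arbitrary subset of (webcam_path, expected_xy) pairs as the
    training set; everything else becomes the testing set."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * train_fraction)
    return pairs[:cut], pairs[cut:]

def is_match(expected, actual, max_distance=MAX_DISTANCE):
    """A detection counts as a match if it lands within max_distance pixels
    of the expected coordinates."""
    return math.dist(expected, actual) <= max_distance

def evaluate(detect_moto, testing_set):
    """Return the fraction of testing photos on which the detector produced
    a match; detect_moto is the algorithm under test (hypothetical signature)."""
    hits = 0
    for webcam_path, expected in testing_set:
        actual = detect_moto(webcam_path)
        if actual is not None and is_match(expected, actual):
            hits += 1
    return hits / len(testing_set)
```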
From there, one can do multiple things:

- Work on the moto detection algorithm to improve its accuracy.
- Try to boost the performance of the algorithm, for example by minimizing the training set.
There might be more advanced approaches, but this one should have a high ROI.