Open jeffstjean opened 1 year ago
Just putting your thoughts into writing; there is more work to build on this in the future.
Class: `TargetDetect`

- `TargetDetect()`: calls the tkinter color selector to build `color_masks`
- `TargetDetect(color_masks: list)`
- `getmasks() -> list`
- `setmasks()`: calls the tkinter color selector to modify the `color_masks` in the list
- `setmasks(color_masks: tuple)`: manually defined masks in a list
- `calculateGPS(img_file_address: str, telemetry_file_address: str) -> [tuple (long/lat), color_mask]`: calls the other functions
- `calculateGPS(img_raw: PIL, telemetry_list: list) -> [tuple (long/lat), color_mask]`: calls the other functions
- `objectDetection(image_raw: PIL) -> [tuple (pixel), color_mask]`: this is the vision-system code that we are bringing in here
- `imageCoordinateToWorldCoordinate(telemetry_list: list, image_coordinate: tuple (pixel)) -> tuple`: most of the main.py code is copied and pasted in here
- `writeRealWorldPoint(tuple (long/lat), color_mask)`: just writes to our file which stores the real-world points and their associated color
- `amalgamateClusters(color_mask) -> tuple (long/lat)`: reads the entire file after the manual flight and then clusters/filters by `color_mask` to find the average of the long/lats
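For concreteness, here is a minimal Python sketch of how this class could hang together, assuming the signatures listed above (the two constructor/`setmasks` overloads are simulated with optional arguments, since Python has no overloading). The `DetectionResult` container and the `_select_masks_with_tkinter` helper are hypothetical names added for illustration; the methods raising `NotImplementedError` would be filled in by the existing vision-system and main.py code.

```python
from dataclasses import dataclass
from typing import Optional

from PIL import Image


@dataclass
class DetectionResult:
    """Hypothetical result container: a GPS fix plus debug metadata."""
    gps: tuple                      # (long, lat)
    color_mask: tuple               # the mask that produced the detection
    pixel: Optional[tuple] = None   # (x, y) in the source frame, for debugging


class TargetDetect:
    def __init__(self, color_masks: Optional[list] = None):
        # With no masks supplied, fall back to the tkinter color selector.
        self.color_masks = (
            color_masks if color_masks is not None else self._select_masks_with_tkinter()
        )

    def getmasks(self) -> list:
        return self.color_masks

    def setmasks(self, color_masks: Optional[list] = None) -> None:
        # Manual override when masks are given; otherwise reopen the selector.
        self.color_masks = (
            color_masks if color_masks is not None else self._select_masks_with_tkinter()
        )

    def calculateGPS(self, img_raw: Image.Image, telemetry_list: list) -> Optional[DetectionResult]:
        """Aggregate function: detect a target in one frame and map it to long/lat."""
        detection = self.objectDetection(img_raw)
        if detection is None:
            return None                       # nothing found in this frame
        pixel, color_mask = detection
        long_lat = self.imageCoordinateToWorldCoordinate(telemetry_list, pixel)
        self.writeRealWorldPoint(long_lat, color_mask)
        return DetectionResult(gps=long_lat, color_mask=color_mask, pixel=pixel)

    # Small, single-purpose methods; bodies omitted because they come from
    # existing code (vision system, main.py) rather than anything new.

    def objectDetection(self, image_raw: Image.Image):
        raise NotImplementedError  # existing vision-system detection moves here

    def imageCoordinateToWorldCoordinate(self, telemetry_list: list, image_coordinate: tuple) -> tuple:
        raise NotImplementedError  # most of the current main.py math moves here

    def writeRealWorldPoint(self, long_lat: tuple, color_mask) -> None:
        raise NotImplementedError  # append (long/lat, color) to the points file

    def amalgamateClusters(self, color_mask) -> tuple:
        raise NotImplementedError  # post-flight clustering/averaging of the file

    def _select_masks_with_tkinter(self) -> list:
        raise NotImplementedError  # tkinter color-selector flow
```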
Right now, the GPS coordinate calculation is done in the main script. Instead, it should be performed by the `TargetDetect` class, so that its "inputs" are a frame and some telemetry and its "outputs" are GPS coordinates (along with other metadata like the detected pixel coordinates and any useful CV2 frames for debugging).

I say "inputs" and "outputs" because this won't necessarily be one function that does everything. Instead, we might break it up into multiple smaller functions, each with a specific focus. Think about the best way to break this up so that functions are small and have a single focus. We can add "aggregate" functions as well that do multiple things but keep the heavy lifting in small, bite-sized chunks for readability.
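As a rough illustration of what that "frame and telemetry in, GPS out" boundary could look like from the main script's side (building on the sketch above; the mask values, frame path, and telemetry layout here are placeholders, not the project's real formats):

```python
from PIL import Image

# Placeholder mask: one HSV lower/upper bound pair; the real masks come from
# the tkinter selector or from setmasks().
detector = TargetDetect(color_masks=[((0, 100, 100), (10, 255, 255))])

frame = Image.open("frames/frame_0042.jpg")   # hypothetical frame path
telemetry = [45.4215, -75.6972, 120.0, 0.0]   # placeholder: lat, long, altitude, heading

result = detector.calculateGPS(frame, telemetry)
if result is not None:
    print(f"Target at {result.gps} (pixel {result.pixel}, mask {result.color_mask})")
```

With that boundary in place, the main script only orchestrates frames and telemetry; the CV and geometry stay behind the class, and the post-flight step reduces to calling `amalgamateClusters` per color.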