Which part of the content does your suggestion apply to?

https://datacarpentry.org/image-processing/01-introduction.html#uses-of-image-processing-in-research

How could the content be improved?
@ethanwhite posted this recent preprint, describing a Python+Snakemake workflow for monitoring wading birds in the Everglades using drones (importantly different from wading birds using drones in the Everglades 😅). I thought it could be a good fit for the Uses of Image Processing in Research list.
Furthermore, if somebody still wants to take a run at #212, I wonder if this summary from Ethan's blogpost might provide a nice scaffold for that discussion?

"We have to: 1) transfer data from the field; 2) identify ground control points; 3) combine 100s of individual drone images into one orthomosaic; 4) crop the ortho into pieces for the model; 5) run the model; 6) combine the crop-level predictions; 7) output the predictions; 8) archive the predictions and transfer them to a web server for visualization (which also involves a bunch of steps to display the imagery itself)."
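For anyone picking this up, a minimal Snakefile sketch along these lines might help show learners how a few of those steps (roughly 3-6) could be chained together with Snakemake. The rule names, script paths, and file layout below are hypothetical placeholders, not taken from Ethan's actual workflow.

```python
# Hypothetical Snakefile sketch (not the authors' actual workflow):
# rule names, script paths, and file locations are made up to illustrate
# how steps 3-6 of the summary above could be chained with Snakemake.

ORTHO = "processed/everglades_ortho.tif"
CROPS = "processed/crops"
PREDICTIONS = "results/bird_predictions.csv"

rule all:
    input:
        PREDICTIONS

# Step 3: combine individual drone images into one orthomosaic,
# using the ground control points identified in step 2.
rule build_orthomosaic:
    input:
        images="raw/drone_images",
        gcps="raw/ground_control_points.csv"
    output:
        ORTHO
    shell:
        "python scripts/build_ortho.py --images {input.images} "
        "--gcps {input.gcps} --out {output}"

# Step 4: crop the orthomosaic into tiles the model can handle.
rule crop_ortho:
    input:
        ORTHO
    output:
        directory(CROPS)
    shell:
        "python scripts/crop_ortho.py --ortho {input} --out {output}"

# Steps 5-6: run the detection model on each crop and merge the
# crop-level predictions into a single table.
rule predict_and_merge:
    input:
        CROPS
    output:
        PREDICTIONS
    shell:
        "python scripts/predict.py --crops {input} --out {output}"
```

Even a stripped-down example like this could make the "pipelines, not one-off scripts" point in that discussion concrete without needing the full workflow from the preprint.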