After finishing my thesis in May, I started work on a new version of the experimental strategy. Unlike the old version, this one runs the segmentation step in parallel for each drone image, making it much faster and more sensitive.
It works by running a portion of the old segmentation script on each drone image separately and then running the remainder of the segmentation step (i.e., the watershed algorithm) once, across all of the drone images. This second, watershed portion resolves conflicting plant shapes in an ensemble-like approach.
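The split described above can be sketched roughly as follows. This is a minimal illustration of the parallel-then-merge idea, not the actual pipeline code: the function names, the thresholding stand-in for the per-image step, and the majority vote stand-in for the shared watershed/conflict-resolution step are all hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def segment_one(image):
    """Hypothetical per-image portion of the old segmentation script.

    Here a simple threshold produces a rough foreground mask; the real
    step would be whatever precedes the watershed in the old script.
    """
    return (image > image.mean()).astype(np.uint8)


def resolve_conflicts(masks):
    """Stand-in for the shared, ensemble-like watershed step.

    A pixel is kept as plant only if a majority of the per-image masks
    agree, mimicking how conflicting plant shapes might be reconciled.
    """
    stack = np.stack(masks)
    return (stack.sum(axis=0) > len(masks) // 2).astype(np.uint8)


# Toy "drone images" in place of real orthomosaic tiles.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(4)]

# Per-image step runs independently, so it parallelizes trivially
# (a process pool or cluster scheduler would work the same way).
with ThreadPoolExecutor() as pool:
    masks = list(pool.map(segment_one, images))

# The remaining step runs once over all images together.
combined = resolve_conflicts(masks)
```

The key design point is that everything before the watershed is embarrassingly parallel across images, while the conflict-resolution step needs to see all of the per-image results at once.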
I also added training and testing steps to the pipeline, so users can create their own trained models, test those models, or both; all they need to provide is a truth dataset. Documentation for this is still forthcoming.