Open · naglepuff opened 1 year ago
Having different parameters shouldn't break training, but it will make predictions less consistent. Adding new images with different parameters and then labelling anything in those new images could degrade performance on the original images until enough further labelling is done. I think we currently save the parameters used as a comment on the annotation for the initial epoch generated by the algorithm, so it should be possible to read them back and rerun with the same parameters. We could expose this information in another location if that would be more useful.
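For illustration, a minimal sketch of how those saved parameters might be read back through girder_client (the storage location — `description` on the annotation document — and the connection details are assumptions, not the plugin's confirmed layout):

```python
import json

import girder_client  # client for the Girder server hosting the annotations

# Hypothetical connection details; adjust for the actual deployment.
gc = girder_client.GirderClient(apiUrl='https://girder.example.com/api/v1')
gc.authenticate(apiKey='...')


def find_saved_parameters(item_id):
    """Scan an image item's annotations for the parameter comment saved with
    the initial epoch and return it as a dict, or None if none is found.

    Reading the comment from the annotation's 'description' field is an
    assumption about where the plugin stores it.
    """
    for record in gc.get('annotation', parameters={'itemId': item_id}):
        description = record.get('annotation', {}).get('description', '')
        try:
            params = json.loads(description)
        except (TypeError, ValueError):
            continue  # not the parameter comment; keep looking
        if isinstance(params, dict):
            return params
    return None
```

If the comment turns out to be stored elsewhere (e.g. in the annotation's attributes), only the lookup line would change.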
This should be fully resolved via various changes already in master and with #119.
This came up on a call. There was concern about a scenario where users run SuperpixelClassification on a folder of images, generate superpixels, and do some active learning, then add new images to the folder and rerun SuperpixelClassification.
Rerunning SuperpixelClassification will generate superpixels and features for the newly added images, but what if the parameters used to generate those features are different from those used in the initial run? What are the implications of this situation? Does it break training? Is there something we can do to prevent this or warn users? (A hypothetical check is sketched below.)
@manthey
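As for preventing the mismatch or warning users: one possibility, sketched here with illustrative parameter names rather than the plugin's actual API, is to compare the recorded parameters against the ones about to be used before generating features for the new images:

```python
import warnings


def check_parameter_consistency(saved_params, current_params):
    """Compare the parameters recorded on the initial run against the ones
    about to be used, and warn on any mismatch (both are plain dicts)."""
    differing = {
        key: (saved_params.get(key), current_params.get(key))
        for key in set(saved_params) | set(current_params)
        if saved_params.get(key) != current_params.get(key)
    }
    if differing:
        warnings.warn(
            'Feature-generation parameters differ from the initial run; '
            'predictions may be less consistent until more labelling is '
            'done: %r' % (differing,))
    return differing


# Example with made-up parameter names: warns because 'patchSize' changed.
check_parameter_consistency(
    {'patchSize': 100, 'magnification': 10},
    {'patchSize': 128, 'magnification': 10})
```

Refusing to run on a mismatch would be the stricter option; a warning keeps the rerun possible while making the consistency trade-off visible.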