From discussions with users in the MapReader and AI for Biodiversity communities, there is a clear need for Scivision to allow for and facilitate multi-stage image analysis pipelines that might include more than one machine learning model.
For example, a user may first want to identify patches from maps containing buildings using the MapReader model, and then use a different model to classify the shapes of those buildings.
To allow this, the user would need to be able to use the output of the first model as the input to a subsequent model. This might require a change to what is returned by Scivision's model predict step, potentially requiring stricter limits on what is returned to ensure the format is compatible with other models.
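The chaining described above could be sketched as follows. This is a minimal illustration only, assuming hypothetical `PatchDetector` and `ShapeClassifier` classes and an invented `predict` return format; it does not reflect the real Scivision or MapReader APIs.

```python
# Hypothetical two-stage pipeline sketch. The classes, their `predict`
# signatures, and the returned dict format are stand-ins invented for
# illustration, not the actual Scivision or MapReader interfaces.

class PatchDetector:
    """Stage 1: find building patches in a map image (hypothetical)."""
    def predict(self, image):
        # Pretend two building patches were detected; each is a dict
        # holding a bounding box and the cropped patch data.
        return [
            {"bbox": (10, 10, 50, 50), "patch": "patch-data-1"},
            {"bbox": (60, 20, 90, 70), "patch": "patch-data-2"},
        ]

class ShapeClassifier:
    """Stage 2: classify the shape of a detected building (hypothetical)."""
    def predict(self, patch):
        return {"shape": "rectangular", "confidence": 0.9}

def run_pipeline(image, detector, classifier):
    """Feed each output of the first model into the second model.

    This only works because both stages agree on the patch format,
    which is the motivation for constraining what a Scivision model's
    predict step may return.
    """
    results = []
    for detection in detector.predict(image):
        label = classifier.predict(detection["patch"])
        results.append({**detection, **label})
    return results

results = run_pipeline("map.png", PatchDetector(), ShapeClassifier())
for result in results:
    print(result["bbox"], result["shape"])
```

The key design point is the shared contract between stages: the second model consumes exactly the structure the first one produces, so a standardised return format would let arbitrary compatible models be composed.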