We just wrote some scripts to analyze the performance of the Megadetector and Megadetector + classifier pipelines at the object and sequence level, so a developer with the appropriate AWS credentials can run these scripts locally and generate reports on how models are performing for a given Project.
However, it might be useful to expose this feature to users, perhaps by including it in the `getStats` return payload. Roughly, I think this would entail the following:
[ ] A UI that allows users to enter all of the necessary config params (most importantly, the "validation" class mappings, which cannot be inferred and must be entered manually — see the sketch after this list)
[ ] Package the analysis code up and deploy it as a task (see the task sketch below)
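
For concreteness, here is a minimal sketch of the core computation: re-mapping model labels through the user-supplied class mappings and computing per-class precision/recall/F1. All names here (`ClassMappings`, `LabeledObject`, `computeMetrics`) are hypothetical illustrations, not existing types in this codebase:

```ts
// Hypothetical shapes — illustrative only, not part of the existing API.
type ClassMappings = Record<string, string>; // model label -> validation label

interface LabeledObject {
  predicted: string; // label assigned by the model
  validated: string; // label confirmed by a human reviewer
}

interface ClassMetrics {
  precision: number;
  recall: number;
  f1: number;
}

function computeMetrics(
  objects: LabeledObject[],
  mappings: ClassMappings,
): Record<string, ClassMetrics> {
  const tp: Record<string, number> = {};
  const fp: Record<string, number> = {};
  const fn: Record<string, number> = {};

  for (const obj of objects) {
    // Re-map the model's label into the validation label space.
    const pred = mappings[obj.predicted] ?? obj.predicted;
    if (pred === obj.validated) {
      tp[pred] = (tp[pred] ?? 0) + 1;
    } else {
      fp[pred] = (fp[pred] ?? 0) + 1;
      fn[obj.validated] = (fn[obj.validated] ?? 0) + 1;
    }
  }

  const classes = new Set([...Object.keys(tp), ...Object.keys(fp), ...Object.keys(fn)]);
  const report: Record<string, ClassMetrics> = {};
  for (const cls of classes) {
    // Guard denominators so a class with no predictions scores 0, not NaN.
    const precision = (tp[cls] ?? 0) / Math.max((tp[cls] ?? 0) + (fp[cls] ?? 0), 1);
    const recall = (tp[cls] ?? 0) / Math.max((tp[cls] ?? 0) + (fn[cls] ?? 0), 1);
    const f1 =
      precision + recall === 0 ? 0 : (2 * precision * recall) / (precision + recall);
    report[cls] = { precision, recall, f1 };
  }
  return report;
}
```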
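And a rough sketch of how that computation could be packaged as a task whose output gets folded into the `getStats` payload. The task signature and the `fetchLabeledObjects` helper are assumptions for illustration; the real interface will depend on how our other tasks are deployed:

```ts
// Hypothetical config the UI would collect from the user.
interface PerfAnalysisConfig {
  projectId: string;
  mappings: ClassMappings; // manually entered in the UI
  level: 'object' | 'sequence'; // analysis granularity
}

// Placeholder for whatever query pulls a Project's model predictions and
// human-validated labels from the database — assumed, not an existing function.
async function fetchLabeledObjects(
  projectId: string,
  level: 'object' | 'sequence',
): Promise<LabeledObject[]> {
  return []; // stub
}

// Hypothetical task handler: runs the analysis and returns a report that
// getStats could attach to its payload.
async function performanceAnalysisTask(config: PerfAnalysisConfig) {
  const objects = await fetchLabeledObjects(config.projectId, config.level);
  const report = computeMetrics(objects, config.mappings);
  return { projectId: config.projectId, level: config.level, report };
}
```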