[Open] abfleishman opened this issue 6 months ago
@abfleishman interesting - that spec is new to me but I'm taking a look now. At first glance this should be possible.
This is somewhat subjective, but I don't love that the lowest level the observationLevel field appears to support is the media level (i.e., you can associate an observation with an image). For ML training purposes it's much more useful to go down to the object level (i.e., associate an observation/label with a localized bounding box within an image). Animl's data model natively supports this, as does the COCO for Camera Traps JSON format that you can export from Animl.
In my (admittedly somewhat limited; I'm not an ML engineer) experience, the best practice is to crop animals out of their backgrounds when training classifiers, so having that level of granularity, plus bounding box coordinates in your label/annotation data, helps a ton.
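As a rough sketch of what consuming object-level annotations for cropping could look like (assuming Camtrap DP-style relative bbox fields, i.e., fractions of the media dimensions with the origin at the top-left; the helper name is made up):

```python
def bbox_to_pixel_crop(bbox_x, bbox_y, bbox_w, bbox_h, img_width, img_height):
    """Convert a relative bounding box (fractions of the media size,
    origin top-left) to an absolute (left, upper, right, lower) pixel
    box, e.g. suitable for PIL's Image.crop()."""
    left = round(bbox_x * img_width)
    upper = round(bbox_y * img_height)
    right = round((bbox_x + bbox_w) * img_width)
    lower = round((bbox_y + bbox_h) * img_height)
    return (left, upper, right, lower)

# Example: a detection covering the center quarter of a 1920x1080 frame
print(bbox_to_pixel_crop(0.25, 0.25, 0.5, 0.5, 1920, 1080))
# (480, 270, 1440, 810)
```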
@nathanielrindlaub They do support object (bounding box) level annotations. See the bbox section and below.
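For illustration, here's roughly what an observation row carrying those bbox fields could look like in a Camtrap DP-style observations table (field names per my reading of the spec; the identifiers and values are made up):

```python
import csv
import io

# Hypothetical observation illustrating the object-level fields:
# observationLevel plus the relative bbox* coordinates (0-1 fractions
# of the media dimensions).
observation = {
    "observationID": "obs-001",       # made-up identifier
    "mediaID": "img-001",             # made-up identifier
    "observationLevel": "media",
    "observationType": "animal",
    "scientificName": "Lynx rufus",
    "bboxX": 0.25,        # relative x of the box's top-left corner
    "bboxY": 0.25,        # relative y of the box's top-left corner
    "bboxWidth": 0.5,
    "bboxHeight": 0.5,
}

# Camtrap DP packages store tables as CSV, so serialize one row
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=observation.keys())
writer.writeheader()
writer.writerow(observation)
print(buf.getvalue())
```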
Camtrap DP defines an open, Frictionless-based camera trap data exchange format. It would be really nice if Animl could easily export data to this format (as well as read data from it!).
See this repo