Open JLC827 opened 2 years ago
Hi,

I'm interested in using the SIPEC pipeline to perform identification, counting, and behaviour classification for primates and other animals.

Would it be reasonable to use bounding boxes as pseudo segmentation masks for training the SegNet stage, i.e. treat the entire area enclosed by the bounding box as the mask? (A rough sketch of what I mean is at the end of this post.)

The data I am working with will be in bounding-box format because it is quicker and easier to label. If I understand the paper correctly, the segmentation masks are only used to generate a bounding box for the later stages, so the precise segmentation outline is unimportant.

Please let me know if you can think of any issues with this approach. It would make things much simpler if it works; otherwise I will have to find an alternative object detector and slot it into the pipeline.

Thanks!
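For concreteness, here is a minimal sketch of the "pseudo mask" idea, assuming the boxes are stored as (x, y, width, height) with (x, y) the top-left corner in pixels (the argument names and array conventions are my assumptions, not SIPEC's):

```python
import numpy as np

def bbox_to_pseudo_mask(x, y, w, h, image_height, image_width):
    """Return a binary mask that simply fills the rectangle enclosed
    by the bounding box, instead of a true segmentation outline."""
    mask = np.zeros((image_height, image_width), dtype=np.uint8)
    # Clip the box to the image bounds before filling.
    x0, y0 = max(int(x), 0), max(int(y), 0)
    x1, y1 = min(int(x + w), image_width), min(int(y + h), image_height)
    mask[y0:y1, x0:x1] = 1
    return mask

# e.g. a 100x80 box at (10, 20) in a 640x480 frame
mask = bbox_to_pseudo_mask(10, 20, 100, 80, 480, 640)
```

The SegNet stage would then be trained on these filled rectangles rather than on animal outlines.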
Hi, I think it may work. Could you send me an example of what your annotation input looks like? Thanks and best!

Ah, great to hear, thank you.

The annotation data comes from the citizen science website Zooniverse. The example below shows the annotations for a series of frames. The plan is to convert this into the same format as the VIA annotation data.

(For each annotation we also have the corresponding image name and some other info; this is just an extract of part of the annotation data.)
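As for the conversion itself, here is roughly how I'm planning to map each bounding box onto a VIA-style region. The field layout is based on my reading of the VIA 2.x JSON export, and the x/y/width/height names are placeholders for however the Zooniverse boxes end up stored, so treat this as a sketch rather than a finished converter:

```python
import json

def bbox_to_via_region(x, y, w, h):
    """One bounding box as a VIA-style rectangle region."""
    return {
        "shape_attributes": {
            "name": "rect",
            "x": int(x),
            "y": int(y),
            "width": int(w),
            "height": int(h),
        },
        "region_attributes": {},
    }

def build_via_entry(filename, file_size_bytes, boxes):
    """Per-image entry keyed the way VIA keys its files (filename + size)."""
    return {
        f"{filename}{file_size_bytes}": {
            "filename": filename,
            "size": file_size_bytes,
            "regions": [bbox_to_via_region(*b) for b in boxes],
            "file_attributes": {},
        }
    }

# Example: one frame with two boxes given as (x, y, width, height).
entry = build_via_entry("frame_0001.jpg", 123456,
                        [(10, 20, 100, 80), (150, 60, 90, 70)])
print(json.dumps(entry, indent=2))
```

If the SIPEC loader expects polygon regions rather than rectangles, each box could instead be written out as a four-point polygon; I haven't checked that yet.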