Hey @jkourou, thank you for raising the ticket! I will look into adding support for COCO RLE and may have a few follow-up questions to make sure I can add exactly what you need, so keep an eye on this ticket!
As for supporting other shapes, what kind of shapes did you have in mind?
I have just done some research and was unfortunately not able to find precise documentation or one-to-one mappings of what COCO RLE crowd datasets look like on images. Do you have any datasets or documentation that would make it easier for me to visualize the process of labelling images for the COCO RLE format and what the end result looks like? Thank you!
Hello @OvidijusParsiunas and thanks for the reply.
Regarding COCO RLE and how it would look: since the RLE represents a sequence of pixels, I would imagine a painted area (with opacity, and a different color depending on the category) covering exactly the pixels in the RLE mask.
As for the tool, apart from using any selection tool (rectangle, polygon, lasso, etc.) and including every pixel inside the shape in the mask, you could also easily mark an area with a brush/pencil ranging from 1 pixel wide to several.
I do not have any other example, since surprisingly few tools support this option, even though it is a valid and widely used format.
Also, as an added feature, it would be nice to be able to save COCO annotations in either polygon or RLE format, and to convert existing annotations from one format to the other.
I do not know if I have covered your questions, but please let me know how I can further help.
Thx!
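To make the painting idea above more concrete, here is a minimal sketch of decoding an uncompressed COCO RLE into a per-pixel mask before overlaying it with a category color at some opacity. It assumes the standard COCO convention of alternating background/foreground runs laid out in column-major order; the function and parameter names are purely illustrative and not part of the MyVision codebase.

```typescript
// A minimal sketch of decoding uncompressed COCO RLE into a binary mask.
// Assumed convention (standard for COCO): counts alternate between background
// and foreground runs, starting with background, over a column-major
// (Fortran-order) height x width grid. Names are illustrative only.
function decodeRle(counts: number[], height: number, width: number): Uint8Array {
  const mask = new Uint8Array(height * width); // column-major pixel buffer
  let offset = 0;
  let value = 0; // the first run is background
  for (const runLength of counts) {
    mask.fill(value, offset, offset + runLength);
    offset += runLength;
    value = 1 - value; // alternate background/foreground
  }
  return mask;
}

// Painting the mask then boils down to coloring every canvas pixel where
// mask[col * height + row] === 1 with the category color at some opacity.
```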
That sounds interesting! The pressing matter for me currently is that I can't seem to find real examples of what these datasets look like, which would give me a better understanding of how to implement this format correctly inside the myvision.ai tool. Do you have any examples that you could share? Thank you!
Here https://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch/#coco-dataset-format you will find a simple explanation and example, either in the video or by scrolling down to the annotations section.
Also, below is an example RLE mask for a 704x520 pixel image:
"annotations": [
{
"segmentation": {
"counts": [
191929,
3,
514,
7,
510,
9,
508,
10,
507,
12,
505,
13,
504,
14,
503,
16,
501,
17,
500,
19,
498,
20,
497,
21,
496,
23,
494,
25,
492,
27,
491,
28,
490,
28,
490,
29,
490,
29,
491,
28,
492,
26,
494,
24,
496,
22,
498,
20,
500,
18,
502,
16,
504,
14,
506,
11,
509,
7,
513,
3,
159117
],
"size": [
520,
704
]
},
"bbox": [
369,
0,
30,
53
],
"area": 539,
"image_id": 1,
"category_id": 1,
"iscrowd": 0,
"id": 1
}
]
I hope this helps.
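As a quick sanity check on the example above (assuming the usual uncompressed-RLE convention that the first count is a background run): the runs should cover every pixel of the 520x704 image, and the foreground runs should sum to the annotation's area field.

```typescript
// Sanity check for the example annotation above, using its exact run lengths.
const counts: number[] = [
  191929, 3, 514, 7, 510, 9, 508, 10, 507, 12, 505, 13, 504, 14, 503, 16,
  501, 17, 500, 19, 498, 20, 497, 21, 496, 23, 494, 25, 492, 27, 491, 28,
  490, 28, 490, 29, 490, 29, 491, 28, 492, 26, 494, 24, 496, 22, 498, 20,
  500, 18, 502, 16, 504, 14, 506, 11, 509, 7, 513, 3, 159117,
];

// Every pixel of the 520x704 image should be covered by exactly one run.
const totalPixels = counts.reduce((sum, run) => sum + run, 0);
console.assert(totalPixels === 520 * 704); // 366080

// Foreground runs sit at odd indices (the first run is background), and their
// sum should equal the annotation's "area" field.
const area = counts
  .filter((_, index) => index % 2 === 1)
  .reduce((sum, run) => sum + run, 0);
console.assert(area === 539);
```

Both checks hold: the runs cover all 366,080 pixels and the foreground runs sum to 539, matching the "area" field, so the example is internally consistent.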
That looks good, I'll investigate the options for creating an optimal UX around this format and report back on my findings soon.
I am unfortunately working on 2 other open source projects and I have a full time job, so I apologise if my communication is slow. But I am fully committing myself to this ticket and will keep you updated on the progress :)
Thanks for the response and interest!
Hey, I did some UX analysis and have come up with a number of options that could be used to allow the user to label objects for the RLE format:
The first two provide a unique way of labelling shapes, which can also be observed in advanced drawing tools like Photoshop when creating object masks. However, whilst precise, they carry the disadvantage of leaving a lot of room for error, as it can be quite difficult to contour objects with a mouse, especially if the user only has a touchpad. Additionally, implementing such functionality would require a lot of new code, as it is a new drawing approach that requires changes in various areas of the UI and extra functionality to make it efficient, e.g. undo and eraser buttons.
The third approach seems like the quickest win as I can reuse all of the existing code for polygons and only edit the dataset generation and parsing functionality.
Let me know what you think and if you anticipate any issues with the third approach. Thank you!
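To illustrate the third approach, here is a rough, hypothetical sketch of the generation side: rasterise the polygon the user drew into a binary mask, then run-length encode that mask column-major, starting with a (possibly zero-length) background run. None of these helpers exist in the codebase; the canvas-based rasterisation is just one possible implementation, and all names are placeholders.

```typescript
// Hypothetical sketch: polygon -> binary mask -> uncompressed COCO RLE counts.
type Point = { x: number; y: number };

// Rasterise a polygon into a column-major binary mask via an offscreen canvas.
function polygonToMask(polygon: Point[], width: number, height: number): Uint8Array {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d') as CanvasRenderingContext2D;
  ctx.beginPath();
  polygon.forEach((p, i) => (i === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y)));
  ctx.closePath();
  ctx.fill();
  const { data } = ctx.getImageData(0, 0, width, height); // row-major RGBA
  const mask = new Uint8Array(width * height); // column-major, as COCO expects
  for (let col = 0; col < width; col += 1) {
    for (let row = 0; row < height; row += 1) {
      const alpha = data[(row * width + col) * 4 + 3];
      mask[col * height + row] = alpha > 0 ? 1 : 0; // any coverage counts as foreground
    }
  }
  return mask;
}

// Run-length encode a column-major binary mask, starting with a background run
// (which is 0 if the very first pixel is foreground).
function maskToRleCounts(mask: Uint8Array): number[] {
  const counts: number[] = [];
  let currentValue = 0;
  let runLength = 0;
  for (const pixel of mask) {
    if (pixel === currentValue) {
      runLength += 1;
    } else {
      counts.push(runLength);
      currentValue = pixel;
      runLength = 1;
    }
  }
  counts.push(runLength);
  return counts;
}
```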
Hey.
Regarding the 3 approaches to the tools, these are correct; they are the ones I had in mind myself. You could, for example, avoid the fill tool entirely by filling the area in automatically whenever the brush tool closes a path.
You can of course begin by using the existing polygon tools and adding the generation and parsing functionality for the new RLE masks. That way you can give the user the option to either create a dataset from scratch or load an existing one with RLE masks.
That said, RLE masks allow very specific, small objects with irregular shapes to be annotated correctly. So in terms of UX, brushes combined with the zoom functionality are the right approach, since they give the user the level of control needed to make this work correctly.
But the generation and parsing must be done either way, so this can be the first step. Then, as a follow-up feature, you can add a simple brush with an adjustable width and an eraser.
Hope this helps and let me know what you think.
Thanks!
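Building on the sketch above, here is a hypothetical example of what "the generation and parsing must be done" could translate to when assembling a full annotation entry from a mask: the field layout mirrors the JSON example earlier in the thread, while the helper and parameter names are placeholders rather than existing MyVision code.

```typescript
// Hypothetical sketch of assembling a COCO annotation entry from a
// column-major binary mask plus its pre-computed RLE counts.
interface RleAnnotation {
  segmentation: { counts: number[]; size: [number, number] }; // [height, width]
  bbox: [number, number, number, number]; // [x, y, width, height]
  area: number;
  image_id: number;
  category_id: number;
  iscrowd: number;
  id: number;
}

function buildRleAnnotation(
  mask: Uint8Array, // column-major binary mask
  height: number,
  width: number,
  counts: number[],
  ids: { annotation: number; image: number; category: number },
): RleAnnotation {
  let minX = width, minY = height, maxX = -1, maxY = -1, area = 0;
  // Derive the bounding box and area from the foreground pixels.
  for (let col = 0; col < width; col += 1) {
    for (let row = 0; row < height; row += 1) {
      if (mask[col * height + row] === 1) {
        area += 1;
        if (col < minX) minX = col;
        if (col > maxX) maxX = col;
        if (row < minY) minY = row;
        if (row > maxY) maxY = row;
      }
    }
  }
  return {
    segmentation: { counts, size: [height, width] },
    bbox: [minX, minY, maxX - minX + 1, maxY - minY + 1],
    area,
    image_id: ids.image,
    category_id: ids.category,
    iscrowd: 0,
    id: ids.annotation,
  };
}
```

Applied to the earlier example, this kind of computation reproduces its fields: 30 foreground columns give a bbox width of 30, rows 0 through 52 give a height of 53, and the 539 foreground pixels give the area.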
Hey, thanks for the reply! I think I will go ahead with the initial polygon approach to get something out there for this feature, then aim to start the brush capability at a later stage, as I currently need to ship a couple of features on my other open source projects.
I will create a new branch for this. To note, I usually push my commits in big batches as I sometimes work remotely with no internet, so if you don't see any progress on GitHub, I am likely still working on the feature offline :)
Once again, thanks for requesting this feature, and we should hopefully see it in MyVision.ai soon!
Sounds fine.
Thanks again!
Code for this feature is going to be added to the following branch: https://github.com/OvidijusParsiunas/myvision/tree/issue/coco-rle
Hi @jkourou, I have been actively working on multiple other open-source projects and have a few more to go before my hands are free to progress on this feature. It is unfair to make you wait, and I am troubled that I cannot get to this any sooner. Alas, because I do not see myself being able to progress on this in the near future, I am going to temporarily close this issue until I can work on it again. The branch is by all means still open, you are more than welcome to make any contributions to it as you please, and I am more than happy to provide feedback and advice in the meantime. Regards.
Hello!
Do you have plans to implement support for COCO RLE format annotations, and not only polygons?
{"segmentation": {"counts": [], "size": []}}
Also, if it is not among your priorities, do you have any suggestions for how you would implement such a feature in terms of your app's architecture?
Thank you for all the work till now.