When reviewing annotation quality, FiftyOne allows a reviewer to compare a labeler's annotations against the ground truth in the same image and to change the color of the labeled elements depending on the number of mistakes made. The review process would be sped up if not only the colors of the labeled elements could be modified, but also the color of the attribute text.
What areas of FiftyOne does this feature affect?
[x] App: FiftyOne application
[ ] Core: Core fiftyone Python library
[ ] Server: FiftyOne server
Details
To give additional context, here is an example of what I am referring to. My colleague and I labeled some images of the Lagenda dataset with our own attributes: for example, whether the person is occluded ("Oclusión"), whether it is a reflection in a mirror ("Reflejo"), and whether it is real or not ("Foto"). To check the mistakes my colleague made, I compared his annotations with mine and, depending on the number of errors, changed the color of the annotation:
Checking the attributes, we can't immediately see which one is wrong, as all of them appear in the same color and we have to compare them one by one against the ground truth:
The wrong attribute in the image above is the occlusion ("Oclusión"), which should be set to "Yes" instead of "No". If I could change the color of the attributes, as shown in the next image, it would be faster to spot the mistake:
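To make the request concrete, here is a minimal, self-contained sketch of the comparison logic behind the workflow described above. Everything here (the `color_attributes` helper, the attribute names, and the hex colors) is hypothetical, not part of FiftyOne's API; it only illustrates the per-attribute match/mismatch coloring the feature would enable in the App.

```python
# Hypothetical helper illustrating the requested behavior: compare a
# labeler's attribute values against the ground truth and assign a color
# to each attribute, so mismatched attributes could be rendered in a
# highlight color instead of all appearing the same.

MATCH_COLOR = "#00ff00"     # attribute agrees with the ground truth
MISMATCH_COLOR = "#ff0000"  # attribute disagrees with the ground truth

def color_attributes(labeler_attrs, ground_truth_attrs):
    """Return a {attribute: color} mapping based on agreement with the ground truth."""
    return {
        name: MATCH_COLOR if value == ground_truth_attrs.get(name) else MISMATCH_COLOR
        for name, value in labeler_attrs.items()
    }

# Example from this issue: only "Oclusión" is wrong ("No" instead of "Yes")
labeler = {"Oclusión": "No", "Reflejo": "No", "Foto": "Yes"}
ground_truth = {"Oclusión": "Yes", "Reflejo": "No", "Foto": "Yes"}
print(color_attributes(labeler, ground_truth))
```

With a mapping like this, the App could draw "Oclusión" in the mismatch color and the other attributes in the match color, making the error visible at a glance.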
I didn't find anything in the docs suggesting this is currently possible, but if it is, I would appreciate a reference to the feature.
Thanks in advance.
Willingness to contribute
The FiftyOne Community welcomes contributions! Would you or another member of your organization be willing to contribute an implementation of this feature?
[ ] Yes. I can contribute this feature independently
[ ] Yes. I would be willing to contribute this feature with guidance from the FiftyOne community
[ ] No. I cannot contribute this feature at this time