Hi @luke-iqt 👋
I'd argue that not allowing None-valued ground truth labels is a feature, not a bug 😄 The rationale is that None means "missing", while an empty Detections() means "was included in labeling efforts but nothing was present".
My suggestion would be to either:
- use dataset.exists("detections") to grab a view that contains only samples with non-None ground truth, and then evaluate on that, or
- replace the None-valued ground truths with empty Detections(), and then evaluate on the entire dataset
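To make both options concrete, here is a minimal sketch (the "detections" ground truth field, a "predictions" field, and the dataset name are assumptions for illustration):

import fiftyone as fo

dataset = fo.load_dataset("my-dataset")  # assumption: your dataset's name

# Option 1: evaluate only the samples that have non-None ground truth
labeled_view = dataset.exists("detections")
labeled_view.evaluate_detections(
    "predictions",          # assumption: field containing predicted detections
    gt_field="detections",
    eval_key="eval",
)

# Option 2: replace None ground truths with empty Detections(),
# then evaluate the entire dataset, including unlabeled negatives
for sample in dataset:
    if sample["detections"] is None:
        sample["detections"] = fo.Detections(detections=[])
        sample.save()

dataset.evaluate_detections("predictions", gt_field="detections", eval_key="eval")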
That is a great point @brimoor 🙌
I will update my detection import process so that samples without any labeled Detections are assigned an empty one. I am using the LabelBox import function... it would be interesting to add an optional argument to import_from_labelbox() that creates an empty Detections or Classification for samples that come back without one. Here is an example of a JSON item from a LabelBox export that doesn't have any labels:
{
"ID": "ckou3mobp00003c5si8rptn43",
"DataRow ID": "ckou3j0qe9zix0yasg4zbai6i",
"Labeled Data": "https://storage.labelbox.com/",
"Label": {},
"Created By": "lberndt@iqt.org",
"Project Name": "jsm-test",
"Created At": "2021-05-18T13:55:40.000Z",
"Updated At": "2021-05-18T13:55:53.000Z",
"Seconds to Label": 7.335000000000001,
"External ID": null,
"Agreement": -1,
"Benchmark Agreement": -1,
"Benchmark ID": null,
"Dataset Name": "jsm-test",
"Reviews": [],
"View Label": "https://editor.labelbox.com?project=",
"Has Open Issues": 0,
"Skipped": true
}
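For anyone post-processing the raw export, a small pass like this can flag such rows before or after import; a minimal sketch (the export filename is an assumption; the "Label", "Skipped", and "DataRow ID" keys come from the example above):

import json

# assumption: path to the raw LabelBox export
with open("labelbox_export.json", "r") as f:
    export = json.load(f)

# collect the DataRow IDs of items that were skipped or have no labels
unlabeled_ids = [
    item["DataRow ID"]
    for item in export
    if item.get("Skipped") or not item.get("Label")
]

print(f"{len(unlabeled_ids)} items in the export have no labels")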
In case it is useful to anyone, here is a little snippet for adding empty detections to the samples in a view:

import fiftyone as fo

view = dataset.match_tags("eval")
for sample in view:
    if sample["detections"] is None:
        sample["detections"] = fo.Detections(detections=[])
        sample.save()
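If the view is large, a bulk write avoids the per-sample loop; a sketch, assuming a FiftyOne version that supports values()/set_values():

values = [
    gt if gt is not None else fo.Detections(detections=[])
    for gt in view.values("detections")
]
view.set_values("detections", values)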
@luke-iqt ah I see, thanks for sharing more details about your import workflow. I like your suggestion for handling None vs empty when importing from annotation vendors: https://github.com/voxel51/fiftyone/issues/1113.
System information
FiftyOne version (run fiftyone --version): fiftyone v0.9.3, Voxel51, Inc.
Commands to reproduce
As thoroughly as possible, please provide the Python and/or shell commands used to encounter the issue. Application steps can be described in the next section.
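The exact call is not preserved here, but a minimal sketch of the kind of usage that hits the issue (the dataset and field names are assumptions):

import fiftyone as fo

dataset = fo.load_dataset("rare-object-dataset")  # assumption: dataset name

# some samples have no ground truth at all, i.e. sample["detections"] is None
dataset.evaluate_detections(
    "predictions",          # assumption: predicted detections field
    gt_field="detections",  # assumption: ground truth field
    eval_key="eval",
)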
Describe the problem
evaluate_detections() expects the ground truth field to be present on every sample and to contain detections. However, I am working with a dataset where the object I am trying to detect is rare. I would like to include images without the object in order to better evaluate false positives. When evaluate_detections() runs into a sample whose gt_field is set to None, it crashes with the following error:
The sample it failed on is:
What areas of FiftyOne does this bug affect?
App: FiftyOne application issue
Core: Core fiftyone Python library issue
Server: Fiftyone server issue
Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the FiftyOne codebase?