voxel51 / fiftyone

Refine high-quality datasets and visual AI models
https://fiftyone.ai
Apache License 2.0

[BUG] Fiftyone 0.25.2 is not able to detect the eval keys while loading the dataset #4882

Closed: Yuvraj-Dhepe closed this issue 2 weeks ago

Yuvraj-Dhepe commented 3 weeks ago

Describe the problem

Hello team, thank you for a great open source tool. I am facing an issue with the latest version of FiftyOne when using evaluate_segmentations. In brief, I run evaluations using the evaluate_segmentations method of fo.Dataset, which works as intended and generates the evaluations, and exporting the evaluated dataset also works well. The problem is that when I load this dataset afresh in another Jupyter notebook and call dataset.list_evaluations(), an empty list is returned, when it should contain the evaluation keys.

Code to reproduce issue
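A minimal sketch of the flow described above (the original snippet was not included; the dataset name, field names, paths, and eval key below are placeholders):

import fiftyone as fo

dataset = fo.load_dataset("my_dataset")  # hypothetical dataset name

# Evaluate predicted semantic segmentations against the ground truth
dataset.evaluate_segmentations(
    "pred_segmentations",
    gt_field="gt_segmentations",
    eval_key="eval_seg",
)

print(dataset.list_evaluations())
# ["eval_seg"]

# Export the evaluated dataset
dataset.export(
    export_dir="/tmp/eval_dataset",
    dataset_type=fo.types.FiftyOneDataset,
)

# In a fresh Jupyter notebook, reload the dataset and check the eval keys
dataset2 = fo.Dataset.from_dir(
    dataset_dir="/tmp/eval_dataset",
    dataset_type=fo.types.FiftyOneDataset,
)

print(dataset2.list_evaluations())
# reported: [] (expected ["eval_seg"])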

System information

Other info/logs

Nothing else, but if anyone needs further information, let me know and I can provide it.

Willingness to contribute

The FiftyOne Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the FiftyOne codebase?

brimoor commented 3 weeks ago

Hmm, I'm not able to reproduce with the code below, which I believe is analogous to what you're describing. Can you try it yourself and let me know what you find?

import fiftyone as fo
import fiftyone.zoo as foz
import fiftyone.utils.labels as foul

# Load a small COCO subset with instance segmentations
dataset = foz.load_zoo_dataset(
    "coco-2017",
    split="validation",
    label_types="segmentations",
    classes=["person", "cat", "dog"],
    label_field="instances",
    max_samples=25,
    only_matching=True,
)

# Pixel values to use for each class in the rendered masks
mask_targets = {1: "person", 2: "cat", 3: "dog"}

# Render the instance segmentations as full-image semantic masks
foul.objects_to_segmentations(
    dataset,
    "instances",
    "gt_segmentations",
    mask_targets=mask_targets,
    output_dir="/tmp/segmentations",
)

# Use a copy of the ground truth as "predictions" so the evaluation is trivially perfect
dataset.clone_sample_field("gt_segmentations", "pred_segmentations")

# Evaluate the predicted masks against the ground truth masks
dataset.evaluate_segmentations(
    "pred_segmentations",
    gt_field="gt_segmentations",
    mask_targets=mask_targets,
    eval_key="eval",
)

print(dataset.list_evaluations())
# ["eval"]

# Export the evaluated dataset; the FiftyOneDataset format preserves run metadata
dataset.export(
    export_dir="/tmp/fod",
    dataset_type=fo.types.FiftyOneDataset,
)

# Reload it from disk, as one would in a fresh session
dataset2 = fo.Dataset.from_dir(
    dataset_dir="/tmp/fod",
    dataset_type=fo.types.FiftyOneDataset,
)

print(dataset2.list_evaluations())
# ["eval"]

# The stored evaluation results can be reloaded as well
results = dataset2.load_evaluation_results("eval")
results.print_report()

              precision    recall  f1-score   support

      person       1.00      1.00      1.00 1057138.0
         cat       1.00      1.00      1.00  234226.0
         dog       1.00      1.00      1.00   32373.0

    accuracy                           1.00 1323737.0
   macro avg       1.00      1.00      1.00 1323737.0
weighted avg       1.00      1.00      1.00 1323737.0

Yuvraj-Dhepe commented 2 weeks ago

@brimoor Thank you for your solution. Apologies, it was my mistake: after loading the dataset, I was cloning it using .clone(), and hence the evaluations were gone. You didn't have that step here, which is why the eval keys remained present.

brimoor commented 2 weeks ago

FYI- cloning an entire dataset should also preserve evaluation keys, as demonstrated below.

If you clone a view into a dataset, then evaluations and other such things will NOT be included.

import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")
dataset.evaluate_detections("ground_truth", eval_key="eval")

print(dataset.list_evaluations())
# ['eval']

# Cloning the entire dataset preserves evaluation runs
dataset2 = dataset.clone()

print(dataset2.list_evaluations())
# ['eval']

# Cloning a view into the dataset does NOT preserve evaluation runs
dataset3 = dataset.view().clone()

print(dataset3.list_evaluations())
# []