h-fernand opened 6 months ago
As an update, it appears that this dramatic memory usage occurs whenever the function is used correctly. The reason it did not eat all of the RAM without `coco_id_field` set was that the annotations were created in the wrong order; when I fix the order of the annotations, the memory leak occurs. I'm convinced this is a memory leak because my prediction annotation file is only 2GB, there are only 1000 images in the test set that I'm adding predictions to, and the program ends up eating all of the RAM on a system with 256GB of RAM.
@h-fernand this sounds similar to the issue reported in https://github.com/voxel51/fiftyone/issues/4293 which has been resolved in https://github.com/voxel51/fiftyone/pull/4354.
(FYI the above patch will be released in `fiftyone==0.24.0`, which is scheduled for next week)
That's great news. I'll try the patch once it's released, and hopefully it resolves the issue. I'll post an update in this thread after testing it.
Describe the problem
When trying to add COCO format instance segmentation prediction data to my dataset using `add_coco_labels`, the program will begin rapidly using RAM until it eventually runs out of RAM and crashes. This only happens if I set the `coco_id_field` to `coco_id` so that I can sync up my annotations with my samples properly. If I omit the `coco_id_field` and let the function run with the default behavior, my annotations get mismatched, but the program does not eat nearly as much RAM and actually finishes running. This code also produces the same erroneous behavior if I provide `add_coco_labels` with a view containing only the test data split instead of the whole dataset.

Code to reproduce issue
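The original reproduction code did not survive extraction. Below is a minimal sketch of the call pattern described above, assuming a dataset whose samples carry a `coco_id` field and a COCO-format predictions file; the file contents, field names (`predictions`, `coco_id`), and class list are placeholders, not taken from the report:

```python
import json
import tempfile

# Minimal COCO-format predictions payload. Each annotation's "image_id"
# must match the value stored in a sample's `coco_id` field for the
# labels to be attached to the right samples.
predictions = {
    "categories": [{"id": 1, "name": "cat"}],
    "images": [
        {"id": 7, "file_name": "000007.jpg", "width": 640, "height": 480}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 7,  # must match a sample's coco_id
            "category_id": 1,
            "bbox": [10, 20, 100, 150],
            "segmentation": [[10, 20, 110, 20, 110, 170, 10, 170]],
            "score": 0.9,
        }
    ],
}

# Write the predictions to disk, as add_coco_labels() accepts a path
with tempfile.NamedTemporaryFile(
    "w", suffix=".json", delete=False
) as f:
    json.dump(predictions, f)
    pred_path = f.name

# Hedged sketch of the FiftyOne call described in the report. It needs
# a loaded dataset (or a view of the test split), so it is shown here
# as comments rather than executed:
#
#   import fiftyone.utils.coco as fouc
#   fouc.add_coco_labels(
#       dataset,              # or a view of only the test split
#       "predictions",        # label field to populate (placeholder)
#       pred_path,
#       classes=["cat"],
#       label_type="segmentations",
#       coco_id_field="coco_id",  # matching on COCO IDs triggers the leak
#   )
```

With `coco_id_field` omitted, `add_coco_labels` instead matches annotations to samples by order, which is why the report sees mismatched labels but no runaway memory in that mode.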
System information
FiftyOne version (`fiftyone --version`): v0.23.8

Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the FiftyOne codebase?