sajidahmed12 opened this issue 1 year ago
This is precisely what I want to do as well! :) +1
I don't think it will be hard (all the metadata I need is there to hammer out a reusable script) for my dataset and my use case, but it would be nice to click a button and have it process the video for me into a popular format :)
Hi, I also want this feature. I'd rather not rely on external tools to solve this small problem. Here are some issues discussing the same problem:
https://github.com/heartexlabs/label-studio/issues/3405
https://github.com/deepinvalue/video_annotations_to_yolo
However, I found this "enabled" parameter in the official documentation. Does this mean Label Studio automatically interpolates video annotations?
"Whether the consequent frames interpolation is toggled on / off (for example, to label occlusion)"
@shure-dev Basically, yes. When you label something, it will interpolate to the end of the video automatically. You can then click the enable/disable button on a keyframe to turn interpolation off.
In the output, the enabled attribute is telling us, "Hey, between this keyframe and the next, the labeler intended the bounding box to be interpolated." So as the person consuming this label output, you'll know what to do.
When it is enabled: false, it means that this one frame has this bounding box and that is all (i.e. when something pops off the screen and back on, you wouldn't want interpolation for that).
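To make that concrete, here is a minimal sketch (my own code, not anything shipped by Label Studio) of how a consumer of the export could expand keyframes into per-frame boxes with linear interpolation. It assumes keyframe dicts shaped like the entries in a videorectangle result's value["sequence"]: frame, x, y, width, height, enabled.

```python
# Sketch: expand Label Studio video keyframes into one box per frame,
# honoring the enabled flag described above. Assumed keyframe shape:
# {"frame": int, "x": float, "y": float, "width": float, "height": float,
#  "enabled": bool}
def expand_keyframes(sequence):
    box_fields = ("x", "y", "width", "height")
    keys = sorted(sequence, key=lambda k: k["frame"])
    frames = {}
    for a, b in zip(keys, keys[1:]):
        frames[a["frame"]] = {f: a[f] for f in box_fields}
        if not a["enabled"]:
            # enabled: false -> the object popped off screen; no boxes
            # between this keyframe and the next one.
            continue
        span = b["frame"] - a["frame"]
        for i in range(1, span):
            t = i / span  # linear interpolation factor
            frames[a["frame"] + i] = {
                f: a[f] + t * (b[f] - a[f]) for f in box_fields
            }
    last = keys[-1]
    frames[last["frame"]] = {f: last[f] for f in box_fields}
    # If the last keyframe is still enabled, the box persists unchanged to
    # the end of the video; extending it is left to the caller, since the
    # total frame count isn't part of the sequence itself.
    return frames
```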
+1
@KeithHanson
Hi, thank you for your comment.
Could you tell me how to use YOLO with Label Studio without massive effort? I tried to search for information on the Internet, but I couldn't find a practical and specific way to annotate a video and use it for YOLO. Should we use the ML backend feature? It seems complicated and heavy. https://labelstud.io/guide/ml.html#Quickstart-with-an-example-ML-backend
Can we annotate in Label Studio with a custom ML model easily, without the ML backend feature?
I think there must be an easy way to use YOLO with Label Studio for video data.
On the other hand, CVAT has an automatic annotation feature where we can just upload an ML model: https://opencv.github.io/cvat/docs/manual/advanced/automatic-annotation/
Should we use CVAT?
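For what it's worth, the ML backend the linked guide describes boils down to a small Python class. Here is a minimal sketch of wrapping a YOLO model as a pre-annotation backend, loosely based on the quickstart pattern in that guide. The ultralytics package, the "yolov8n.pt" weights, and the "label"/"image" names (which must match the from_name/to_name in your labeling config) are all assumptions, and get_local_path is the helper recent label-studio-ml versions expose for resolving task file paths:

```python
# Hedged sketch, not an official integration: a Label Studio ML backend
# that pre-annotates images with a YOLO model.
from label_studio_ml.model import LabelStudioMLBase
from ultralytics import YOLO  # assumption: using the ultralytics package

model = YOLO("yolov8n.pt")  # any pretrained or custom weights

class YOLOBackend(LabelStudioMLBase):
    def predict(self, tasks, **kwargs):
        predictions = []
        for task in tasks:
            image_path = self.get_local_path(task["data"]["image"])
            result = model(image_path)[0]
            regions = []
            for xyxyn, cls, conf in zip(result.boxes.xyxyn.tolist(),
                                        result.boxes.cls.tolist(),
                                        result.boxes.conf.tolist()):
                x1, y1, x2, y2 = xyxyn  # normalized corner coordinates
                regions.append({
                    "from_name": "label",  # must match your labeling config
                    "to_name": "image",
                    "type": "rectanglelabels",
                    "value": {
                        # Label Studio expects percentages of the image size
                        "x": x1 * 100,
                        "y": y1 * 100,
                        "width": (x2 - x1) * 100,
                        "height": (y2 - y1) * 100,
                        "rectanglelabels": [result.names[int(cls)]],
                    },
                    "score": conf,
                })
            predictions.append({"result": regions})
        return predictions
```

Note this produces percent-based rectanglelabels regions for images; video pre-annotations would need videorectangle results with a keyframe sequence instead, which this sketch doesn't cover.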
This is my exact problem too. Thinking about using CVAT anyway.
+1
+1
Similar issue https://github.com/HumanSignal/label-studio/issues/3405
I was trying to annotate bounding boxes in a video and export the annotations into YOLO format {images, labels.txt}.
Describe the solution you'd like
I would like to annotate videos and draw bounding boxes similar to what we do with images, and export the data into YOLO format, and possibly COCO format as mentioned earlier.
Added a picture to showcase the current scenario of Label Studio's video annotation data export.
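Until that lands, a minimal conversion sketch, assuming per-frame boxes like those produced by the interpolation snippet earlier in the thread. Label Studio's x, y are the top-left corner in percent, while YOLO wants normalized center coordinates; extracting the matching frame images from the video (e.g. with ffmpeg or OpenCV) is left out:

```python
import os

# Hedged sketch: write one YOLO .txt label file per frame from
# {frame_number: {"x", "y", "width", "height"}} boxes in percent units.
# Filenames must match however you name the extracted frame images.
def write_yolo_labels(frames, class_id, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for frame, box in sorted(frames.items()):
        x_center = (box["x"] + box["width"] / 2) / 100
        y_center = (box["y"] + box["height"] / 2) / 100
        width = box["width"] / 100
        height = box["height"] / 100
        # Append so multiple objects per frame share one label file.
        with open(os.path.join(out_dir, f"frame_{frame:06d}.txt"), "a") as fh:
            fh.write(f"{class_id} {x_center:.6f} {y_center:.6f} "
                     f"{width:.6f} {height:.6f}\n")
```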