Closed Andre1998Shuvam closed 2 years ago
Hi @Andre1998Shuvam,
Thanks for taking an interest in our work.
I processed the short video into individual frames and then ran the model on all of them to get the detected human-object pairs. I didn't release that script because it's very messy and not really designed for general use. If you want to try it yourself, you can adapt inference.py. Note that depending on the actions, you'll need to adjust the action score threshold to make sure there are neither too many nor too few detections. Afterwards, you should be able to find some free software to generate a .gif file from the images.
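A minimal sketch of the two pieces described above, assuming Pillow is available. The function names, the `{"score": ...}` dictionary shape, and the threshold value are all illustrative placeholders, not the unreleased script; the actual per-frame detections would come from whatever inference.py returns.

```python
# Sketch only -- the author's per-frame script was not released.
from PIL import Image


def keep_confident(pairs, score_thresh=0.2):
    """Filter detected human-object pairs by action score.

    The threshold usually needs tuning per action so that neither
    too many nor too few detections survive (hypothetical format:
    each pair is a dict with a "score" key).
    """
    return [p for p in pairs if p["score"] >= score_thresh]


def frames_to_gif(frames, out_path, fps=10):
    """Assemble a list of annotated PIL images into a .gif file."""
    frames[0].save(
        out_path,
        save_all=True,
        append_images=frames[1:],
        duration=int(1000 / fps),  # per-frame duration in ms
        loop=0,                    # loop forever
    )
```

This replaces the "free software" step with Pillow's built-in GIF writer, which is one of several equally valid options.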
I might implement a general script in the future. But since my primary focus is still on research, I can't say when that will happen.
Fred.
Ok. Thank you! How did you get the name of the objects and write the activity along with the object on top of the bounding boxes? Like "holding a remote"? Thanks again!
It depends on the dataset the model is trained on.
The object types are the same as the 80 objects in MS COCO. For the HICO-DET model, there are 117 action types. The names of the objects and actions are accessible in the dataset class HICODet, under HICODet.objects and HICODet.verbs.
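The lookup itself is just list indexing. A toy sketch, with stand-in lists in place of the real `HICODet.objects` (80 entries) and `HICODet.verbs` (117 entries); the caption format is illustrative:

```python
# Stand-in lists for illustration only -- the real ones come from
# the HICODet dataset class (HICODet.objects / HICODet.verbs).
objects = ["person", "remote", "cup"]
verbs = ["hold", "point", "sit_on"]


def label_for(verb_idx, object_idx):
    """Build a caption such as 'hold a remote' from class indices."""
    return f"{verbs[verb_idx]} a {objects[object_idx]}"
```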
Fred.
Ok. Is there any way to find out which box is indicating which object?
I'm not sure what you mean. Each bounding box has its own coordinates and an object class. After overlaying the box onto an image, it should be clear which object it corresponds to.
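Overlaying a box and its caption can be done with Pillow's ImageDraw; a minimal sketch, assuming `(x1, y1, x2, y2)` pixel coordinates and a caption string already built from the class names:

```python
from PIL import Image, ImageDraw


def draw_labeled_box(image, box, caption):
    """Overlay one bounding box (x1, y1, x2, y2) and its caption."""
    draw = ImageDraw.Draw(image)
    draw.rectangle(box, outline="red", width=2)
    # Place the caption just above the box, clamped to the image.
    draw.text((box[0], max(0, box[1] - 10)), caption, fill="red")
    return image
```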
Ok. So, each bounding box will have an object class, and that object class will be one of the classes of the HICO-DET dataset, right?
Yes, that's true. And there are 80 object classes in total, same as MS COCO.
Ok. Thank you very much!
Hello! Thank you for this amazing work! I am curious to know how you got the inference results showing the names of the objects and the activities on the demo_friends.gif. Can you please tell how you achieved that? Thanks in advance.