fredzzhang / upt

[CVPR'22] Official PyTorch implementation for paper "Efficient Two-Stage Detection of Human–Object Interactions with a Novel Unary–Pairwise Transformer"
https://fredzzhang.com/unary-pairwise-transformers
BSD 3-Clause "New" or "Revised" License

Generate the results on the friends.gif #37

Closed · Andre1998Shuvam closed this 2 years ago

Andre1998Shuvam commented 2 years ago

Hello! Thank you for this amazing work! I am curious to know how you got the inference results showing the names of the objects and the activities on the demo_friends.gif. Can you please tell how you achieved that? Thanks in advance.

fredzzhang commented 2 years ago

Hi @Andre1998Shuvam,

Thanks for taking an interest in our work.

I processed the short video into individual frames and then ran the model on each of them to get the detected human-object pairs. I didn't release that script because it's very messy and not really designed for general use. If you want to try it yourself, you can adapt inference.py. Note that depending on the actions, you'll need to adjust the action score threshold to make sure there are neither too many nor too few detections. Afterwards, you should be able to find some free software to generate a .gif file from the images.
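A minimal sketch of the thresholding step described above. The detection format here (a list of dicts with a `score` field) is a made-up illustration, not the actual output structure of inference.py, so you'd need to adapt the field names to whatever the model returns:

```python
# Hypothetical detection records; the real output format of the model
# will differ, so adapt the keys accordingly.
def filter_pairs(pairs, score_thresh=0.2):
    """Keep only human-object pairs whose action score clears the threshold."""
    return [p for p in pairs if p["score"] >= score_thresh]

detections = [
    {"verb": "hold", "object": "remote", "score": 0.85},
    {"verb": "sit_on", "object": "couch", "score": 0.10},
]

# Raising or lowering score_thresh controls how many pairs survive.
kept = filter_pairs(detections, score_thresh=0.2)
print(kept)
```

Running this per frame, then drawing the surviving pairs, is essentially the pipeline Fred describes.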

I might implement a general script in the future. But since my primary focus is still on research, I can't say when that will happen.

Fred.

Andre1998Shuvam commented 2 years ago

Ok. Thank you! How did you get the name of the objects and write the activity along with the object on top of the bounding boxes? Like "holding a remote"? Thanks again!

fredzzhang commented 2 years ago

It depends on the dataset the model is trained on.

The object types are the same as the 80 objects in MS COCO. For the HICO-DET model, there are 117 action types. The names of the objects and actions are accessible in the dataset class HICODet under HICODet.objects and HICODet.verbs.
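To illustrate how a label like "holding a remote" can be built from those name lists, here is a sketch with a mock stand-in for the dataset class. It assumes only what is stated above, that `HICODet.objects` and `HICODet.verbs` behave like lists of name strings indexed by class id; the entries and the label-formatting rule are illustrative:

```python
# Mock stand-in for the HICODet dataset class; the real one exposes
# 80 object names and 117 verb names (only a few shown here).
class MockHICODet:
    objects = ["person", "bicycle", "car", "remote"]
    verbs = ["hold", "sit_on", "ride"]

dataset = MockHICODet()

# A detected pair gives an object index and a verb index;
# the display label is assembled from the two name lists.
obj_idx, verb_idx = 3, 0
label = f"{dataset.verbs[verb_idx]}ing a {dataset.objects[obj_idx]}"
print(label)  # holding a remote
```

The naive "+ing" suffixing above is just for the demo; real verb names may need hand-tuned display forms.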

Fred.

Andre1998Shuvam commented 2 years ago

Ok. Is there any way to find out which box indicates which object?

fredzzhang commented 2 years ago

I'm not sure what you mean. Each bounding box has its own coordinates and an object class. After overlaying the box onto an image, it should be clear which object it corresponds to.
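A small sketch of that overlay step using Pillow. The box coordinates and the label text are made-up examples, not model output; in practice you'd draw one rectangle and label per detected pair:

```python
from PIL import Image, ImageDraw

# Blank canvas standing in for a video frame.
img = Image.new("RGB", (320, 240), "white")
draw = ImageDraw.Draw(img)

# Hypothetical box coordinates and label for one detected pair.
x1, y1, x2, y2 = 50, 40, 200, 180
draw.rectangle([x1, y1, x2, y2], outline="red", width=2)
# Put the label just above the top-left corner of the box.
draw.text((x1, max(0, y1 - 12)), "holding a remote", fill="red")

img.save("annotated.png")
```

Since each box carries its own coordinates and class, drawing the class name next to its rectangle makes the correspondence unambiguous.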

Andre1998Shuvam commented 2 years ago

Ok. So, each bounding box will have an object class, and that object class will be one of the classes of the HICO-DET dataset, right?

fredzzhang commented 2 years ago

Yes, that's true. And there are 80 object classes in total, same as MS COCO.

Andre1998Shuvam commented 2 years ago

Ok. Thank you very much!