fredzzhang / pvic

[ICCV'23] Official PyTorch implementation for paper "Exploring Predicate Visual Context in Detecting Human-Object Interactions"
BSD 3-Clause "New" or "Revised" License

data set #61

Open LUYUuuum opened 1 day ago

LUYUuuum commented 1 day ago

Hi, when I want to use my own data instead of V-COCO for training, where should I change the code first? I can't find all the required changes by myself. Looking forward to your reply!

LUYUuuum commented 1 day ago

[Screenshot attached: 2024-11-21 145446]

fredzzhang commented 6 hours ago

Hi @LUYUuuum,

You need to implement your own dataset class, similar to this one. The __getitem__ method is probably the most important one there. You need to make sure your dataset returns the data in the same format. In addition to this, you need to add a correspondence table between object classes and the target HOI classes, similar to this.
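In case it helps, here is a minimal sketch of what such a dataset class might look like. The target keys (boxes_h, boxes_o, object, labels), the annotation file layout, and the object_to_target table below are assumptions for illustration only; check the linked code for the actual format the pipeline expects.

```python
import json
import os

import torch
from PIL import Image
from torch.utils.data import Dataset

class CustomHOIDataset(Dataset):
    """Sketch of a custom HOI dataset; the target format is assumed, not authoritative."""
    def __init__(self, root, anno_file, transforms=None):
        self.root = root
        self.transforms = transforms
        with open(anno_file) as f:
            self.annotations = json.load(f)

        # Assumed: a table mapping each object class index to the list of
        # interaction classes it can participate in, analogous to the
        # object-to-verb/action correspondence in the existing datasets.
        self.object_to_target = [
            [0, 3],  # hypothetical: object class 0 supports interactions 0 and 3
            [1, 2],  # hypothetical: object class 1 supports interactions 1 and 2
        ]

    def __len__(self):
        return len(self.annotations)

    def __getitem__(self, i):
        anno = self.annotations[i]
        image = Image.open(
            os.path.join(self.root, anno['file_name'])
        ).convert('RGB')

        # Assumed target format: one entry per human-object pair, with
        # boxes as (x1, y1, x2, y2) in pixel coordinates.
        target = {
            'boxes_h': torch.as_tensor(anno['boxes_h'], dtype=torch.float32),  # (N, 4) human boxes
            'boxes_o': torch.as_tensor(anno['boxes_o'], dtype=torch.float32),  # (N, 4) object boxes
            'object': torch.as_tensor(anno['object'], dtype=torch.int64),      # (N,) object class per pair
            'labels': torch.as_tensor(anno['labels'], dtype=torch.int64),      # (N,) interaction class per pair
        }

        if self.transforms is not None:
            image, target = self.transforms(image, target)
        return image, target
```

The key point is that __getitem__ returns an (image, target) pair whose target keys and tensor shapes match what the existing V-COCO dataset produces, so the rest of the training loop works unchanged.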

I think that should be it. You should be able to find the other details in the codebase as well.

Cheers, Fred.

LUYUuuum commented 4 hours ago

Hi, I'm glad to receive your reply, but I still can't read the annotations after the modification. I would like to know whether the model has a minimum requirement on image resolution or dataset size?

WARNING: Collected results are empty. Return zero AP for class 0.
WARNING: Collected results are empty. Return zero AP for class 1.
WARNING: Collected results are empty. Return zero AP for class 2.
Epoch 2 => mAP: 0.0000.

Looking forward to your answer.