akshitac8 / OW-DETR

[CVPR 2022] Official PyTorch code for OW-DETR: Open-world Detection Transformer
232 stars · 39 forks

the results of task1 #16

Closed aooating closed 1 year ago

aooating commented 2 years ago

In Task 1, my current-class AP50 was 73.0, which is quite different from the results in the paper, and the U-Recall was only 4.4. I don't know why. Could you upload the trained weight file?

ghost commented 2 years ago

Hey, I actually got very similar results to yours after 45 epochs:

Current class AP50: tensor(71.8500)
Current class Precisions50: 5.193744085759178
Current class Recall50: 87.56551145946715
Known AP50: tensor(71.8500)
Known Precisions50: 5.193744085759178
Known Recall50: 87.56551145946715
Unknown AP50: tensor(0.0407)
Unknown Precisions50: 0.4460528530550413
Unknown Recall50: 3.906291129244538

Did you evaluate after 45 epochs like in the codebase or 50 epochs like in the original publication?

aooating commented 2 years ago

> Hey, I actually got very similar results to yours after 45 epochs:
>
> Current class AP50: tensor(71.8500)
> Current class Precisions50: 5.193744085759178
> Current class Recall50: 87.56551145946715
> Known AP50: tensor(71.8500)
> Known Precisions50: 5.193744085759178
> Known Recall50: 87.56551145946715
> Unknown AP50: tensor(0.0407)
> Unknown Precisions50: 0.4460528530550413
> Unknown Recall50: 3.906291129244538
>
> Did you evaluate after 45 epochs like in the codebase or 50 epochs like in the original publication?

45 epochs

aooating commented 2 years ago

> Hey, I actually got very similar results to yours after 45 epochs:
>
> Current class AP50: tensor(71.8500)
> Current class Precisions50: 5.193744085759178
> Current class Recall50: 87.56551145946715
> Known AP50: tensor(71.8500)
> Known Precisions50: 5.193744085759178
> Known Recall50: 87.56551145946715
> Unknown AP50: tensor(0.0407)
> Unknown Precisions50: 0.4460528530550413
> Unknown Recall50: 3.906291129244538
>
> Did you evaluate after 45 epochs like in the codebase or 50 epochs like in the original publication?

Did you train with the iOD experiment settings? I can't get similar results.

akshitac8 commented 2 years ago

Hello @orrzohar-stanford @aooating, the paper uses 2 open-world splits, and I have provided config files for both splits. Can you specify which split is causing the problem?

aooating commented 2 years ago

> Hello @orrzohar-stanford @aooating, the paper uses 2 open-world splits, and I have provided config files for both splits. Can you specify which split is causing the problem?

/data/OWDETR/VOC2007/ImageSets/t1_train.txt

akshitac8 commented 2 years ago

The results for these splits are reported in Table 6 of the paper -> https://arxiv.org/pdf/2112.01513.pdf

aooating commented 2 years ago

> The results for these splits are reported in Table 6 of the paper -> https://arxiv.org/pdf/2112.01513.pdf

As for the results of the 19+1 setting in Table 2, I only got an mAP of 61. I don't know the reason.

akshitac8 commented 2 years ago

For that, there are a lot of hyperparameter (HP) changes involved. You can experiment with a few, such as changing the learning rate, the number of epochs, the fine-tuning epochs, and the fine-tuning learning rate.
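As a concrete way to organize a sweep over these four knobs, one could enumerate candidate combinations before launching runs. This is purely illustrative: the value grids below are hypothetical placeholders, not the paper's or the repo's settings.

```python
from itertools import product

# Hypothetical value grids for the four knobs mentioned above;
# none of these numbers come from the OW-DETR configs.
lrs = [2e-4, 1e-4]        # base learning rate
epochs = [45, 50]         # training epochs
ft_epochs = [20, 25]      # fine-tuning epochs
ft_lrs = [2e-5, 1e-5]     # fine-tuning learning rate

# Cartesian product of all grids -> one dict per candidate run
configs = [
    {"lr": lr, "epochs": ep, "ft_epochs": fep, "ft_lr": flr}
    for lr, ep, fep, flr in product(lrs, epochs, ft_epochs, ft_lrs)
]
print(len(configs))  # 16 candidate runs
```

In practice one would sample or prune this grid rather than run all combinations, since each run is a full detection training job.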

slcheng97 commented 2 years ago

I wonder whether batch size has a great impact on the experimental results. I used two 3090 Ti GPUs to train the model (which means the total batch size is 4). When I trained for 30 epochs, the evaluation results of the model were as follows: [image] This is far from the results reported in the paper. What may be the reason for this?

slcheng97 commented 2 years ago

> Hey, I actually got very similar results to yours after 45 epochs:
>
> Current class AP50: tensor(71.8500)
> Current class Precisions50: 5.193744085759178
> Current class Recall50: 87.56551145946715
> Known AP50: tensor(71.8500)
> Known Precisions50: 5.193744085759178
> Known Recall50: 87.56551145946715
> Unknown AP50: tensor(0.0407)
> Unknown Precisions50: 0.4460528530550413
> Unknown Recall50: 3.906291129244538
>
> Did you evaluate after 45 epochs like in the codebase or 50 epochs like in the original publication?

Hello, how many GPUs did you use for training? Were they V100s? Could you share the log files of the whole training process?

aooating commented 2 years ago

In the iOD experiment, is the split of the VOC dataset, such as ft.txt, different from random selection?


ghost commented 2 years ago

> > Hey, I actually got very similar results to yours after 45 epochs: Current class AP50: tensor(71.8500) Current class Precisions50: 5.193744085759178 Current class Recall50: 87.56551145946715 Known AP50: tensor(71.8500) Known Precisions50: 5.193744085759178 Known Recall50: 87.56551145946715 Unknown AP50: tensor(0.0407) Unknown Precisions50: 0.4460528530550413 Unknown Recall50: 3.906291129244538 Did you evaluate after 45 epochs like in the codebase or 50 epochs like in the original publication?
>
> Hello, how many GPUs did you use for training? Were they V100s? Could you share the log files of the whole training process?

Hey @chengsilin, I used 8 V100s. I sent the logs to your email

slcheng97 commented 2 years ago

> > Hey, I actually got very similar results to yours after 45 epochs: Current class AP50: tensor(71.8500) Current class Precisions50: 5.193744085759178 Current class Recall50: 87.56551145946715 Known AP50: tensor(71.8500) Known Precisions50: 5.193744085759178 Known Recall50: 87.56551145946715 Unknown AP50: tensor(0.0407) Unknown Precisions50: 0.4460528530550413 Unknown Recall50: 3.906291129244538 Did you evaluate after 45 epochs like in the codebase or 50 epochs like in the original publication?
>
> Hello, how many GPUs did you use for training? Were they V100s? Could you share the log files of the whole training process?
>
> Hey @chengsilin, I used 8 V100s. I sent the logs to your email

Thanks for your reply @orrzohar-stanford. I still have a question: is it true that you use the create_imagenets_t1.py file to generate the test samples for task 1 during the test stage, instead of directly using the test.txt file provided by the codebase?

ngthanhtin commented 2 years ago

> I wonder whether batch size has a great impact on the experimental results. I used two 3090 Ti GPUs to train the model (which means the total batch size is 4). When I trained for 30 epochs, the evaluation results of the model were as follows: [image] This is far from the results reported in the paper. What may be the reason for this?

Hi @chengsilin, could you share your 30-epoch pretrained model? This would really help. Thanks in advance.

ngthanhtin commented 2 years ago

Hi @chengsilin, @akshitac8, I am Tin, a CS student. I really appreciate your research and training work, which is very helpful to the AI community. For now, I really need a pre-trained model for my research, because your work is one of the few models that can produce unknown boxes. So it would be really kind of you to share it with me. My email is ngthanhtinqn@gmail.com if you want to share it privately.

Best regards, Tin

akshitac8 commented 1 year ago

Hello, I have uploaded the weights for the code in the repo. Please let me know if the weights are still a problem, as I recently changed countries and have limited access to my old machines. When the number of GPUs is changed, please make sure to change the other hyperparameters accordingly; if they are not scaled properly, the results can be very bad.
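The GPU-count caveat above matches the common linear-scaling heuristic for distributed training (Goyal et al., 2017): keep the learning rate proportional to the effective batch size. A minimal sketch, assuming a reference setup of 8 GPUs with 2 images per GPU and a base LR of 2e-4; these numbers are illustrative, not necessarily this repo's defaults.

```python
def scale_lr(base_lr: float, base_gpus: int, gpus: int, per_gpu_batch: int = 2) -> float:
    """Scale the learning rate linearly with the effective batch size.

    Linear-scaling heuristic (Goyal et al., 2017); the reference
    values used in the example below are illustrative only.
    """
    base_effective = base_gpus * per_gpu_batch  # reference effective batch size
    effective = gpus * per_gpu_batch            # new effective batch size
    return base_lr * effective / base_effective

# Moving from 8 GPUs to 2 GPUs at the same per-GPU batch size:
print(scale_lr(2e-4, base_gpus=8, gpus=2))  # 5e-05
```

The same proportional adjustment is often applied to any fine-tuning learning rate; a warmup phase is usually recommended when scaling the rate up rather than down.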

ngthanhtin commented 1 year ago

Thanks @akshitac8 for providing us with the weights for the code. Now I can use them for my own research! 😄

Best regards, Tin