I actually trained using their scripts but could not recreate their results. For T1 (after 50 epochs):
Prev class AP50: tensor(43.3941)
Prev class Precisions50: 5.411373414254498
Prev class Recall50: 71.44982028560455
Current class AP50: tensor(22.2871)
Current class Precisions50: 1.8076766423054116
Current class Recall50: 57.50498649613736
Known AP50: tensor(32.3129)
Known Precisions50: 3.519432608981228
Known Recall50: 64.12878254613427
Unknown AP50: tensor(0.0863)
Unknown Precisions50: 0.9169071669071669
Unknown Recall50: 7.730652247380871
And one thing to note is that they continue numbering the epochs across tasks, so the second task resumes at epoch 50 and trains for an additional 50 epochs.
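To illustrate (this is only a sketch; the flag names below follow the usual Deformable-DETR style and are assumptions, not necessarily the repo's exact CLI), launching the tasks with a cumulative epoch count and a resume checkpoint means the second run only adds 50 epochs:

```bash
# Hypothetical sketch of chained task training with cumulative epoch numbering.
# --epochs, --resume, and --output_dir are assumed arguments, not verified against the repo.

# Task 1: epochs 0-49
python -u main_open_world.py \
    --output_dir exps/OWOD_t1 \
    --epochs 50

# Task 2: resumes from the Task 1 checkpoint (epoch 50) and trains 50 more epochs
python -u main_open_world.py \
    --output_dir exps/OWOD_t2 \
    --resume exps/OWOD_t1/checkpoint0049.pth \
    --epochs 100
```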
Yeah, I did notice that they continue numbering the epochs across tasks. But even in that case, their scripts are apparently different from what is described in the paper.
Hello @luckychay @orrzohar-stanford, The paper uses two open-world splits, and I have updated the repo with the configs for both splits. Can you please let me know which split config is causing the problem?
Dear authors, Thank you for responding! I have been using the newly proposed data splits. After training the model for 40 epochs + 10 fine-tuning epochs, I am getting results closer to what was reported, but still a little off. I am not sure why; I used the unmodified bash scripts provided (trained on a similar machine with 8 V100 GPUs, etc.). Any reason you can think of?
| Task IDs | Task 1 | |
| -- | -- | -- |
| | U-Recall | mAP |
| ORE-EBUI | 1.5 | 61.4 |
| Ours: OW-DETR | 5.7 | 71.5 |
| Original codebase | 3.9 | 71.85 |
| Amended (40+10) | 5.05 | 71.9 |
Dear Author,
In the paper, I see that every task is trained for 50 epochs and fine-tuned for 20 epochs. However, in `configs/OWOD_new_split.sh`, the training schedule follows a different setting, as highlighted by the red boxes in the attached screenshot. Is there anything I missed? Looking forward to your reply. Thanks.
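For anyone cross-checking, one quick way to surface every epoch-related setting in that config (assuming the schedule is expressed through epoch-named variables or arguments, which I have not verified) is:

```bash
# List every line of the config mentioning an epoch or fine-tuning setting, with line
# numbers, so it can be compared against the paper's stated 50 + 20 schedule.
grep -niE "epoch|finetune" configs/OWOD_new_split.sh
```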