JosephKJ / OWOD

(CVPR 2021 Oral) Open World Object Detection
https://josephkj.in
Apache License 2.0
1.04k stars · 155 forks

[resolved] Can anyone reproduce the results? (my results attached) #26

Closed · ShuoYang-1998 closed this 3 years ago

ShuoYang-1998 commented 3 years ago

Has anyone successfully reproduced the results? I ran the code several times, but the results are far from the author's. My results are attached (image).

JosephKJ commented 3 years ago

Hi @ShuoYang-1998,

I am quite sure it is just a matter of some hyper-parameter that is causing the discrepancy. The current code tip is the one I used to run a lot of ablations towards the end of the paper and for the rebuttal; I just need to find the rogue hyper-parameter that is causing the issue. One other point worth noting from your reported results: ORE is better than your reproduced Faster-RCNN + FT in most cases.

As @salman-h-khan said, I travelled internationally on Saturday. Unfortunately, I tested positive for COVID today. I have developed minor complications and have been admitted to a hospital; I am typing this message from the hospital room. Kindly give me some time to regain my health, if possible.

Thanks, Joseph

ShuoYang-1998 commented 3 years ago

Wishing you well; get better soon.

tmp12316 commented 3 years ago

> Has anyone successfully reproduced the results? I ran the code several times, but the results are far from the author's. My results are attached (image).

Hi Yang,

I have a question about the experiments with EBMs.

Is the validation set used for fitting the EBM distribution using known and unknown labels? I wonder whether the open-set setting degenerates into a few-shot setting after validation, because unknown classes (UUCs) with ground-truth labels shouldn't appear in any stage except testing.

Hrqingqing commented 3 years ago

> Has anyone successfully reproduced the results? I ran the code several times, but the results are far from the author's. My results are attached (image).

Hi Yang, you reproduced the results very quickly, and I would like to ask your advice. I couldn't reach your email address; if it is convenient for you, could you give me some guidance? Thank you, looking forward to your reply.

ShuoYang-1998 commented 3 years ago

> > Has anyone successfully reproduced the results? I ran the code several times, but the results are far from the author's. My results are attached (image).
>
> Hi Yang,
>
> I have a question about the experiments with EBMs.
>
> Is the validation set used for fitting the EBM distribution using known and unknown labels? I wonder whether the open-set setting degenerates into a few-shot setting after validation, because unknown classes (UUCs) with ground-truth labels shouldn't appear in any stage except testing.

The EBUI does use all known and unknown data to learn a distribution, but it doesn't access the labels.

ShuoYang-1998 commented 3 years ago

> > Has anyone successfully reproduced the results? I ran the code several times, but the results are far from the author's. My results are attached (image).
>
> Hi Yang, you reproduced the results very quickly, and I would like to ask your advice. I couldn't reach your email address; if it is convenient for you, could you give me some guidance? Thank you, looking forward to your reply.

I have uploaded my run.sh in issue #18; please refer to it.

tmp12316 commented 3 years ago

> > > Has anyone successfully reproduced the results? I ran the code several times, but the results are far from the author's. My results are attached (image).
> >
> > Hi Yang, I have a question about the experiments with EBMs. Is the validation set used for fitting the EBM distribution using known and unknown labels? I wonder whether the open-set setting degenerates into a few-shot setting after validation, because unknown classes (UUCs) with ground-truth labels shouldn't appear in any stage except testing.
>
> The EBUI does use all known and unknown data to learn a distribution, but it doesn't access the labels.

Hi Yang,

Thank you for your reply. I have checked train_loop.py and modeling/roi_heads/roi_heads.py, and I find that EBUI should have used the unknown annotations, as shown in the code below. The ground-truth labels of the unknown instances are allocated to the region proposals in roi_heads.py and are saved to fit the unknown Weibull distribution. Is that right?

```python
wb_unk = Fit_Weibull_3P(failures=unk, show_probability_plot=False, print_results=False)
```

```python
def compute_energy(self, predictions, proposals):
    gt_classes = torch.cat([p.gt_classes for p in proposals])
    logits = predictions[0]
    data = (logits, gt_classes)
    location = os.path.join(self.energy_save_path, shortuuid.uuid() + '.pkl')
    torch.save(data, location)
```

I also find that EBUI can work alone, which means the unknown labels are not from the ALU but from the ground truth.

(image attached)
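For what it's worth, the scalar that eventually feeds the Weibull fit is presumably a reduction of those saved logits to a single energy value per proposal. A minimal sketch of the usual logsumexp free-energy score follows; the function name, the temperature default, and the example logits are my assumptions, not taken from the repo:

```python
import math

def free_energy(logits, temperature=1.0):
    """Scalar energy E(x) = -T * log(sum_k exp(logit_k / T)).

    A standard energy-based-model score over class logits: a confident
    (dominant) logit gives low energy, flat logits give high energy.
    Uses the max-shift trick for numerical stability.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    return -temperature * (m + math.log(sum(math.exp(s - m) for s in scaled)))

# A confidently classified proposal gets lower energy than an uncertain one:
known_like = free_energy([12.0, 0.5, 0.3])    # one dominant logit
unknown_like = free_energy([1.0, 1.0, 1.0])   # flat logits
assert known_like < unknown_like
```

Per-class energies collected this way could then be passed (as the `failures` argument) to a Weibull fitter such as the `Fit_Weibull_3P` call quoted above.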

tmp12316 commented 3 years ago

> The EBUI does use all known and unknown data to learn a distribution, but it doesn't access the labels.

Hi Yang,

Do you mean the specific labels?

I think that in an open-set setting we shouldn't use unknown samples for training. In this validation step we learn (fit) the distribution and save its parameters, which should be seen as training with some tricks.

For MNIST, we cannot give digits 0-6 their real labels and label the rest "unknown". In OWOD, I am not sure whether using known and unknown labelled samples to fit the distribution is OK. Do you think it is?

ShuoYang-1998 commented 3 years ago

> > The EBUI does use all known and unknown data to learn a distribution, but it doesn't access the labels.
>
> Hi Yang,
>
> Do you mean the specific labels?
>
> I think that in an open-set setting we shouldn't use unknown samples for training. In this validation step we learn (fit) the distribution and save its parameters, which should be seen as training with some tricks.
>
> For MNIST, we cannot give digits 0-6 their real labels and label the rest "unknown". In OWOD, I am not sure whether using known and unknown labelled samples to fit the distribution is OK. Do you think it is?

I have the same concern; some other people also raised this question in https://github.com/JosephKJ/OWOD/issues/16 and https://github.com/JosephKJ/OWOD/issues/8, but the author didn't respond.

LoveIsAGame commented 3 years ago

@JosephKJ @ShuoYang-1998 @wyman123 @Hrqingqing @salman-h-khan What is the order in which I should use the configuration files in the T1-T4 experiments? Looking forward to your reply! (image attached) I hope to get your detailed analysis of the configuration files!

JosephKJ commented 3 years ago

@ShuoYang-1998: Thank you very much for helping others out. I have added replicate.py to replicate results from the pretrained models shared earlier. You can find the binaries and logs here, if you want to verify the authenticity of the results.

Please find my results below:

Replicated Results

@wyman123: We use 4,000 held-out validation data-points for learning the Weibull distribution. This is a tiny fraction compared to the 414,412 training data-points.
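To put those two numbers side by side (a trivial check, just for scale):

```python
# Held-out validation points used for the Weibull fit vs. training points,
# using the counts stated above.
held_out, train = 4_000, 414_412
print(f"{held_out / train:.2%} of the training-set size")  # 0.97% of the training-set size
```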

@LoveIsAGame: Please refer to run.sh.

Regarding my late response: https://github.com/JosephKJ/OWOD/issues/35

salman-h-khan commented 3 years ago

@wyman123: Thanks for the question. We use the validation set to fit the Weibull distribution. The validation set for each task consists of 1k images, hence 4k in total.

Our problem setting demands a sequential supervision model in which unannotated unknown classes are initially observed without labels and are then labelled by the annotator in subsequent tasks.

You can understand this as a transductive mode of supervision for the small held-out validation set, i.e., a small portion of the "unseen" classes' data is available as a bag with a single "unknown" label for the whole collection of instances.
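The "single unknown label for the bag" idea can be sketched roughly as follows; the class names and the helper function are hypothetical illustrations, not code from the repo:

```python
# Hypothetical sketch of the transductive validation labelling described above:
# instances of classes not yet introduced in the current task are collapsed
# to one generic "unknown" label; no per-class unknown annotation is used.
TASK1_KNOWN = {"person", "car", "dog"}   # assumed Task-1 class set
UNKNOWN = "unknown"

def validation_label(gt_class, known=TASK1_KNOWN):
    return gt_class if gt_class in known else UNKNOWN

bag = ["car", "sofa", "dog", "tvmonitor"]
print([validation_label(c) for c in bag])  # ['car', 'unknown', 'dog', 'unknown']
```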

iFighting commented 3 years ago

> @ShuoYang-1998: Thank you very much for helping others out. I have added replicate.py to replicate results from the pretrained models shared earlier. You can find the binaries and logs here, if you want to verify the authenticity of the results.
>
> Please find my results below:
>
> Replicated Results
>
> @wyman123: We use 4,000 held-out validation data-points for learning the Weibull distribution. This is a tiny fraction compared to the 414,412 training data-points.
>
> @LoveIsAGame: Please refer to run.sh.
>
> Regarding my late response: #35

The results from the pretrained models in your figure are not consistent with the results in your paper. In addition, we still cannot reproduce the results with the training scheduler. I think you should take this problem seriously.

JosephKJ commented 3 years ago

> The results from the pretrained models in your figure are not consistent with the results in your paper.

Kindly let me know why. Most of the numbers are in the same ballpark, and some are even better.

> In addition, we still cannot reproduce the results with the training scheduler.

Kindly see #37. I have fixed it now; try again from the latest tip. Thanks.

JosephKJ commented 3 years ago

Closing this issue due to inactivity. @dyabel was able to reproduce mAP and A-OSE with the latest code. Kindly reopen for more discussion.

YujunLiao commented 3 years ago

@ShuoYang-1998 Hi friend! I also tried to reproduce the results but failed; I attached my results in #77. If you successfully reproduced the results, could you help me? Thank you!