Closed: Zzsf11 closed this issue 3 months ago
Hi, have you tried to reproduce PLOP? How is its performance?
Yes, but I have only tried it on my own dataset. It seems to have a similar problem.
If PLOP does not perform well on this dataset, then EWF may also have difficulty achieving good results on new classes, because EWF is used in conjunction with distillation (EWF focuses more on alleviating forgetting; its improvement on new classes shows up mainly in multi-step fine-tuning, i.e. less forgetting in the last few stages after the new class has been trained). My suggestion is to try MiB, which imposes a more relaxed constraint than PLOP. In addition, you can try adjusting EWF's weighting coefficient alpha used for averaging, but this assumes you can learn the new classes in the first place. A rough illustration of that weighted averaging is sketched below.
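As a rough illustration of what adjusting alpha means, here is a minimal sketch of weighted parameter averaging between the old and new checkpoints. It assumes the fusion is a plain convex combination of corresponding parameters; the actual EWF code may weight parameters differently, and `old_model`, `new_model`, and `alpha` are placeholder names.

```python
# Minimal sketch of weighted parameter fusion, assuming a plain convex
# combination of old and new weights (the real EWF implementation may differ).
import copy
import torch

def fuse_models(old_model, new_model, alpha=0.5):
    """Return a model whose parameters are alpha * old + (1 - alpha) * new."""
    fused = copy.deepcopy(new_model)
    with torch.no_grad():
        for (_, p_fused), p_old, p_new in zip(
            fused.named_parameters(),
            old_model.parameters(),
            new_model.parameters(),
        ):
            # Larger alpha keeps more of the old weights (less forgetting);
            # smaller alpha favors the newly learned classes.
            p_fused.copy_(alpha * p_old + (1.0 - alpha) * p_new)
    return fused
```

If the new classes are barely learned at all, raising alpha will only make them worse, which is why learning the new classes first is the precondition mentioned above.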
Also, there is a bug that may cause performance differences. You can try applying this fix; I hope it helps: https://github.com/zhangchbin/RCIL/commit/83ec4191b2056b314b5409b701e2e8a0ae0709f6
Thanks for your explanation! I have tried MiB on my dataset: the result is ...... (quite bad). When I visualize the incremental classes, I notice that the primary issue is that correctly shaped masks are assigned to the wrong classes. It seems that some incremental classes are very similar to the base classes, which could be causing the poor results. So I think methods that strengthen the classification ability might help here; a small script I use to quantify this confusion is sketched below.
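To make the "right mask, wrong class" observation concrete, here is a small sketch for counting how often pixels of an incremental class get predicted as a similar base class. It assumes `preds` and `targets` are per-pixel label maps as NumPy arrays and that the class indices are placeholders for your own dataset split.

```python
# Sketch: per-class confusion counts, assuming preds/targets are integer
# label maps of the same shape and 255 is the ignore label.
import numpy as np

def class_confusion(preds, targets, num_classes):
    """Row r, column c: number of pixels of true class r predicted as class c."""
    valid = targets < num_classes              # drop ignore labels such as 255
    idx = num_classes * targets[valid].astype(int) + preds[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

# Example (hypothetical indices): which classes absorb the pixels of
# incremental class 16 in a 21-class setting?
# conf = class_confusion(pred_map, gt_map, num_classes=21)
# print(conf[16].argsort()[::-1][:3])   # top-3 predicted classes for true class 16
```

If the top confusions are indeed base classes, that supports the idea that the remaining problem is discrimination between similar classes rather than mask quality.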
So it seems that MiB is good at recognizing new classes? From what I see, the main reason for its poor performance is the forgetting problem. Can you try MiB + EWF? That may help a little, but I can't guarantee it.
Hi, I find this work both elegant and easy to understand, so I've attempted to apply it to my dataset. However, I've noticed that the results for the incremental classes are close to zero. I also tried to reproduce the VOC results and encountered a similar issue.
I'm quite confused by this and would appreciate any explanations or solutions you might offer.
My dataset result:
VOC result: