Closed Ehteshamciitwah closed 2 years ago
Thank you for your awesome work.
While exploring your code, I noticed that you add all the losses (meta loss + base loss + final loss) to get a total loss. I am a little confused about why you use the base loss to update the parameters of the meta learner, since the base loss is independent of the meta learner. Thank you.
Hi, in our earlier version, the two learners were jointly trained with the ensemble module, so all losses were added up.
But now, only the meta learner and the ensemble module need to be updated. The base loss does not affect either of them, since there is no shared part to be optimized.
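The point above can be illustrated with a minimal PyTorch sketch. The two `nn.Linear` stand-ins and the loss names below are hypothetical, not the actual BAM modules; the sketch only shows that when the base learner is frozen and shares no parameters with the meta learner, adding the base loss to the total changes the loss value but contributes no gradients to the meta learner:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two learners; the real BAM modules differ.
base_learner = nn.Linear(8, 4)
meta_learner = nn.Linear(8, 4)

# Freeze the base learner, as in the separated training scheme.
for p in base_learner.parameters():
    p.requires_grad_(False)

x = torch.randn(2, 8)
target = torch.randint(0, 4, (2,))

base_loss = nn.functional.cross_entropy(base_learner(x), target)
meta_loss = nn.functional.cross_entropy(meta_learner(x), target)

# Summing the losses changes the scalar value, not the meta gradients.
total_loss = base_loss + meta_loss
total_loss.backward()

# Gradients flow only into the meta learner; the frozen base learner gets none.
assert all(p.grad is None for p in base_learner.parameters())
assert all(p.grad is not None for p in meta_learner.parameters())
```

In other words, with no shared parameters the base loss term is a constant with respect to the meta learner's weights, so the optimizer step on the meta learner is identical whether or not it is added.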
Regards, Chunbo
Thank you for your quick response.
Does this repository represent your earlier version or the actual BAM implementation? In the code, the total loss is meta + base + final.
If this repository implements BAM (trained separately), do I need to add the base loss to the total loss when training the meta learner in order to reproduce the paper's results? Secondly, although the base learner is not updated during meta training, how does adding the base loss during meta training not affect the total meta loss?
Thank you for your responses; I ask because you are training in two steps.
Hi, I think I understand your concern.
The output of the base learner for the current episode's class is removed when training the meta learner and the ensemble module; please refer to here.
During the meta-testing phase, all output channels are valid.
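A toy sketch of the channel removal described above (names and shapes are illustrative, not the actual BAM code): the base learner outputs one channel per base class, and during meta-training the channel of the episode's target class is dropped so the meta learner cannot rely on the base prediction for that class.

```python
import torch

# Toy logits from a base learner: (batch, base_classes, H, W).
num_base_classes = 5
base_out = torch.randn(1, num_base_classes, 4, 4)

# Suppose this episode's "novel" class also appears in the base class set.
episode_class = 2

# Meta-training: drop the channel for the episode class before ensembling.
keep = [c for c in range(num_base_classes) if c != episode_class]
base_out_masked = base_out[:, keep]

assert base_out_masked.shape == (1, num_base_classes - 1, 4, 4)

# Meta-testing: no channel is removed; base_out is used unchanged.
```

This is why the base loss term in the total loss does not leak information about the episode class into meta training: the corresponding base output is simply not there.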
Thank you for getting my point and explaining it; that really clears up my understanding. It was not explained in the paper, which is why I was confused. Thank you very much.
You're welcome, feel free to contact us~!
Thank you for your continuous guidance.
In the paper, what is the reason for using the PSPNet variant in the base learner and the PFENet variant in the meta learner, given that more powerful meta frameworks (HSNet, VAT) and base frameworks are available? Secondly, why not use GoogLeNet/Inception or other backbones for the encoder? Is there any ablation study? Thank you.
In fact, it has to do with when we embarked on this project: HSNet had only just been proposed, and most work at the time was based on PFENet. As for the base learner, PFENet builds its model largely on PSPNet, which was convenient for us to use.
We believe that stronger base/meta learners lead to more performance gains. Currently, we are conducting related experiments and may subsequently mention it in a more complete version of this work.
Looking forward to your further attention.
Regards,
Looking forward to your next version. Best of luck.
I had an issue with the implementation, but it is now solved, so I am closing the issue.