Closed — jikerWRN closed this issue 3 years ago
Hello, @jikerWRN. Thank you for your interest in my work.
There was a small confusion during development regarding the offsets for confidences. After releasing the pre-trained models, I fixed the issue in the code but did not update the pre-trained models. Thus I added the --legacy flag to support the pre-trained models with the fixed code.
(I think you mean \gamma.) I tested various gamma values (> 0) in the supplementary material (available on the ECVA website) and did not run into any training problems. If you want, you can add a constraint yourself here: https://github.com/zzangjinsun/NLSPN_ECCV20/blob/ba33fa5d9ea62ca970026a145ab18fab76d79d4a/src/model/nlspnmodel.py#L88
Thank you for your suggestion. I will add it in the next update.
Thanks for your great work and elegant code! I have a few questions about the code.
In the Affinity Normalization section of the paper, the parameter \Lambda has a range (\Lambda_min <= \Lambda <= \Lambda_max), but in the code it is only set as a learnable parameter; I don't see any constraint on it. Does this have any effect on the results?
In the test part of the code, I think `with torch.no_grad():` should be added to prevent excessive memory usage from gradient tracking.
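For context, the point above can be sketched as follows. Without `torch.no_grad()`, autograd records the computation graph for every forward pass, so activations accumulate and memory grows during evaluation; the model and data here are placeholders, not from the NLSPN code.

```python
import torch

# Placeholder model standing in for the depth-completion network.
model = torch.nn.Linear(4, 2)
model.eval()

data = torch.randn(8, 4)

# Disabling gradient tracking during testing avoids building the
# autograd graph, which reduces memory usage and speeds up inference.
with torch.no_grad():
    output = model(data)

# Tensors produced under no_grad() carry no gradient history.
assert not output.requires_grad
```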
Looking forward to your reply. Thanks!