Open knightyxp opened 3 years ago
@knightyxp In the ScalePyramidModule class, defined in M-SFANet.py, self.can = ContextualModule(512, 512) is the CANet branch. If the CAN module is removed, the reported performance is 62.41 (MAE) on SHA and 7.40 (MAE) on SHB. Please make sure to include the module in your forward pass as well.
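The ablation described above (M-SFANet w/o CAN) amounts to dropping self.can from both __init__ and forward. A minimal sketch of that toggle, with stand-in layers (the real ContextualModule does scale-aware contextual pooling and the real ASPP has multiple dilated branches, so treat these as placeholders):

```python
import torch
import torch.nn as nn

class ContextualModule(nn.Module):
    """Stand-in for the CAN branch; the real module performs
    scale-aware contextual pooling, not a single 1x1 conv."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)

class ScalePyramidModule(nn.Module):
    """Sketch: use_can=False reproduces the 'M-SFANet w/o CAN' ablation."""
    def __init__(self, use_can=True):
        super().__init__()
        self.can = ContextualModule(512, 512) if use_can else nn.Identity()
        # stand-in for the ASPP branch
        self.aspp = nn.Conv2d(512, 512, kernel_size=3, padding=1)

    def forward(self, x):
        # the ablation must skip self.can here as well, not only in __init__
        return self.aspp(self.can(x))
```

Both variants keep the same input/output shapes; only the parameter count (and the contextual features) change, which is why forgetting to remove the branch from forward silently changes what is being ablated.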
Does this mean that simply adding CAN to SFANet is not suitable?
What's more, in your experiment, M-SFANet w/o CAN (i.e., just SFANet) gets 62.41 MAE on SHA. However, SFANet on SHA can reach 59 in my experiment (60 as reported), so I do not know how you got this result.
@knightyxp I see. M-SFANet w/o CAN means deducting self.can in ScalePyramidModule. Can I have your training code? It's hard to notice the difference in implementation.
It is the same as train.py in SFANet.
Thank you. I have looked through your code and spotted some inconsistencies in your implementation:
(1) I did not use the SSM loss.
(2) I did not use Adam. In the SHA experiment, I used LookaheadAdam(model.parameters(), lr=5e-4) (see ./models).
(3) Please train for up to 1000 epochs, not 500, because the M-SFANet model is more complex in terms of #params.
(4) The reproduced weights of M_SFANet (SHA) with MAE=59.69 and MSE=95.64 are provided via the Google Drive link, so you can check the saved epoch and the optimizer's state_dict.
P.S. I cannot check your preprocessing code, which is also important.
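LookaheadAdam in ./models presumably combines Adam with the Lookahead update rule (Zhang et al., 2019): every k inner-optimizer steps, the slow weights move a fraction alpha toward the fast weights, and the fast weights are reset to them. A framework-free sketch of just that outer update (the class name, alpha=0.5, k=5, and list-of-floats parameters are illustrative assumptions, not the repo's implementation):

```python
class Lookahead:
    """Sketch of the Lookahead outer loop: after every `k` inner
    (e.g. Adam) steps, interpolate the slow weights toward the fast
    weights and reset the fast weights to the result."""

    def __init__(self, params, alpha=0.5, k=5):
        self.slow = list(params)  # slow (outer-loop) weights
        self.alpha = alpha
        self.k = k
        self.counter = 0

    def step(self, fast):
        """Call after each inner-optimizer step with the current fast
        weights; returns the fast weights, possibly synchronized."""
        self.counter += 1
        if self.counter % self.k == 0:
            self.slow = [s + self.alpha * (f - s)
                         for s, f in zip(self.slow, fast)]
            fast = list(self.slow)  # reset fast weights to slow weights
        return fast
```

For example, with alpha=0.5, k=5, and a fast weight that grows by 1.0 per inner step from 0.0, the first synchronization lands both weight copies at 2.5.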
I cannot load your pretrained .pth file; the error is like this:
Traceback (most recent call last):
File "test.py", line 40, in
@knightyxp The weights are stored under the "model" key, i.e. torch.load(model_path)["model"]. Like this one.
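Since the checkpoint is a dict with the state_dict under its "model" key, loading would look roughly like this (load_msfanet_weights is a hypothetical helper name; the extra "epoch"/optimizer entries are assumptions based on the earlier comment about checking the saved epoch and optimizer state_dict):

```python
import torch

def load_msfanet_weights(model, model_path):
    # The checkpoint is a dict: the weights live under "model",
    # alongside entries such as the saved epoch and optimizer state.
    checkpoint = torch.load(model_path, map_location="cpu")
    model.load_state_dict(checkpoint["model"])
    return checkpoint
```

Calling model.load_state_dict(torch.load(model_path)) directly on such a checkpoint raises a key-mismatch error, which matches the truncated traceback above.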
The published code does not have the CANet branch, which is inconsistent with what the paper reports. The SFANet baseline is 59 according to my experiments; however, when I add ASPP (i.e., use the M-SFANet model built from the author's code), the SHA MAE is only 61, which is ridiculous. The results in the paper cannot be reproduced (I do not know whether the ICPR reviewers are aware of this).