Hi @tengshaofeng, thanks! I have a question, though: in attention_module.py, the input size of the class AttentionModule_stage0 is 112×112, but in the class AttentionModule_stage1 the input size is 56×56. Is a max-pooling layer used in between? I don't think this is mentioned in the paper.
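For illustration only (this is my assumption, not code from the repo): a stride-2 max-pool between the two stages would account for the halving, since it takes a 112×112 feature map down to 56×56.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a kernel-3, stride-2, padding-1 max-pool halves
# the spatial resolution, so 112x112 in -> 56x56 out.
pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 64, 112, 112)  # (batch, channels, H, W)
y = pool(x)
print(y.shape)  # torch.Size([1, 64, 56, 56])
```

Whether the actual downsampling happens via such a pooling layer or a stride-2 convolution between the stages is exactly what I'm asking about.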