tengshaofeng / ResidualAttentionNetwork-pytorch

A PyTorch implementation of Residual Attention Network. This code is based on two projects from

about the code `out_interp = self.interpolation1(out_middle_2r_blocks) + out_down_residual_blocks1` #23

Open · ZachZou-logs opened this issue 5 years ago

ZachZou-logs commented 5 years ago

Hello, thank you for your code! But I have a question about it: this operation does not seem to appear in the paper, where the soft mask branch only has an addition at the skip connections. Could you help me understand this line?

`out_interp = self.interpolation1(out_middle_2r_blocks) + out_down_residual_blocks1`

tengshaofeng commented 5 years ago

I referred to the Caffe version. You can consider it a trick.
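For anyone reading along: a minimal, self-contained sketch of what the questioned line computes. The tensor shapes below are illustrative assumptions, not values from the repo. Inside the soft mask branch, the deeper, lower-resolution feature map is bilinearly upsampled and added to the feature map saved before downsampling, i.e. a U-Net-style inner skip connection that the paper's figures do not show explicitly:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes only (assumed, not taken from the repo).
out_down_residual_blocks1 = torch.randn(1, 256, 28, 28)  # saved before max-pooling
out_middle_2r_blocks = torch.randn(1, 256, 14, 14)       # deeper, lower resolution

# self.interpolation1 in the repo is bilinear upsampling back to the
# pre-pooling size; the addition then fuses the two feature maps.
out_interp = F.interpolate(
    out_middle_2r_blocks, size=(28, 28), mode='bilinear', align_corners=True
) + out_down_residual_blocks1

print(out_interp.shape)  # torch.Size([1, 256, 28, 28])
```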

daijiahai commented 4 years ago

Hello dude, I have the same question now. Have you solved it yet? If so, could you share your experience with me?

jorie-peng commented 3 years ago

> Hello dude, I have the same question now. Have you solved it yet? If so, could you share your experience with me?

I changed to another model, and it works.

jorie-peng commented 3 years ago

> I referred to the Caffe version. You can consider it a trick.

Hi, thanks for your code. When I use `ResidualAttentionModel_92_32input_update`, I hit the problem described in this issue because of `AttentionModule_stage1_cifar`, but when I change the model to `ResidualAttentionModel_92` it works. However, I cannot load the given pretrained model because of many mismatches. Do you have a good way to load the given model, or is there any other pretrained model I can use? Thanks!
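In case it helps others hitting the same mismatch: a generic workaround sketch (not code from this repo; the helper name and checkpoint filename are assumptions) that loads only the checkpoint tensors whose key and shape match the current model, leaving mismatched layers at their fresh initialization:

```python
import torch

def load_partial_state_dict(model, checkpoint_path):
    """Copy over only the tensors whose key and shape match `model`."""
    checkpoint = torch.load(checkpoint_path, map_location='cpu')
    # Some checkpoints wrap the weights, e.g. {'state_dict': ...}.
    if isinstance(checkpoint, dict) and 'state_dict' in checkpoint:
        checkpoint = checkpoint['state_dict']
    model_state = model.state_dict()
    compatible = {k: v for k, v in checkpoint.items()
                  if k in model_state and v.shape == model_state[k].shape}
    model_state.update(compatible)
    model.load_state_dict(model_state)
    print(f'loaded {len(compatible)}/{len(model_state)} tensors')

# Hypothetical usage; the checkpoint filename is an assumption:
# load_partial_state_dict(ResidualAttentionModel_92(), 'model_92_sgd.pkl')
```

Layers skipped this way stay randomly initialized, so fine-tuning for a few epochs is usually needed afterwards.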