mkara44 / unetr_pytorch

MIT License

The placement of the Normalization Layer. #1

Open VdaylightV opened 2 months ago

VdaylightV commented 2 months ago

Hello. I find that the implementation of TransformerEncoderBlock in unetr_pytorch/utils/transformer.py differs from the paper "UNETR: Transformers for 3D Medical Image Segmentation". Specifically, UNETR first applies normalization and then attention or the MLP (pre-norm), whereas in this repo the attention or MLP is applied before the normalization layer (post-norm).

Will this difference influence the performance of the model?
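
For reference, the two orderings look like this (a minimal PyTorch sketch; `attn`, `mlp`, `norm1`, `norm2` and the dimensions are illustrative, not the names used in this repo):

```python
import torch
import torch.nn as nn

dim, heads, tokens = 64, 4, 16
attn = nn.MultiheadAttention(dim, heads, batch_first=True)
mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
norm1, norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
x = torch.randn(2, tokens, dim)  # (batch, tokens, dim)

# Pre-norm, as in the UNETR paper: normalize first, then the sublayer, then add the residual.
h = norm1(x)
pre = x + attn(h, h, h, need_weights=False)[0]
pre = pre + mlp(norm2(pre))

# Post-norm, as this repo appears to do: sublayer first, add the residual, then normalize.
post = norm1(x + attn(x, x, x, need_weights=False)[0])
post = norm2(post + mlp(post))
```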

princerice commented 2 months ago

> Hello. I find that the implementation of TransformerEncoderBlock in unetr_pytorch/utils/transformer.py differs from the paper "UNETR: Transformers for 3D Medical Image Segmentation". Specifically, UNETR first applies normalization and then attention or the MLP (pre-norm), whereas in this repo the attention or MLP is applied before the normalization layer (post-norm).
>
> Will this difference influence the performance of the model?

Hello, I would like to ask whether anyone has successfully reproduced this code, and how well it performs.

mkara44 commented 2 months ago

Hi guys, this is an old repo and it is not maintained right now, so it is hard for me to answer your questions in detail. However, I can say that the repo works successfully. If you think the implementation is wrong in the normalization part, you can easily test it with the correct (pre-norm) ordering; see the sketch below. I will be happy to update my code if it works better.
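
If anyone wants to try that, a pre-norm block might look like this (a minimal sketch only; the class and argument names are illustrative and may not match unetr_pytorch/utils/transformer.py):

```python
import torch
import torch.nn as nn

class PreNormTransformerEncoderBlock(nn.Module):
    """Pre-norm ordering from the UNETR paper:
    z' = z + MSA(LN(z)); z'' = z' + MLP(LN(z'))."""

    def __init__(self, embed_dim: int, num_heads: int, mlp_dim: int, dropout: float = 0.1):
        super().__init__()
        self.norm1 = nn.LayerNorm(embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, dropout=dropout, batch_first=True)
        self.norm2 = nn.LayerNorm(embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, mlp_dim),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(mlp_dim, embed_dim),
            nn.Dropout(dropout),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.norm1(z)
        z = z + self.attn(h, h, h, need_weights=False)[0]  # residual connection around attention
        z = z + self.mlp(self.norm2(z))                    # residual connection around the MLP
        return z
```

Swapping a block like this in for the existing one and comparing scores on the same split would show whether the ordering matters here in practice; in the literature, pre-norm is generally reported to train more stably, while post-norm can reach similar accuracy with careful warmup.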