NVlabs / SegFormer

Official PyTorch implementation of SegFormer
https://arxiv.org/abs/2105.15203

Potential mistake in SegFormer model: `patch_size` argument in SegFormer model not being used. #141

Open jonasdieker opened 9 months ago

jonasdieker commented 9 months ago

Hi there,

First of all, thank you for your work and for providing all the code! I was looking at the following lines in the SegFormer backbone model:

https://github.com/NVlabs/SegFormer/blob/65fa8cfa9b52b6ee7e8897a98705abf8570f9e32/mmseg/models/backbones/mix_transformer.py#L203-L220

I noticed that the `patch_size` argument is not actually used when constructing the `OverlapPatchEmbed` modules.

Instead, patch sizes of `[7, 3, 3, 3]` are hard-coded for the four blocks. While these are of course still smaller than the 16×16 patches in ViT, and thus still lend themselves better to detection and segmentation tasks, the model deviates from the paper, which describes an initial patch size of 4. It also means that classes inheriting from this one do not use the argument at all!
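To make the concern concrete, here is a minimal, hypothetical sketch of the pattern (the class and attribute names below are simplified stand-ins, not the actual repo code): the constructor accepts a `patch_size` argument, but the four embedding stages are built with hard-coded kernel sizes, so the argument is silently ignored.

```python
class OverlapPatchEmbed:
    """Stand-in for the real module: just records its configuration."""
    def __init__(self, patch_size, stride):
        self.patch_size = patch_size
        self.stride = stride


class Backbone:
    """Simplified stand-in for the SegFormer backbone constructor."""
    def __init__(self, patch_size=4):
        # `patch_size` is accepted here but never referenced below:
        # the four stages hard-code kernel sizes 7, 3, 3, 3.
        self.patch_embeds = [
            OverlapPatchEmbed(patch_size=7, stride=4),
            OverlapPatchEmbed(patch_size=3, stride=2),
            OverlapPatchEmbed(patch_size=3, stride=2),
            OverlapPatchEmbed(patch_size=3, stride=2),
        ]


model = Backbone(patch_size=4)  # the 4 has no effect
print([p.patch_size for p in model.patch_embeds])  # [7, 3, 3, 3]
```

Any subclass that forwards a different `patch_size` to this constructor would hit the same silent no-op.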

Maybe I am misunderstanding something, so I would be happy if you could shed some light on this potential mistake! Thank you.

hubert10 commented 4 months ago

Hi Jonas, I've observed something similar as well. This should be clarified either in the paper or in the code above!