TACJu / TransFG

This is the official PyTorch implementation of the paper "TransFG: A Transformer Architecture for Fine-grained Recognition" (Ju He, Jie-Neng Chen, Shuai Liu, Adam Kortylewski, Cheng Yang, Yutong Bai, Changhu Wang, Alan Yuille).

patch embeddings always 0? #3

Closed: dcastf01 closed this issue 3 years ago

dcastf01 commented 3 years ago

I was reading the paper and checking the code, and I can't see where the position embeddings are ever assigned values. While debugging, I only see that this part creates a zero tensor, which is then simply added in forward. At what point do the position embeddings get their values?

Line 157 (https://github.com/TACJu/TransFG/blob/master/models/modeling.py#L157): `self.position_embeddings = nn.Parameter(torch.zeros(1, n_patches+1, config.hidden_size))`

Line 173: `embeddings = x + self.position_embeddings`
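
For context: the zero tensor is wrapped in `nn.Parameter`, which registers it as a learnable weight, so its values are never assigned explicitly; they are filled in by the optimizer during training. A minimal sketch (toy shapes and a toy loss, not the actual TransFG training loop) showing that a zero-initialized parameter becomes nonzero after a single gradient step:

```python
import torch
import torch.nn as nn

# Toy dimensions standing in for the real patch grid and hidden size.
n_patches, hidden_size = 4, 8

# Same construction as in modeling.py: a learnable tensor initialized to zeros.
position_embeddings = nn.Parameter(torch.zeros(1, n_patches + 1, hidden_size))

optimizer = torch.optim.SGD([position_embeddings], lr=0.1)

# Dummy patch embeddings standing in for the output of the patch projection.
x = torch.randn(1, n_patches + 1, hidden_size)

# One training step with a toy loss: gradients flow into the parameter,
# so the optimizer updates it away from zero.
loss = (x + position_embeddings).pow(2).mean()
loss.backward()
optimizer.step()

print(position_embeddings.abs().sum().item())  # nonzero after the update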

dcastf01 commented 3 years ago

Sorry, this question should go to https://github.com/jeonsworld/ViT-pytorch instead.

Thanks for your effort, and sorry for the noise.