This is the official PyTorch implementation of the paper "TransFG: A Transformer Architecture for Fine-grained Recognition" (Ju He, Jie-Neng Chen, Shuai Liu, Adam Kortylewski, Cheng Yang, Yutong Bai, Changhu Wang, Alan Yuille).
I was reading the paper and checking the code, and I can't see where the patch embeddings get their values. While debugging, I only see that a zero tensor is created in this part, and then in `forward` that tensor is simply added.
At what point do the position embeddings receive a value?
Line 157 (https://github.com/TACJu/TransFG/blob/master/models/modeling.py#L157):
`self.position_embeddings = nn.Parameter(torch.zeros(1, n_patches+1, config.hidden_size))`
Line 173:
`embeddings = x + self.position_embeddings`
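For context on the question: wrapping a tensor in `nn.Parameter` registers it as a learnable weight, so a zero-initialized parameter does not stay zero — it receives gradients and is updated by the optimizer during training. A minimal sketch (toy shapes, not the TransFG code) illustrating this behavior:

```python
import torch
import torch.nn as nn

# Zero-initialized learnable parameter, analogous in spirit to
# self.position_embeddings (toy shape for illustration only)
pos = nn.Parameter(torch.zeros(1, 4, 8))
opt = torch.optim.SGD([pos], lr=0.1)

x = torch.randn(1, 4, 8)        # stand-in for the patch embeddings
loss = (x + pos).pow(2).sum()   # any loss that depends on pos
loss.backward()                 # pos.grad is populated
opt.step()                      # pos is no longer all zeros

print(pos.abs().sum().item() > 0)  # True: the parameter was updated
```

So the addition of a zero tensor in `forward` only describes the state at initialization; after the first optimizer step the position embeddings carry learned values.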