huydung179 opened this issue 6 months ago
Hi,
I found what seems to be a bug in the code. Can you verify it?

In `oneformer_transformer_decoder.py`, line 432:
```python
feats = self.pe_layer(mask_features, None)

out_t, _ = self.class_transformer(
    feats,
    None,
    self.query_embed.weight[:-1],
    self.class_input_proj(mask_features),
    tasks if self.use_task_norm else None,
)
```
I think you used the positional embedding as the source features. The forward method of `self.class_transformer` is:

```python
def forward(self, src, mask, query_embed, pos_embed, task_token=None):
    ...
```
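For clarity, here is the current call annotated with the signature parameter each argument binds to (my reading of the two snippets above, assuming all arguments are passed positionally):

```python
# Current call, annotated with the parameter each positional argument binds to:
out_t, _ = self.class_transformer(
    feats,                                  # src         <- self.pe_layer(mask_features, None), i.e. the positional embedding
    None,                                   # mask
    self.query_embed.weight[:-1],           # query_embed
    self.class_input_proj(mask_features),   # pos_embed   <- the projected features, not an embedding
    tasks if self.use_task_norm else None,  # task_token
)
```

If that reading is right, the swap would not raise a runtime error, since the positional embedding and the projected features presumably have the same shape; the transformer would just silently receive the embedding as its content features.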
I think it should be:
```python
feats = self.class_input_proj(mask_features)

out_t, _ = self.class_transformer(
    feats,
    None,
    self.query_embed.weight[:-1],
    self.pe_layer(mask_features, None),
    tasks if self.use_task_norm else None,
)
```