Zhong-Zi-Zeng opened 1 year ago
That's also my confusion. I guess that instead of using random values, the embedding weights were used and reshaped. Maybe it's the same thing. But is it trainable? I'd appreciate an answer if you've figured it out.
I have implemented DETR and found that embedding weights are more convenient than random values when building a model.
Thanks for helping.
Assuming:

import torch.nn as nn

embedding = nn.Embedding(45, 2)                          # weight shape: (45, 2)
weights = embedding.weight.unsqueeze(1).repeat(1, 3, 1)  # -> (45, 3, 2)
Did you keep requires_grad=True for the weights? This is just my confusion.
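For reference, a minimal sketch (mine, not from the repo) checking exactly this on the snippet above; the sizes 45 / 3 / 2 are the same illustrative values:

```python
import torch.nn as nn

embedding = nn.Embedding(45, 2)                          # weight: (45, 2), an nn.Parameter
weights = embedding.weight.unsqueeze(1).repeat(1, 3, 1)  # (45, 3, 2)

# nn.Embedding registers its weight as a trainable nn.Parameter by default,
# and repeat() is differentiable, so the repeated tensor stays in the graph.
print(embedding.weight.requires_grad)  # True
print(weights.requires_grad)           # True
```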
Yes, requires_grad stays True, so the gradient flows back into the embedding weight (every repeated copy contributes to the same underlying weight).
Thanks for your answer. So I should keep requires_grad set to True, which means the weights will also be trained during backpropagation and their values will change. This applies to the embedding weights, specifically the query embedding.
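As a quick illustration of that point (a toy backward pass, not the actual training loop): the repeated copies all backpropagate into the one underlying weight, which the optimizer then updates.

```python
import torch
import torch.nn as nn

query_embed = nn.Embedding(100, 256)                   # 100 queries, hidden_dim 256
tgt = query_embed.weight.unsqueeze(1).repeat(1, 4, 1)  # (100, 4, 256) for batch_size 4

loss = tgt.sum()                                       # stand-in for the real detection loss
loss.backward()
print(query_embed.weight.grad.shape)                   # torch.Size([100, 256])
print(query_embed.weight.grad[0, 0].item())            # 4.0: one contribution per batch copy
```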
I think it's (num_queries, batch_size, dim), not (batch_size, num_queries, dim).
You are right!
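For context (my sketch, with made-up sizes): PyTorch's transformer modules default to batch_first=False, i.e. (seq_len, batch_size, dim), which is why the queries are laid out as (num_queries, batch_size, dim).

```python
import torch
import torch.nn as nn

decoder_layer = nn.TransformerDecoderLayer(d_model=256, nhead=8)  # batch_first=False by default
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)

memory = torch.rand(49, 4, 256)    # e.g. 49 encoder tokens (7x7 feature map), batch_size 4
queries = torch.rand(100, 4, 256)  # (num_queries, batch_size, dim)
out = decoder(queries, memory)
print(out.shape)                   # torch.Size([100, 4, 256])
```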
@Zhong-Zi-Zeng What do you mean by "more convenient"? Is it that the results are better? Because, as shown in the DETR colab notebook, if you use nn.Parameter(torch.rand(100, hidden_dim)) as the queries, with 100 being num_queries, and update all parameters of the model, it should still work: nn.Parameter has requires_grad=True by default, so the queries will still be updated, right? They will not stay random values thereafter.
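A small sketch of that claim (assumed toy loss, not the notebook's code): the nn.Parameter starts random, but one optimizer step already moves it away from its initial values.

```python
import torch
import torch.nn as nn

queries = nn.Parameter(torch.rand(100, 256))  # requires_grad=True by default
optimizer = torch.optim.SGD([queries], lr=0.1)

before = queries.detach().clone()
loss = queries.sum()                          # stand-in for the detection loss
loss.backward()
optimizer.step()
print(torch.equal(before, queries.detach()))  # False: no longer the initial random values
```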
The last page of the original paper shows simple demo code for DETR, where the decoder's input is just a random tensor of size (100, 256). However, in your GitHub repository, I can't understand what you did with the object query. Why did you use the embedding layer's weight instead of a random value?
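To make the comparison concrete, here is a sketch (my reading of the two styles, not code from either source): both amount to the same kind of learnable (num_queries, hidden_dim) tensor, differing only in initialization.

```python
import torch
import torch.nn as nn

num_queries, hidden_dim = 100, 256

# Style from the paper's demo code: a raw learnable tensor, uniform random init.
query_pos = nn.Parameter(torch.rand(num_queries, hidden_dim))

# Style asked about here: an embedding table whose .weight is used directly
# (nn.Embedding initializes its weight from a standard normal distribution).
query_embed = nn.Embedding(num_queries, hidden_dim)

print(isinstance(query_pos, nn.Parameter))           # True
print(isinstance(query_embed.weight, nn.Parameter))  # True: both are trained
```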