Closed: Youngluc closed this issue 1 year ago
Good work! But I have some questions about the implementation of the label-wise embedding decoder. In the paper it is described as "The label-wise embedding decoder consists of a standard self-attention block and a cross-attention block", yet I find that the default value of num_decoder_layers is 2 in encoder.py, and this parameter is not changed in any subsequent definition. Could you provide the specific implementation of the label-wise embedding decoder?
I hope I have understood your question correctly. This may be a naming mistake: the file encoder.py actually contains the full implementation of the label-wise embedding decoder, so it should probably be renamed le_decoder.py. Very sorry for the confusion, and thank you very much!
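For anyone landing on this thread, here is a minimal PyTorch sketch of the structure described in the paper quote: each decoder layer applies self-attention over the label embeddings followed by cross-attention from the labels to the encoded features, and num_decoder_layers=2 stacks two such layers. All class and argument names here are illustrative, not the repo's actual code; check encoder.py for the real implementation.

```python
import torch
import torch.nn as nn


class LabelWiseDecoderLayer(nn.Module):
    """One layer: self-attention among label queries, then cross-attention
    from label queries to the encoder's feature tokens. Hypothetical names."""

    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, labels, memory):
        # Standard self-attention block over the label embeddings.
        labels = self.norm1(labels + self.self_attn(labels, labels, labels)[0])
        # Cross-attention block: labels query the encoded image features.
        return self.norm2(labels + self.cross_attn(labels, memory, memory)[0])


class LabelWiseEmbeddingDecoder(nn.Module):
    """Stack of decoder layers; num_layers=2 mirrors num_decoder_layers=2."""

    def __init__(self, num_layers=2, d_model=256, nhead=8):
        super().__init__()
        self.layers = nn.ModuleList(
            LabelWiseDecoderLayer(d_model, nhead) for _ in range(num_layers)
        )

    def forward(self, labels, memory):
        for layer in self.layers:
            labels = layer(labels, memory)
        return labels


decoder = LabelWiseEmbeddingDecoder(num_layers=2)
label_queries = torch.randn(4, 20, 256)   # (batch, num_labels, d_model)
features = torch.randn(4, 49, 256)        # (batch, num_feature_tokens, d_model)
out = decoder(label_queries, features)
print(tuple(out.shape))  # (4, 20, 256): one refined embedding per label
```

The output keeps one embedding per label, which is what makes the decoder "label-wise": each label query gathers its own evidence from the feature map via cross-attention.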
Thanks for your reply!