Hi @pietrolesci, thanks for raising this issue.
Indeed, it seems the second mask creation is redundant. It was probably just a copy-paste that slipped in along the way. Would you like to open a PR to fix it? This way you get the GitHub contribution for having eagle eyes 🦅 👀
Hi @amyeroberts, thanks for your swift reply and for confirming. Yes, I will open a PR later today :)
Hi @amyeroberts, PR here: https://github.com/huggingface/transformers/pull/27216 :)
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hi there,
I am going through the T5 implementation to try to replicate it and noticed that the two code blocks linked below do the same thing when creating the `encoder_attention_mask`. Is this intentional?
https://github.com/huggingface/transformers/blob/037fb7d0e1086146612a716ef914305160134e9c/src/transformers/models/t5/modeling_t5.py#L1029-L1033
https://github.com/huggingface/transformers/blob/037fb7d0e1086146612a716ef914305160134e9c/src/transformers/models/t5/modeling_t5.py#L1045-L1049
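For context, the duplication being discussed looks roughly like the sketch below. This is a minimal, self-contained illustration rather than the exact code at the linked lines; the function name, signature, and shapes are made up for the example:

```python
import torch


def build_encoder_attention_mask(encoder_hidden_states, encoder_attention_mask=None, is_decoder=True):
    """Illustrative only: mimics the double defaulting of encoder_attention_mask."""
    # First block: if the caller passed no cross-attention mask,
    # default it to all-ones over the encoder sequence.
    if is_decoder and encoder_attention_mask is None and encoder_hidden_states is not None:
        batch_size, encoder_seq_length, _ = encoder_hidden_states.size()
        encoder_attention_mask = torch.ones(
            batch_size, encoder_seq_length, device=encoder_hidden_states.device
        )

    # Second block: by this point encoder_attention_mask can no longer be
    # None, so the inner branch is dead code -- the redundancy the issue
    # points out.
    if is_decoder and encoder_hidden_states is not None:
        encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
        encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
        if encoder_attention_mask is None:  # unreachable after the first block
            encoder_attention_mask = torch.ones(
                encoder_hidden_shape, device=encoder_hidden_states.device
            )

    return encoder_attention_mask


# Quick check: the mask comes back as all-ones either way.
hidden = torch.randn(2, 5, 8)
print(build_encoder_attention_mask(hidden).shape)  # torch.Size([2, 5])
```

Since the first block already guarantees a non-None mask, removing the second creation (which the linked PR does) leaves the behavior unchanged.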