Closed: aga-relation closed this issue 1 year ago
Hi,
If I am reading your code correctly, you are not using any positional encodings - is that right? Any reason for it? Thank you! :)

Hi,
Thank you for your question!
That is correct - we aren't using any positional encodings. There wasn't a strong reason for omitting them; we simply had a few convolutional layers before the transformer encoder layer.
Best, Eeshit
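For readers curious what this setup looks like in practice, here is a minimal, hypothetical PyTorch sketch of the pattern described in the reply: a few convolutional layers feeding a transformer encoder, with no explicit positional encoding added. The layer sizes and names are illustrative assumptions, not taken from the repository's actual code.

```python
import torch
import torch.nn as nn

class ConvTransformer(nn.Module):
    """Hypothetical sketch: conv frontend -> transformer encoder,
    with no explicit positional encoding (matching the setup
    described in the reply above; dimensions are assumptions)."""

    def __init__(self, in_ch=1, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # A few convolutional layers before the transformer encoder.
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, x):          # x: (batch, in_ch, seq_len)
        h = self.conv(x)           # (batch, d_model, seq_len)
        h = h.transpose(1, 2)      # (batch, seq_len, d_model)
        return self.encoder(h)     # no positional encoding is added

out = ConvTransformer()(torch.randn(2, 1, 100))
print(out.shape)  # torch.Size([2, 100, 64])
```

Note that the conv features go straight into the encoder; nothing positional is summed or concatenated along the way.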