-
In the original paper the authors suggest adding positional encodings to the speech and text representations before the transformer block. I noticed that in your code the positional encodings are commented out. H…
-
# Positional encoding
From the paper _Attention Is All You Need_: this feature needs to be implemented in order to contribute to the **_Transformers_** milestone.
## References
* [Attention Is A…
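A minimal sketch of the sinusoidal positional encoding defined in the paper, written here with NumPy for illustration (function name and shapes are my own choices, not part of this repository):

```python
import numpy as np

def sinusoidal_positional_encoding(max_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encoding from "Attention Is All You Need".

    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    Assumes an even d_model.
    """
    positions = np.arange(max_len)[:, None]                          # (max_len, 1)
    div_terms = np.exp(-np.log(10000.0) * np.arange(0, d_model, 2) / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(positions * div_terms)
    pe[:, 1::2] = np.cos(positions * div_terms)
    return pe

# The resulting matrix is added to the token embeddings before the
# first attention block.
pe = sinusoidal_positional_encoding(max_len=50, d_model=16)
```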
-
## Description:
Hello! I’ve been following the development of this repository and appreciate the efforts to benchmark various efficient Transformer variants. I’d like to propose the implementation of…
-
Understand, explain, and implement the "Positional Encoding" component of the Transformer architecture.
-
Hi! I am using [this example](https://open-metric-learning.readthedocs.io/en/latest/postprocessing/siamese_examples.html) to train a ConcatSiamese model and if I use an extractor from example (`vits16…
-
### Issue Type
Bug
### Source
source
### Keras Version
Keras 2.14
### Custom Code
Yes
### OS Platform and Distribution
_No response_
### Python version
_No response_
### GPU model and memo…
-
Understand, explain, and implement the "Positional Encoding" component of the Transformer architecture.
-
According to the lightSANs paper, there needs to be a mechanism to encode both items and positions (using item embeddings and position embeddings), but in the lightSANs.py file the positional encoding i…
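For reference, the paper's scheme of combining item and position information can be sketched as two learned embedding tables whose rows are summed per position. This is an illustrative NumPy sketch with made-up sizes and random initialization, not the code from lightSANs.py:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, max_len, d = 100, 20, 8

# Learned embedding tables (randomly initialized here for illustration;
# in a real model these are trainable parameters).
item_emb = rng.normal(size=(n_items, d))
pos_emb = rng.normal(size=(max_len, d))

# An interaction sequence of item ids and its positions.
item_seq = np.array([3, 17, 42, 7])
positions = np.arange(len(item_seq))

# Input to the attention layers: item embedding + position embedding.
x = item_emb[item_seq] + pos_emb[positions]
```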
-
Hi,
I want to take a sentence-transformer model (say XLM-R) and extend its context length using RoPE. How can I do this? Can you provide code for this?
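For context, the core rotation that RoPE applies to query/key vectors can be sketched as below. This is a standalone NumPy illustration (half-split pairing, as in GPT-NeoX-style implementations), not sentence-transformers code; actually retrofitting RoPE onto XLM-R would additionally require replacing its learned absolute position embeddings inside each attention module, and long-context extension usually also involves frequency scaling (e.g. position interpolation), which this sketch omits:

```python
import numpy as np

def apply_rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embeddings to a (seq_len, head_dim) array.

    Each pair of dimensions (i, i + head_dim/2) is rotated by an angle
    proportional to the token position, so relative positions fall out
    of the q·k dot products. Assumes an even head_dim.
    """
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)               # (half,)
    angles = np.arange(seq_len)[:, None] * freqs[None, :]   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```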
-
Hello,
Thanks for your great work; I think an efficient and neat transformer framework is essential for low-level vision.
Following your work, I tried discarding the attention mask and positiona…