### Resources to consider
- https://pytorch-geometric.readthedocs.io/en/1.3.1/_modules/torch_geometric/nn/glob/attention.html
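The link above points at PyG's `GlobalAttention` pooling layer. Below is a minimal usage sketch (the feature size and gate network are assumptions, not taken from the link): a small gate network scores every node, the scores are softmax-normalized within each graph, and node features are summed with those weights into one vector per graph.

```python
import torch
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import GlobalAttention

in_channels = 64  # assumed node feature size
# Gate network: produces one attention score per node (hidden size is an assumption).
gate_nn = Sequential(Linear(in_channels, 32), ReLU(), Linear(32, 1))
pool = GlobalAttention(gate_nn)

x = torch.randn(10, in_channels)           # 10 nodes in total
batch = torch.tensor([0] * 6 + [1] * 4)    # node-to-graph assignment (2 graphs)
out = pool(x, batch)                       # -> [2, in_channels], one vector per graph
```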
-
# Description:
Hello! I appreciate the excellent work on benchmarking Performer and Longformer against the base Transformer. I’d like to propose the implementation of additional efficient Transformer…
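For context, here is a minimal sketch (illustrative only; the feature map `phi(x) = elu(x) + 1` and the shapes are assumptions, not something stated in this issue) of the kernelized linear attention that efficient variants such as Performer approximate: `softmax(QK^T)V` is replaced by `phi(Q)(phi(K)^T V)`, which scales linearly with sequence length.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq, head_dim)
    q, k = F.elu(q) + 1, F.elu(k) + 1               # non-negative feature map (assumption)
    kv = torch.einsum("bhsd,bhse->bhde", k, v)      # sum phi(k_j) v_j^T over the sequence
    z = 1.0 / (torch.einsum("bhsd,bhd->bhs", q, k.sum(dim=2)) + eps)  # per-query normalizer
    return torch.einsum("bhsd,bhde,bhs->bhse", q, kv, z)

q = torch.randn(2, 4, 128, 16)  # assumed (batch, heads, seq, head_dim)
k = torch.randn(2, 4, 128, 16)
v = torch.randn(2, 4, 128, 16)
out = linear_attention(q, k, v)  # -> (2, 4, 128, 16)
```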
-
Hi,
Thank you for this great work and code. Could you please point me to the lines in the code where the attention from Equation 1 of the paper is calculated? Actually, I want …
-
Dear author, thank you for providing such a lightweight reinforcement learning library. Currently, I am hoping to integrate your attention mechanism into other reinforcement learning algorithms. I enc…
-
WDYT? Is this publication in scope?
```
@article{He_2024,
  author = {He, Pengfei and Zhang, Ying and Gan, Han and Ma, Jianfei and Zhang, Hongxin},
  doi    = {10.1016/j.compeleceng.2024.109515},
  issn   = {…
```
-
Large language models (LLMs) have been popular for many years, yet there is still no dedicated attention operator/function in the standard ONNX specification.
Previous attempts to include an attent…
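For illustration, a minimal sketch (the module name and tensor shapes are assumptions) of the status quo this issue describes: without a dedicated operator, attention has to be lowered into a graph of primitive ONNX ops (MatMul, Div, Softmax, ...), which is what `torch.onnx.export` produces for a plain scaled dot-product attention module.

```python
import torch
import torch.nn as nn

class SDPA(nn.Module):
    """Plain scaled dot-product attention, written with primitive tensor ops."""
    def forward(self, q, k, v):
        scores = torch.matmul(q, k.transpose(-2, -1)) / (q.size(-1) ** 0.5)
        return torch.matmul(torch.softmax(scores, dim=-1), v)

q = torch.randn(1, 8, 16, 64)  # assumed (batch, heads, seq, head_dim)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)
torch.onnx.export(SDPA(), (q, k, v), "sdpa.onnx", input_names=["q", "k", "v"])
# Inspecting sdpa.onnx shows MatMul/Div/Softmax nodes rather than a single Attention op.
```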
-
Nice work, and highly configurable.
Are there plans to add more attention mechanism implementations?
-
Hello, I am a beginner with YOLO models and would like to ask you some questions. In your article you mentioned the CPAM module you used, and also that you used other attention modu…
-
This is excellent work; when will the code be open-sourced?
-