-
### 🚀 The feature, motivation and pitch
1. [Exphormer: Sparse Transformers for Graphs](https://arxiv.org/abs/2303.06147)
2. [SGFormer: Simplifying and Empowering Transformers for Large-Graph Represe…
-
Hi Xiaoxin,
I want to express my appreciation for the incredible work you and your team are doing.
But I encountered a problem: when I attempt to run the process on a single GPU, I get the follow…
-
Run transformer block on device OpenCL, output layer on PTX:
```
python %TORNADO_SDK%\bin\tornado ^
--jvm="-Dtb.device=1:0 -Dol.device=2:0 -DUseVectorAPI=true -Dtornado.device.memory=2GB" ^
--clas…
-
### Describe the issue
**Runtime error before training starts.**
Traceback (most recent call last):
File "/workspace/optimum/./examples/onnxruntime/training/language-modeling/run_clm.py", line 671…
-
Hi,
Thank you for your work. Do you think it is possible to run it on Colab?
In the paper you mentioned the following:
`Experiments are conducted using 2 NVIDIA A100-80G GPUs.`
Colab Pro is eq…
-
### Title
Navigating Depression on Social Media: NLP and Cognitive Network Insights
### Leaders
Irene Sánchez Rodríguez
### Collaborators
Liber Dorizzi
Mattia Marzi
Riccardo Vella
### Brai…
-
### 🐛 Describe the bug
The following code generates the compile error below:
```
import code
import time
import warnings
import numpy as np
import torch
from torch.nn.attention.flex_attent…
-
Torch reference:
https://github.com/open-mmlab/mmdetection3d/tree/main/projects/PETR
**Torch graphs:**
Transformer module: [model_petr_transformer.gv.pdf](https://github.com/user-attachments/fil…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
https://andrew.cmu.edu/~yangli1/Forecaster_ECAI2020.pdf
## Detailed Description
I have only skimmed it, but it appears to be a graph transformer model aimed at more general forecasting.
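To illustrate the general idea (a rough sketch, not the paper's actual method or code): a graph transformer typically restricts self-attention to a node's graph neighbors, which can be written as a masked softmax over the adjacency matrix. The function name and shapes below are illustrative assumptions.

```python
import numpy as np

def graph_masked_attention(x, adj):
    """Single-head self-attention where node i may only attend to its
    graph neighbors (and itself). x: (n, d) node features, adj: (n, n)
    0/1 adjacency matrix."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                  # raw pairwise scores
    mask = adj + np.eye(n)                         # allow self-attention
    scores = np.where(mask > 0, scores, -np.inf)   # block non-neighbors
    # Numerically stable softmax over each row's allowed entries
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                             # aggregated features

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
out = graph_masked_attention(x, adj)
print(out.shape)  # (4, 8)
```

For forecasting, such a layer would be stacked with temporal components, but the specifics here would have to come from the paper itself.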
## Context
## Possible Implementat…