-
**Describe the bug**
Two unit test cases of the `ttnn.linear` test in the PETR model Transformer submodule fail with low PCC values of 0.4 and 0.6.
**To Reproduce**
Steps to reproduce the behavior:
1. Ch…
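For context, PCC here refers to the Pearson correlation coefficient between the flattened golden (Torch) output and the ttnn output; values near 1.0 pass, while 0.4–0.6 indicates a real numerical mismatch. A minimal pure-Python sketch of the metric (threshold and data are illustrative, not from the test suite):

```python
import math

def pcc(golden, actual):
    """Pearson correlation coefficient between two flattened outputs."""
    n = len(golden)
    mg = sum(golden) / n
    ma = sum(actual) / n
    cov = sum((g - mg) * (a - ma) for g, a in zip(golden, actual))
    var_g = sum((g - mg) ** 2 for g in golden)
    var_a = sum((a - ma) ** 2 for a in actual)
    return cov / math.sqrt(var_g * var_a)

# Identical outputs give PCC 1.0; a PCC of ~0.4-0.6 means the outputs disagree.
golden = [0.1, 0.5, -0.3, 0.9]
print(pcc(golden, golden))  # 1.0
```

Note that PCC is invariant to a positive affine rescaling of the output, so a passing PCC alone does not rule out scale errors.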
-
### Feature Idea
# 🐱 Sana Model Card
## Model
We introduce **Sana**, a text-to-image framework that can efficiently generate images up to 4096 × 4096 resolution.
Sana can synth…
-
There is my issue:
/home/fountain/miniconda3/bin/conda run -n Linear_Alignment --no-capture-output python /home/fountain/pycharmProjects/Linear_Alignment/demo.py
Loading checkpoint shards: 100%|███…
-
### 🚀 The feature, motivation and pitch
1. [Exphormer: Sparse Transformers for Graphs](https://arxiv.org/abs/2303.06147)
2. [SGFormer: Simplifying and Empowering Transformers for Large-Graph Represe…
-
Hi, Pointcept Team,
I noticed that the RPE implementation differs between V1/V2 and V3.
### Background
In Point Transformer V1, it simply follows the gene…
-
Thank you for your excellent work!
I maintain a [library](https://github.com/hp-l33/flash-bidirectional-linear-attention/blob/main/fbi_la/layers/focused_la/attention.py) implementing bi-directional…
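As background for readers, non-causal (bi-directional) linear attention replaces softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), normalized by φ(Q)(φ(K)ᵀ1), which makes the cost linear in sequence length. A minimal pure-Python sketch with an illustrative φ(x) = elu(x) + 1 feature map (the helper names and shapes are mine, not the linked library's API):

```python
import math

def elu1(x):
    # Feature map phi(x) = elu(x) + 1: strictly positive, a common choice.
    return x + 1.0 if x > 0 else math.exp(x)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def linear_attention(Q, K, V):
    """Non-causal linear attention: O = phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1)."""
    phiQ = [[elu1(x) for x in row] for row in Q]
    phiK = [[elu1(x) for x in row] for row in K]
    KV = matmul(transpose(phiK), V)        # (d x d_v) summary, O(n * d * d_v)
    Z = [sum(col) for col in zip(*phiK)]   # column sums of phi(K), length d
    out = matmul(phiQ, KV)                 # (n x d_v) unnormalized output
    denom = [sum(q * z for q, z in zip(row, Z)) for row in phiQ]
    return [[o / dn for o in row] for row, dn in zip(out, denom)]
```

Because the normalization makes each output row a convex combination of the value rows, feeding a constant V returns that constant exactly, which is a handy sanity check.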
-
Hello,
I encountered an issue while using the provided checkpoints from the linked Google Drive. Several keys related to the Transformer are missing from the state dictionary:
`Missing key(s) i…
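One way to diagnose this (a generic sketch, not specific to this repo's checkpoints) is to diff the checkpoint's key set against the model's expected keys before loading; in PyTorch, `load_state_dict(..., strict=False)` returns the same missing/unexpected key report. A pure-Python illustration of the diff, with hypothetical key names:

```python
def diff_state_dict_keys(model_keys, ckpt_keys):
    """Return (missing, unexpected) keys, mirroring torch's strict=False report."""
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    missing = sorted(model_keys - ckpt_keys)      # in model, absent from checkpoint
    unexpected = sorted(ckpt_keys - model_keys)   # in checkpoint, unknown to model
    return missing, unexpected

# Hypothetical example: a checkpoint saved without the transformer sub-module.
model_keys = ["backbone.conv.weight", "transformer.layers.0.attn.in_proj_weight"]
ckpt_keys = ["backbone.conv.weight"]
missing, unexpected = diff_state_dict_keys(model_keys, ckpt_keys)
print(missing)  # ['transformer.layers.0.attn.in_proj_weight']
```

A non-empty `missing` list with an empty `unexpected` list usually means the checkpoint was saved from a smaller model or a sub-module, rather than from mismatched key prefixes.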
-
### System Info
peft = 0.13.2
python = 3.12.7
transformers = 4.45.2
### Who can help?
@sayakpaul
I am using ```inject_adapter_in_model(...)``` to fine-tune a model from OpenCLIP using LoRA layers…
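For readers, the update that LoRA injection adds to a linear layer is y = Wx + (α/r)·B(Ax), with B initialized to zero so training starts from the frozen base model. A minimal pure-Python sketch of that forward pass (names and dimensions are illustrative, not the PEFT internals):

```python
def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def lora_linear(W, A, B, x, alpha=16, r=2):
    """y = W x + (alpha / r) * B (A x): the low-rank LoRA update on a frozen W."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))   # B: (out x r), A: (r x in)
    s = alpha / r
    return [b + s * d for b, d in zip(base, delta)]

# With B = 0 (the standard init), the adapted layer equals the frozen base layer.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.3, -0.2], [0.1, 0.4]]         # r = 2 rows
B = [[0.0, 0.0], [0.0, 0.0]]
x = [2.0, -1.0]
print(lora_linear(W, A, B, x))  # [2.0, -1.0]
```

Only A and B receive gradients during fine-tuning, which is why a checkpoint saved after injection contains extra `lora_A`/`lora_B`-style keys alongside the frozen base weights.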
-
Greetings! Thanks for your work on "On Embeddings for Numerical Features in Tabular Deep Learning".
I'm having difficulty understanding the code that implements this functionality in train4.py. M…
-
Torch reference:
https://github.com/open-mmlab/mmdetection3d/tree/main/projects/PETR
**Torch graphs:**
Transformer module: [model_petr_transformer.gv.pdf](https://github.com/user-attachments/fil…