-
Is it possible to use vision transformer properly on a 12 gb GPU machine?
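A quick way to reason about this question is a back-of-envelope memory estimate. The sketch below is mine, not from the thread: it assumes FP32 training with Adam (weights + gradients + two optimizer moments) and the commonly cited ~86M parameter count for ViT-B/16; activation memory, which dominates at large batch sizes, is deliberately left out.

```python
def vit_training_memory_gb(num_params, bytes_per_param=4, optimizer_state_factor=2):
    """Rough FP32 training footprint: weights + gradients + Adam moments.

    Activations are excluded; they depend on batch size, image resolution,
    and whether gradient checkpointing is enabled.
    """
    total_bytes = num_params * bytes_per_param * (2 + optimizer_state_factor)
    return total_bytes / 1e9

# ViT-B/16 has roughly 86M parameters.
print(round(vit_training_memory_gb(86_000_000), 2))  # ~1.38 GB before activations
```

By this estimate the model state alone fits comfortably in 12 GB, so the practical constraint is activation memory; mixed precision and gradient checkpointing are the usual levers for the remainder.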
-
- https://arxiv.org/abs/2106.04560
- 2021
Attention-based neural networks such as the Vision Transformer (ViT) have recently achieved state-of-the-art results on many computer vision benchmarks. Since scale is a primary factor in obtaining strong results, understanding a model's scaling properties is key to effectively designing future generations…
e4exp updated
2 years ago
-
XFeiF updated
3 years ago
-
Hi @echarlaix @IlyasMoutawwakil
The bug comes from SentenceTransformer: when I load a sentence transformer model like `IPEXModel.from_pretrained("intfloat/e5-mistral-7b-instruct", export=True)`…
-
1. Public code and paper link:
I have installed the code from: https://github.com/AILab-CVC/GroupMixFormer
Paper link: https://arxiv.org/abs/2311.15157
2. What does this work d…
-
Has anyone tried quantizing vision-transformer-style models? Quantization accuracy degrades for models such as DeiT-S and Swin.
-
reference model is from: https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py
-
[paper](https://arxiv.org/pdf/2404.03214), [code](https://github.com/WalBouss/LeGrad)
## TL;DR
- **I read this because.. :** Following Chefer on Google Scholar means I get email alerts about new work (really convenient!)
- **task :** expl…
-
Hello, I have a question about the transformations in the MiniViT paper.
I could find the first transformation (implemented in the MiniAttention class) in the code:
https://github.com/microsoft/Cr…
-
### 🐛 Describe the bug
When I run a [vit experiment](https://github.com/hpcaitech/ColossalAI-Examples/tree/main/image/vision_transformer/hybrid_parallel) by the following command
```
node=76
pre…