-
in load_pretrained_model
model = CambrianLlamaForCausalLM.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3531, in from_pretrained
) =…
-
python: 3.9.19
torch: 1.12.1
marker-pdf: 0.2.13
command: python convert.py doc_dir ouput
error info:
Traceback (most recent call last):
File "/root/marker/convert.py", line 135, in
m…
-
- Link: https://arxiv.org/abs/2104.11227
-
Hello again!
Would it be possible to modify the GMP fine-tune script to train a LoRA with PEFT for the CLIP ViT-G model, and then merge the LoRA with the base model to get a new CLIP-G model?
Chat-GPT se…
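For the merge step being asked about, PEFT exposes `merge_and_unload()` on a trained `PeftModel`; the math it folds in is simple enough to sketch standalone. Below is a minimal NumPy illustration (not the GMP script's actual code; all names and shapes are assumptions) of why merging a LoRA adapter into the base weight reproduces the adapted model exactly:

```python
import numpy as np

# LoRA replaces a frozen weight W with W + (alpha / r) * B @ A, where
# A (r x d_in) and B (d_out x r) are the trained low-rank factors.
# "Merging" just folds that update into W so the adapter can be dropped.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4.0

W = rng.normal(size=(d_out, d_in))       # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01    # LoRA down-projection
B = rng.normal(size=(d_out, r)) * 0.01   # LoRA up-projection

def lora_forward(x, W, A, B, alpha, r):
    """Base path plus the scaled low-rank update, as during LoRA training."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

def merge(W, A, B, alpha, r):
    """Fold the adapter into the base weight (what merge_and_unload does per layer)."""
    return W + (alpha / r) * B @ A

x = rng.normal(size=(3, d_in))
W_merged = merge(W, A, B, alpha, r)
# The merged weight reproduces the adapted forward pass exactly.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W_merged.T)
```

Since the merge is exact (not an approximation), the resulting CLIP-G checkpoint should behave identically to base-model-plus-adapter, assuming the LoRA was only applied to linear layers.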
-
## Paper links
- [arXiv](https://arxiv.org/abs/2105.07926)
- [CVF](https://openaccess.thecvf.com/content/CVPR2022/html/Mao_Towards_Robust_Vision_Transformer_CVPR_2022_paper.html)
- [GitHub](https://githu…
-
Hello,
I would like to contribute a tutorial on [Hyperbolic Vision Transformers](https://arxiv.org/abs/2203.10833) by Ermolov et al. (2022).
The paper describes a vision transformer with …
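The distinctive ingredient of that paper is comparing embeddings with hyperbolic rather than Euclidean distance. As a hedged sketch for the tutorial (the paper's actual loss and curvature handling may differ), here is the standard geodesic distance on the Poincaré ball in plain NumPy:

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq = np.sum((x - y) ** 2)
    nx = np.sum(x ** 2)
    ny = np.sum(y ** 2)
    # The arccosh argument grows without bound as points approach the
    # boundary, so distances blow up near the edge of the ball (norm -> 1).
    arg = 1.0 + 2.0 * sq / max((1.0 - nx) * (1.0 - ny), eps)
    return np.arccosh(arg)

origin = np.zeros(2)
near = np.array([0.1, 0.0])
far = np.array([0.95, 0.0])
# Points near the boundary are much "farther" than their Euclidean norm
# suggests, which is what lets the ball embed tree-like hierarchies.
assert poincare_distance(origin, far) > poincare_distance(origin, near)
```

For a point at radius r from the origin this reduces to ln((1 + r) / (1 - r)), which is a convenient sanity check for the tutorial.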
-
When I run full-parameter fine-tuning with finetune.sh, I get the following error.
Traceback (most recent call last):
File "/cfs/cfs-1vhb8svx/hkj/InternLM-XComposer/finetune/finetune.py", line 313, in
train()
File "/cfs/cfs-1vhb8svx/hkj…
-
Vision Transformers should be supported out-of-the-box by `quanto`.
The goal of this issue is to add some examples under `examples/vision`.
At the very minimum, there should be a classification …
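For the example itself, quanto's entry point on a torch model is roughly `quantize(model, weights=qint8)` followed by `freeze(model)`. The underlying per-tensor symmetric int8 scheme applied to each linear layer's weights can be sketched without torch at all (this is an illustration of the scheme, not quanto's implementation):

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 plus one float scale (symmetric, per-tensor)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for the (simulated) forward pass."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(16, 16)).astype(np.float32)
q, s = quantize_int8(w)
# Round-to-nearest bounds the reconstruction error by half a quantization step.
assert np.abs(dequantize(q, s) - w).max() <= s / 2 + 1e-6
```

A classification example would then just wrap this idea: quantize a pretrained ViT, run the same eval loop as the float baseline, and report the accuracy delta.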
-
Hi, I wanted to know if there is a version of FullGrad that could be applied to Vision Transformers like ViT or the Swin Transformer, or if there are some small changes that could be made in the code …
-
Hi, I noticed that you submitted a paper titled “Masked Attention as a Mechanism for Improving Interpretability of Vision Transformers” to Medical Imaging with Deep Learning 2024. Do you plan to integ…