-
python: 3.9.19
torch: 1.12.1
marker-pdf: 0.2.13
command: `python convert.py doc_dir ouput`
error info:
Traceback (most recent call last):
File "/root/marker/convert.py", line 135, in
m…
-
in load_pretrained_model
model = CambrianLlamaForCausalLM.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3531, in from_pretrained
) =…
-
Vision Transformers should be supported out-of-the-box by `quanto`.
The goal of this issue is to add some examples under `examples/vision`.
At the very minimum, there should be a classification …
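As background for such an example, the core operation a weight-only quantizer applies to a ViT's Linear layers can be sketched in plain numpy. This is a minimal illustration of per-channel symmetric int8 quantization, not quanto's actual implementation; the function names are hypothetical.

```python
import numpy as np

def quantize_int8(w):
    # Per-output-channel symmetric int8 quantization: one scale per row,
    # weights rounded into [-127, 127]. (Hypothetical sketch, not quanto's API.)
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float weight matrix from int8 values + scales.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16)).astype(np.float32)  # a toy Linear weight
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = np.abs(w - w_hat).max()  # bounded by half a quantization step
```

The rounding error is bounded by half a step (`0.5 * scale`) per channel, which is why int8 weight quantization usually costs ViT classifiers little accuracy.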
-
Hi,
Not an actual issue, just wanted to share that I implemented your technique for Vision Transformers.
https://github.com/jacobgil/vit-explain
This includes some tweaks to get this to work for im…
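For readers unfamiliar with the technique behind vit-explain, attention rollout (Abnar & Zuidema) can be sketched in a few lines of numpy: average attention over heads, add the identity for the residual connection, renormalize, and multiply the per-layer maps together. This is an illustrative sketch on random row-stochastic matrices, not the repo's exact code.

```python
import numpy as np

def attention_rollout(attentions):
    # Fuse per-layer attention maps (each of shape [heads, tokens, tokens])
    # into one token-to-token map by layer-wise matrix multiplication.
    result = np.eye(attentions[0].shape[-1])
    for attn in attentions:
        attn = attn.mean(axis=0)               # average over heads
        attn = attn + np.eye(attn.shape[-1])   # account for the residual stream
        attn = attn / attn.sum(axis=-1, keepdims=True)
        result = attn @ result
    return result

rng = np.random.default_rng(0)
num_tokens = 5  # e.g. CLS token + 4 image patches
layers = [rng.random((3, num_tokens, num_tokens)) for _ in range(4)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]  # row-stochastic
rollout = attention_rollout(layers)
cls_to_patches = rollout[0, 1:]  # CLS attention over patches -> saliency map
```

Reshaping `cls_to_patches` back to the patch grid gives the heatmaps the repo visualizes.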
-
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> import torch
>>> from PIL import Image
>>> checkpoint = "qihoo360/360VL-8B"
>>> model = AutoModelForCausalLM.from_pretrained(ch…
-
I got this error, and I tried to solve it by installing vision-transformer, but it still shows me the same error:
`pip install vision-transformer-pytorch`
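It may help to note that a ViT's first stage needs no special package at all: splitting the image into patch tokens and projecting them is a few lines of numpy. The sketch below is illustrative (hypothetical names, random weights), not any particular library's API.

```python
import numpy as np

def patchify(image, patch_size):
    # Split an (H, W, C) image into flattened non-overlapping patches --
    # the tokenization step of a Vision Transformer.
    h, w, c = image.shape
    p = patch_size
    patches = image.reshape(h // p, p, w // p, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * c)
    return patches

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3)).astype(np.float32)
tokens = patchify(img, 8)                                  # 16 patches, dim 8*8*3
embed = tokens @ rng.random((192, 64)).astype(np.float32)  # linear projection
```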
-
# Reference
- 2021-01 Transformers in Vision: A Survey [[Paper](https://arxiv.org/pdf/2101.01169.pdf)]
-
Hi, I am working on using vision transformers, not only the vanilla ViT but different models, on the UMDAA2 dataset. This dataset has an image resolution of 128*128. Would it be better to transform the im…
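One common alternative to resizing the images themselves is to keep the 128*128 inputs and resample the model's positional-embedding grid (e.g. from the 14x14 grid of a 224-pretrained ViT-16 down to 128/16 = 8x8). A minimal numpy sketch of that bilinear resampling, with hypothetical names:

```python
import numpy as np

def resize_pos_embed(pos, new_hw):
    # Bilinearly resample an (H, W, D) positional-embedding grid so a ViT
    # pretrained at one resolution can accept another input size.
    h, w, d = pos.shape
    nh, nw = new_hw
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]   # fractional weights along height
    wx = (xs - x0)[None, :, None]   # fractional weights along width
    top = pos[y0][:, x0] * (1 - wx) + pos[y0][:, x1] * wx
    bot = pos[y1][:, x0] * (1 - wx) + pos[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

rng = np.random.default_rng(0)
pos = rng.standard_normal((14, 14, 64))  # pretrained grid: 224 / 16 = 14
resized = resize_pos_embed(pos, (8, 8))  # target grid: 128 / 16 = 8
```

Several libraries expose the same idea (e.g. an interpolate-position-encoding option), but whether it beats simply upsampling the images to 224 is an empirical question for your dataset.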
-
Useful links:
- [Attention is all you need](https://arxiv.org/abs/1706.03762), first transformer paper, [useful video](https://www.youtube.com/results?search_query=self+attention+mechanism+explaine…
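The core mechanism from "Attention Is All You Need", scaled dot-product self-attention softmax(QK^T / sqrt(d_k))V, fits in a short numpy sketch (single head, random weights, illustrative names):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    # Scaled dot-product self-attention: softmax(Q K^T / sqrt(d_k)) V.
    q, k, v = x @ wq, x @ wk, x @ wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 16))  # 6 tokens, model dim 16
wq, wk, wv = (rng.standard_normal((16, 8)) for _ in range(3))
out, weights = self_attention(x, wq, wk, wv)
```

Each output row is a convex combination of the value vectors; the 1/sqrt(d_k) scaling keeps the softmax from saturating as the key dimension grows.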
-
I receive this error when I run this bash command: `!bash LWM/scripts/run_sample_video.sh`. I have followed all the directions listed in the repo.
```
/usr/local/lib/python3.10/dist-packages/hug…