-
**Describe the bug**
Running the most recent version of the T5 pretraining script out of the box raises a `ValueError`, specifically at the following line:
```
[rank0]: File "/home/miniconda3/lib/…
-
The latest `transformers==4.44.2` release from Hugging Face uses `torch.isin()`, which prevents graph breaks in certain control flows. It also adds a conditional branch to avoid `copy.deepcopy`, which is not …
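A minimal sketch of why this matters for `torch.compile` (the stop-token use case and all names here are assumptions, not the actual library code): a tensor-level membership test via `torch.isin` stays inside the traced graph, whereas a Python-level `in` check on `.item()` values forces a graph break:

```python
import torch

def stop_mask(input_ids: torch.Tensor, stop_token_ids: torch.Tensor) -> torch.Tensor:
    # Tensor-level membership test: stays inside a compiled graph.
    # A Python-level check such as `int(tok) in stop_list` would force
    # torch.compile to break the graph at this point.
    return torch.isin(input_ids, stop_token_ids)

ids = torch.tensor([[5, 7, 2], [1, 2, 9]])
stops = torch.tensor([2, 9])
mask = stop_mask(ids, stops)  # True where a stop token appears
```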
-
Hi! I'm interested in using the rotary embeddings with `x_pos=True` so my transformer can extrapolate to longer sequence lengths. However, I noticed the README mentions this technique works only with autoregressive trans…
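For context, plain rotary position embeddings (without the xPos decay term; function name and shapes here are illustrative, not the library's API) can be sketched as:

```python
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (seq_len, dim) with even dim. Each (x1, x2) channel pair is
    # rotated by an angle that grows with position and shrinks with
    # frequency index, encoding relative position in dot products.
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=x.dtype) / half)
    angles = torch.arange(seq_len, dtype=x.dtype)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

x = torch.randn(6, 8)
y = rope(x)
```

Because each channel pair is only rotated, the per-position norm is unchanged; xPos additionally scales the pairs by a position-dependent decay, which is what ties it to the causal/autoregressive setting.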
-
Hey, this is my first post.
I wanted to ask how one implements prompt weighting within the architecture.
This is the base generation code, which works.
`image = ip_model.generate(
…
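One common approach to prompt weighting (a sketch under assumptions: `weight_prompt_embeds` and the rescaling scheme are mine, not this library's API) is to scale individual token embeddings coming out of the text encoder, then renormalize so the overall magnitude stays in distribution for the downstream model:

```python
import torch

def weight_prompt_embeds(embeds: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # embeds: (num_tokens, dim) text-encoder output; weights: (num_tokens,)
    # with 1.0 = neutral, >1.0 = emphasize, <1.0 = de-emphasize.
    weighted = embeds * weights[:, None]
    # Rescale so the mean token norm matches the original, keeping the
    # embeddings in a range the generator was trained on.
    original_mean = embeds.norm(dim=-1).mean()
    new_mean = weighted.norm(dim=-1).mean()
    return weighted * (original_mean / new_mean)
```

This mirrors the A1111/compel-style emphasis trick; the adjusted embeddings would then be passed to the generation call through whatever prompt-embedding argument the pipeline exposes.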
-
### Model description
"Attention Is All You Need" is a landmark 2017 research paper authored by eight scientists working at Google, which expanded on the 2014 attention mechanisms proposed by Bah…
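The paper's core operation, scaled dot-product attention, can be sketched in a few lines (single head, no masking; shapes are illustrative):

```python
import torch

def scaled_dot_product_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return torch.softmax(scores, dim=-1) @ v

q, k, v = torch.randn(4, 8), torch.randn(4, 8), torch.randn(4, 8)
out = scaled_dot_product_attention(q, k, v)
```

The `1/sqrt(d_k)` scaling keeps the logits from growing with the key dimension, which would otherwise push the softmax into near-one-hot saturation.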
-
The code runs when `num_feature_levels = 1`.
When `num_feature_levels = 4`, I get the following error (`ref_frame_num = 10`):
File "deformable_transformer_multi.py", line 231, in forward
ref_spatia…
-
Hello. After I installed requirement.txt and ran setup.py, the following problems occurred when running train. Could you please take a look? Thank you.
Traceback (most recent call last):
File "F:\MTR-mas…
-
![image](https://github.com/MarkFzp/act-plus-plus/assets/12107803/32566a91-4ac4-4a59-affb-f354e56c9a9a)
In detr_vae.py, at line 285, change `encoder = build_transformer(args)` to `encoder = build_…
-
This is the FutureWarning we are currently receiving:
transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. T…
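Until the default flips, the warning can be avoided either by setting the flag explicitly when loading the tokenizer, or by filtering just this warning. A stdlib sketch (the tokenizer call is shown as a comment because the model name is an assumption):

```python
import warnings

# Option 1 (preferred): set the flag explicitly so the future default
# change becomes a no-op, e.g.:
#   tok = AutoTokenizer.from_pretrained("t5-small", clean_up_tokenization_spaces=True)

# Option 2: silence only this FutureWarning; `message` is a regex that
# must match the start of the warning text.
warnings.filterwarnings(
    "ignore",
    category=FutureWarning,
    message=".*clean_up_tokenization_spaces.*",
)
```

Option 1 is safer long-term, since it pins the behavior your code actually relies on instead of hiding the notice.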
-
Thanks for sharing; it's very interesting, and I also want to make a .npy file. I followed your installation instructions step by step without any mistakes until the last step.
My error message…