-
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
- Python version: 3.10.15
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4…
-
Why does this happen? It occurs when I click to generate the video from the example image on the page. I have spent hours trying to understand it; I am not a programmer and it frustrates me :(
Pytho…
-
When I use stories15M or stories110M, I get an error.
```
File "D:\_LLM_project\Development\gpt-fast\generate.py", line 114, in speculative_decode
torch.cat([cur_token.view(1), draft_tokens])…
```
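The traceback is truncated, but the failing line concatenates `cur_token` with `draft_tokens`, and `torch.cat` only accepts operands of the same rank. A minimal sketch of that shape contract (the tensor values here are invented for illustration):

```python
import torch

# Hypothetical stand-ins for the tensors in speculative_decode:
cur_token = torch.tensor(5)          # 0-D scalar tensor (current token id)
draft_tokens = torch.tensor([7, 8])  # 1-D tensor of drafted token ids

# torch.cat requires all inputs to have the same number of dimensions,
# so the 0-D cur_token is reshaped to a 1-element 1-D tensor first.
merged = torch.cat([cur_token.view(1), draft_tokens])
print(merged.shape)  # torch.Size([3])
```

If either tensor arrives with an unexpected rank (e.g. a batch dimension from a smaller checkpoint), the `view(1)` call or the concatenation itself will raise a shape error.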
-
### 🐛 Describe the bug
```
import torch
from triton.testing import do_bench
from torch.nn.attention.flex_attention import create_block_mask, flex_attention, noop_mask, BlockMask
import torch.nn.f…
```
-
**Describe the bug**
Running the most recent version of the T5 pretraining script out of the box raises a `ValueError`, specifically on the following line:
```
[rank0]: File "/home/miniconda3/lib/…
```
-
https://github.com/datamllab/LongLM
-
# ComfyUI Error Report
## Error Details
- **Node Type:** VAEEncode
- **Exception Type:** RuntimeError
- **Exception Message:** could not create a primitive
## Stack Trace
```
File "/home/sh…
```
-
It seems like all my batches have some underlying issue where they are all off by one. I've seen other issues opened about this, but no proper explanation. Could I get some help with this?
Failed dur…
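The report above doesn't include the batching code, but one classic way every batch ends up off by one is an exclusive slice end computed with a spurious `- 1`. A plain-Python sketch of the bug and the fix (the variable names are invented for illustration):

```python
data = list(range(10))
batch_size = 4

# Buggy: the slice end is already exclusive, so subtracting 1 silently
# drops the last element of every batch.
buggy = [data[i : i + batch_size - 1] for i in range(0, len(data), batch_size)]

# Fixed: slice straight to i + batch_size; Python clamps the final
# partial batch automatically.
fixed = [data[i : i + batch_size] for i in range(0, len(data), batch_size)]

print(sum(len(b) for b in buggy))  # 8  (two items silently lost)
print(sum(len(b) for b in fixed))  # 10 (every item batched exactly once)
```

Comparing the flattened batches against the original data is a quick way to confirm whether a boundary bug like this is the cause.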
-
I run the code on Colab, in this [code cell](https://colab.research.google.com/github/HandsOnLLM/Hands-On-Large-Language-Models/blob/main/chapter09/Chapter%209%20-%20Multimodal%20Large%20Language%20Models…
-
**Is your feature request related to a problem? Please describe.**
I need to extract the self-attention scores from a call to the encoder block.
**Describe the solution you'd like**
Add "return_attenti…
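The requested parameter name is truncated above, but as a sketch of the kind of API being asked for: PyTorch's built-in `nn.MultiheadAttention` already exposes attention scores through its `need_weights` flag (the layer sizes below are arbitrary):

```python
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)
x = torch.randn(2, 5, 16)  # (batch, seq_len, embed_dim)

# need_weights=True returns the attention scores alongside the output;
# by default they are averaged over heads: (batch, tgt_len, src_len).
out, attn_weights = mha(x, x, x, need_weights=True)
print(attn_weights.shape)  # torch.Size([2, 5, 5])
```

A `return_attention_scores`-style flag on the encoder block could follow the same pattern: an optional second return value carrying the per-layer score tensors, so the default call signature stays unchanged.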