-
How can we use high-quality modes?
-
I have loaded the biases and weights for the GPT-2 model using `AutoModelForCausalLM`, but their sizes do not match the expected dimensions. The error I encountered is:
```
Copying a param with sha…
-
This is a brief example, adapted from the README.md file:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from flashfftconv import FlashDepthWiseConv1d
B=4
…
-
How can I solve it? I can't make any progress.
I searched a lot but didn't find any solutions.
-
## Issue
Due to changes in Helium support in CMSIS-NN 5.0 and later, when a model generated by the e-AI Translator is built with Helium enabled, the fully connected operation can produce incorrect results.…
-
Dear authors,
Thank you for sharing your work with the community. Recently, I tried to run inference with Burstormer on the Synthetic Burst SR validation set and found that the provided **checkpoint does not match …
-
RuntimeError: Error(s) in loading state_dict for LAVTOne:
size mismatch for backbone.layers.0.blocks.0.attn.relative_position_bias_table: copying a param with shape torch.Size([169, 4]) from …
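Errors like this can be diagnosed generically before loading: diff the checkpoint's parameter shapes against the model's, report every mismatch at once, and load only the compatible entries. A minimal sketch (the toy `nn.Linear` and hand-built checkpoint below are illustrative stand-ins, not the actual LAVT model):

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the real network;
# the fake checkpoint below deliberately has one mismatched shape.
model = nn.Linear(4, 2)
ckpt = {"weight": torch.zeros(3, 4), "bias": torch.zeros(2)}

# Report every parameter whose checkpoint shape differs from the model's,
# instead of letting load_state_dict raise on the first mismatch.
model_sd = model.state_dict()
mismatched = {
    name: (tuple(t.shape), tuple(model_sd[name].shape))
    for name, t in ckpt.items()
    if name in model_sd and t.shape != model_sd[name].shape
}
print(mismatched)  # → {'weight': ((3, 4), (2, 4))}

# Load only the compatible parameters; strict=False tolerates the
# keys we dropped (those weights keep their random initialization).
compatible = {k: v for k, v in ckpt.items() if k not in mismatched}
model.load_state_dict(compatible, strict=False)
```

A mismatch like `[169, 4]` vs. the expected table size usually means the checkpoint was trained with a different window size or number of heads than the config being loaded, so checking the config is the next step.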
-
### Describe the bug
XFormers will fail when passed an attention mask whose last dimension (i.e., the key's sequence length) is not a multiple of 8 under bfloat16. This seems to be because xformer ne…
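A workaround sketch (my own, not from the xformers docs): pad the mask's last dimension up to the next multiple of 8 with `-inf` before handing it to the kernel, so the padded key positions are masked out of the softmax:

```python
import torch
import torch.nn.functional as F

# Key sequence length 13 is not a multiple of 8, which triggers the failure.
mask = torch.zeros(1, 4, 13, dtype=torch.bfloat16)

pad = (-mask.shape[-1]) % 8                          # 3 columns needed to reach 16
padded = F.pad(mask, (0, pad), value=float("-inf"))  # -inf ⇒ padded keys are ignored
print(padded.shape)  # → torch.Size([1, 4, 16])
```

The query/key/value tensors would need the same padding on their sequence dimension, and the extra output rows sliced off afterwards.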
-
RuntimeError: Error(s) in loading state_dict for DINO:
size mismatch for transformer.decoder.class_embed.0.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shap…
-
Hi, thanks for your work.
When I run the demo code from https://huggingface.co/lmms-lab/LLaVA-Video-72B-Qwen2 in your LLaVA-NeXT repository, I get the following errors:
```
size mismatch for vision_mode…