-
## 🚀 Feature
Modularize `nn.MultiHeadAttention` / `nn.functional.multi_head_attention_forward()` and generalize the `nn.Transformer` modules so that:
1. Novel attention functions can be use…
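A modularized design would presumably accept any attention callable with a `(q, k, v) → output` contract. As a minimal, dependency-free sketch of that contract (plain Python rather than PyTorch; the function name and signature are illustrative, not the actual `nn.MultiheadAttention` API), scaled dot-product attention itself looks like:

```python
import math

def scaled_dot_product_attention(q, k, v):
    """Toy reference implementation.

    q, k, v: lists of vectors (list[list[float]]).
    Returns one attended output vector per query.
    """
    d = len(k[0])
    outputs = []
    for qi in q:
        # Attention logits: dot(q_i, k_j) / sqrt(d)
        logits = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        # Numerically stable softmax over the logits
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        s = sum(exps)
        weights = [e / s for e in exps]
        # Output: weighted sum of the value vectors
        outputs.append([sum(w * vj[t] for w, vj in zip(weights, v))
                        for t in range(len(v[0]))])
    return outputs
```

Any drop-in replacement (sparse, linear, local attention, …) would only need to honor the same shapes, which is presumably the point of the feature request.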
-
Hi, I am getting this error. What is the cause? Is it XFORMERS?
Error occurred when executing DynamiCrafterI2V:
No operator found for `memory_efficient_attention_forward` with inputs:
que…
-
### Feature Description
This is more of an investigation issue that aims to help us understand the state-of-the-art tools and approaches other OS companies are using to empower users to self-host.…
-
### 🐛 Describe the bug
```python
def forward(self, x, H, W):
    """ Forward function.
    Args:
        x: Input feature, tensor size (B, H*W, C).
        H, W: Spatial res…
```
-
Hi,
I've encountered a few issues while using alpha-beta-CROWN for verifying the output of a simple neural network. I would greatly appreciate your assistance in resolving these concerns.
I hav…
-
I want to test `OneShot AllReduce` and `TwoShot AllReduce` separately, so I have modified the following code:
```python
class LLaMAModel(Module):
    def __init__(self, config: PretrainedConfig) -> Non…
```
-
### System Info
- `transformers` version: 4.42.4
- Platform: Linux-5.15.0-106-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.…
-
Traceback (most recent call last):
  File "demo.py", line 16, in <module>
    model = create_model(opt)
  File "/viton/Global-Flow-Local-Attention/model/__init__.py", line 32, in create_model
    instance…
-
In the forward() of TransformerDecoderLayer in SATRN.py,
https://github.com/bcaitech1/p4-fr-9-googoo/blob/f8ee504c37e57fb29eebb19d441feb18dc79c1df/networks/SATRN.py#L444
it looks like the `tgt` at this spot should be changed to `out`.
…
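The bug pattern being reported generalizes beyond SATRN: once self-attention produces `out`, later sublayers must consume `out`, not the original `tgt`, or the self-attention result is silently discarded. A toy sketch of the fix (the sublayer functions are hypothetical stand-ins, not the real SATRN code):

```python
def self_attn(x):
    # Stand-in for the self-attention sublayer
    return [v + 1 for v in x]

def cross_attn(x, memory):
    # Stand-in for the cross-attention sublayer
    return [v + m for v, m in zip(x, memory)]

def decoder_layer(tgt, memory):
    out = self_attn(tgt)
    # Bug pattern from the report: passing `tgt` here would discard the
    # self-attention result. The next sublayer must receive `out`.
    out = cross_attn(out, memory)
    return out
```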
-
## Description
I'm not able to properly display the members (structure fields) of a custom DataType, which may also be why writes are not working.
## Background Information / Repr…