-
As the title says: could you include local-attention==1.9.14 in requirements.txt?
-
### Issue description
pypots/nn/modules/reformer/local_attention.py:31: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead
Ma…
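The warning points at PyTorch's unified AMP API: the device type is now the first argument to `torch.amp.autocast`. A minimal sketch of the one-line change (shown with the `'cpu'` device type so it runs without a GPU; the same change applies with `'cuda'`):

```python
import torch

x = torch.randn(4, 4)

# Deprecated spelling:
#   with torch.cuda.amp.autocast():
#       ...
# New unified API: pass the device type as the first argument.
with torch.amp.autocast('cpu', dtype=torch.bfloat16):
    y = x @ x

print(y.dtype)  # matmul runs in bfloat16 inside the autocast region
```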
-
When attempting to use "from unsloth import FastLanguageModel", the following error pops up:
---------------------------------------------------------------------------
ImportError …
-
### System Info
```shell
optimum-habana 1.14.0.dev0
HL-SMI Version: hl-1.18.0-fw-53.1.1.1
Driver Version: 1.18.0-ee698fb
```
### Information
- [X] The off…
-
I don't know why, but whenever I set use_dora = True it always gives me this error when I train:
`RuntimeError Traceback (most recent call last)
Cell In[26], line 1
----> 1 tr…
-
### 🐛 Describe the bug
```
import os, sys
import torch
from functools import lru_cache, partial
from torch.nn.attention.flex_attention import (
_DEFAULT_SPARSE_BLOCK_SIZE,
create_bl…
-
In my opinion, the most attractive use case for certain timm encoders such as ResNet-18, which is also available in torchvision, is that timm generally allows for various additional configuration para…
-
It still takes 7 minutes to run the test script `python -u infer_audio2vid.py`.
How can I speed it up? Thanks!
```cmd
root@dsw-448852-67578dcfc6-fh9sp:/mnt/workspace/EchoMimic# python -u infer_audio2vid.py
…
-
If I'm not wrong, hypergraph convolution (HC) seems really similar to the self-attention mechanism (or, earlier, non-local networks), but with a different score function and aggregation function, right? Basically, th…
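A rough NumPy sketch of that analogy (toy shapes, not either mechanism's actual implementation): both operations reduce to multiplying node/token features by a normalized (n × n) score matrix — attention builds the scores from learned query/key similarities, while hypergraph convolution builds them from the degree-normalized incidence matrix H:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.standard_normal((n, d))  # node/token features

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Self-attention: scores from learned query/key dot products.
Wq = rng.standard_normal((d, d))
Wk = rng.standard_normal((d, d))
A_attn = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))  # (n, n), rows sum to 1
out_attn = A_attn @ X

# Hypergraph convolution: scores from a fixed incidence matrix H
# (n nodes x m hyperedges), normalized by node and hyperedge degrees.
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
Dv = np.diag(1.0 / np.sqrt(H.sum(axis=1)))  # node degree normalizer
De = np.diag(1.0 / H.sum(axis=0))           # hyperedge degree normalizer
A_hgc = Dv @ H @ De @ H.T @ Dv              # (n, n) aggregation weights
out_hgc = A_hgc @ X

# Both are "normalized score matrix @ features"; only the score
# construction differs (learned similarity vs. fixed incidence structure).
print(out_attn.shape, out_hgc.shape)  # (5, 8) (5, 8)
```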
-
### 🐛 Describe the bug
Hi, I was testing FlexAttention by comparing its output with that of `nn.MultiheadAttention` and `torch.nn.functional.scaled_dot_product_attention`. In the end, I tracked down …