-
I encountered an error when using torch hub:
I think replacing `utils.pad_input` with `utils.get_padding` would fix this error.
-
### Search before asking
- [X] I have searched the HUB [issues](https://github.com/ultralytics/hub/issues) and found no similar bug report.
### HUB Component
Models, Training
### Bug
I trained m…
-
### System Info
Two versions of transformers:
========= NEW VERSION ==============
- `transformers` version: 4.46.1
- Platform: Linux-5.15.0-1044-nvidia-x86_64-with-glibc2.35
- Python version: …
-
I'm using the example code. When running
```
predictor = torch.hub.load("Stable-X/StableNormal", "StableNormal", trust_repo=True)
```
It raises:
huggingface_hub.errors.HFValidationError: Repo i…
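If it helps with triage, a small diagnostic sketch: the exception comes from huggingface_hub's repo-id validation, so running that check directly on the id string can confirm whether the value passed down by the hub entry point is well formed. The exact strings being validated here are assumptions.

```python
# Diagnostic sketch: reproduce huggingface_hub's repo-id check in isolation.
# The id strings below are assumptions; substitute whatever the entry point
# actually passes when the error is raised.
from huggingface_hub.utils import validate_repo_id

validate_repo_id("Stable-X/StableNormal")  # a well-formed "owner/name" id passes silently
# validate_repo_id("Stable-X/StableNormal/some/extra/path")  # would raise HFValidationError
```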
-
Running in a container.
Using the GPU raises a CUDA out-of-memory error (log below).
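For what it's worth, a minimal sketch of the allocator setting that PyTorch's out-of-memory message usually points to; whether it frees enough memory for this particular container setup is an assumption.

```python
# Minimal sketch: enable PyTorch's expandable-segments allocator to reduce
# fragmentation-related OOMs. The variable must be set before CUDA initializes;
# whether this is enough in this container setup is an assumption.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch
print(torch.cuda.is_available())  # CUDA initializes here, after the setting takes effect
```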
```shell
ERROR: Model running Error: CUDA out of memory. Tried to allocate 2.37 GiB. GPU 0 has a total capacty of 23.69 GiB of which 2.03 GiB is fr…
-
### Describe the bug
I tried to run the THUDM/CogVideoX1.5-5B model using Diffusers from git (20th Nov, approx. 8:30am GMT).
The script failed with:
```
hidden_states = F.scaled_dot_product_attent…
-
### System Info
I'm using `facebook/m2m100_418M` translation model.
Since version 4.46.0 it downloads another model that weighs ~2 GB.
I'm using Python 3.11 on Ubuntu.
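For reference, a minimal loading sketch to help pin down which weight files are being pulled; `use_safetensors` is a standard `from_pretrained` argument, but whether it avoids the extra ~2 GB download on 4.46.x is an assumption.

```python
# Minimal sketch: load the translation model while requesting only the
# safetensors weights. Whether this avoids the extra ~2 GB download seen
# on transformers 4.46.x is an assumption.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained(
    "facebook/m2m100_418M",
    use_safetensors=True,  # prefer the .safetensors weight file if the repo provides one
)
```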
### Who can help?
…
-
When running hubconf.py at this line `model = torch.hub.load('yvanyin/metric3d', 'metric3d_vit_small', pretrain=True)`, it returns a runtime error.
```
Using cache found in C:\Users\xxxxx/.cache\…
-
The SD3 flux branch is broken when doing a fresh install @kohya-ss @bmaltais
```
Collecting wcwidth>=0.2.5
Using cached wcwidth-0.2.13-py2.py3-none-any.whl (34 kB)
INFO: pip is looking at multiple …
-
### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …