-
Hello AnFreTh,
Thank you for your work on this project. I am currently using Mambular to process tabular data, but I am experiencing very slow training speeds. On average, each epoch is taking arou…
-
**Describe the bug**
I encountered an `AttributeError` when trying to import `se_extractor` from the `openvoice` package in Google Colab. The error message indicates that the `huggingface_hub.constan…
-
Hello,
Currently, whenever the Django server starts, django-vectordb loads the embedding model into memory. Therefore, if I spawn multiple Django processes, the model will be duplicated across m…
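One common way to avoid paying the load cost at startup is to defer it until the model is first needed. A minimal, self-contained sketch (the helper names here are hypothetical — django-vectordb's actual internals may differ, and the real loader would construct the embedding model rather than a dummy object):

```python
import threading

_model = None
_lock = threading.Lock()

def _load_embedding_model():
    # Placeholder for the expensive load (e.g. instantiating a
    # sentence-transformers model); a dummy object keeps this sketch runnable.
    return object()

def get_embedding_model():
    """Load the embedding model on first use instead of at server startup."""
    global _model
    if _model is None:
        with _lock:
            # Double-checked locking: re-test after acquiring the lock so
            # only one thread performs the load.
            if _model is None:
                _model = _load_embedding_model()
    return _model
```

With lazy loading, a worker process that never serves a vector query never pays the memory cost, though each process that does use the model still holds its own copy.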
-
### System Info
transformers-cli env
- `transformers` version: 4.24.0
- Platform: Linux-5.4.0-99-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.10.1
- Py…
-
```
Traceback (most recent call last):
  File "D:\sl\q\launch.py", line 39, in <module>
    main()
  File "D:\sl\q\launch.py", line 35, in main
    start()
  File "D:\sl\q\modules\launch_utils.py", li…
```
-
### Describe the bug
At the moment one cannot use a **local** Flux1-dev or -schnell file for lack of an essential function "from_single_file()" in the diffusers.FluxPipeline library. The provided fla…
-
## Proposed refactoring or deprecation
Deprecate `log_every_n_steps` from Trainer and make it available as a parameter to loggers that have this capability.
### Motivation
Same reasons as #89…
-
### System Info
```shell
transformers==4.42.4
torch==2.4.0+cpu
onnx==1.16.2
onnxruntime==1.18.1
optimum==1.21.2
ubuntu-22.04
```
### Who can help?
@JingyaHuang @echar
### Informa…
-
# ❓ Questions and Help
I ran https://github.com/facebookresearch/xformers/blob/main/xformers/benchmarks/benchmark_transformer.py, but it is slower than expected.
My env: PyTorch 2.2.1, CUDA 12.1.1
![runtime](https:/…
-
```
% pip --version
pip 23.2.1
% python3 --version
Python 3.9.16
% python qwen_cpp/convert.py -i /Volumes/data/huggingface/Qwen-14B-Chat -t q4_0 -o qwen7b-ggml.bin
Traceback (most recent c…