-
I am using the diffusers library with Flux-dev and Flux-schnell. I got the following script from [here](https://gist.github.com/sayakpaul/23862a2e7f5ab73dfdcc513751289bea) and modified it a bit. Are t…
-
Would you support this?
2024/8/29: By adding pipe.enable_sequential_cpu_offload() and pipe.vae.enable_slicing() to the inference code of CogVideoX-5B, VRAM usage can be reduced to 5GB. Please check t…
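For reference, a minimal sketch of what that low-VRAM setup might look like with the diffusers `CogVideoXPipeline`; the prompt and sampling parameters below are placeholders, not values from the original post:

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# Stream weights to the GPU one submodule at a time instead of keeping the whole model resident.
pipe.enable_sequential_cpu_offload()
# Decode latents in slices to cut the VAE's peak memory.
pipe.vae.enable_slicing()

frames = pipe(
    prompt="A panda playing a guitar in a bamboo forest",  # placeholder prompt
    num_frames=49,
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]
export_to_video(frames, "output.mp4", fps=8)
```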
-
I have an RTX 3060 and an RTX 4070 in my system, both 12 GB.
Since the X server runs on my RTX 4070, I only have about 11 GB of VRAM there, so with the X server running I can run the single-GPU script on the proj…
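If it helps anyone with a similar setup, one way to make sure the script lands on the GPU that is not driving the display is to restrict device visibility before importing torch; the device index below is an assumption and should be checked against `nvidia-smi`:

```python
import os

# Assumption: the headless RTX 3060 is CUDA device 1 on this machine;
# check `nvidia-smi` and adjust. This must be set before torch is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch
print(torch.cuda.get_device_name(0))  # should now report the RTX 3060
```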
-
Hi, I have tried to generate 5 s of video at 768p and 12 frames/s with my P40 (24 GB VRAM) and the Gradio interface on Windows, but I get this error message. I have activated cpu_offloading and I have mo…
-
With only 16 GB of VRAM, an OOM error occurs at the VAE encoding stage when using a resolution of 1024.
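One mitigation that may help here, assuming a diffusers pipeline whose VAE exposes these methods, is to let the autoencoder work in slices and tiles; the FLUX.1-dev VAE below is only a stand-in for whichever autoencoder the script actually loads:

```python
import torch
from diffusers import AutoencoderKL

# Stand-in VAE; swap in the autoencoder your pipeline actually uses.
vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=torch.bfloat16
).to("cuda")
vae.enable_slicing()  # process the batch one sample at a time
vae.enable_tiling()   # split large inputs into tiles during encode/decode

image = torch.randn(1, 3, 1024, 1024, dtype=torch.bfloat16, device="cuda")
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()
```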
-
`black-forest-labs/FLUX.1-dev` runs very slowly: it takes about 15 minutes to generate a 1344x768 (w×h) image. Has anyone experienced the same, or is it just me?
```python
pipe = FluxPipeline.from_pretr…
```
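For comparison, 15 minutes for a single 1344x768 image usually points at the transformer running on the CPU (e.g., sequential offload, or the pipeline never being moved to the GPU) rather than normal GPU-bound sampling. A minimal fully-on-GPU sketch, assuming a card with enough VRAM for FLUX.1-dev in bf16; the prompt and step count are placeholders:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")  # keep every component on the GPU; offloading trades speed for VRAM

image = pipe(
    "a misty forest at dawn",  # placeholder prompt
    width=1344,
    height=768,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_dev.png")
```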
-
### System Info / 系統信息
Windows 11 Pro, WSL2 Ubuntu 22.04.4 LTS
CUDA 12
RTX 4090
### Information / 问题信息
- [X] The official example scripts / 官方的示例脚本
- [X] My own modified scripts / 我自己修改的脚本和任务
### Rep…
-
Tasks that have been identified and scheduled:
+ Fine-tuning support for Diffusers version models
+ Adaptation for CPU / NPU inference frameworks (e.g., Huawei, Intel devices)
+ ComfyUI adaptat…
-
### Describe the bug
Sequential offloading doesn't work when using `pytest`, but does seem to work outside of tests.
This is an issue, because we can't properly test sequential offloading on Stabl…
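A hypothetical minimal repro of what such a test might look like (the tiny checkpoint and test body are placeholders, not the actual diffusers test suite):
```python
import pytest
import torch
from diffusers import StableDiffusionPipeline

@pytest.mark.skipif(not torch.cuda.is_available(), reason="needs CUDA")
def test_sequential_cpu_offload_runs():
    # Tiny test checkpoint used here only to keep the repro fast.
    pipe = StableDiffusionPipeline.from_pretrained(
        "hf-internal-testing/tiny-stable-diffusion-torch"
    )
    pipe.enable_sequential_cpu_offload()
    out = pipe("a cat", num_inference_steps=2, output_type="np")
    assert out.images.shape[0] == 1
```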
-
[You posted on Reddit](https://www.reddit.com/r/MachineLearning/comments/l4rnfv/p_why_are_stacked_autoencoders_still_a_thing/).
I think this is very cool.
In the Reddit post you ask if you misse…