-
I'm trying to quantize some Flux models to reduce VRAM requirements, and I get the error below.
```
(venv) C:\AI\llama.cpp\build>bin\Debug\llama-quantize.exe "C:\AI\ComfyUI_windows_portable\ComfyUI\models\chec…
```
-
#### Description:
Hi there! 👋
I’m interested in exploring the compatibility between `flux-fp8-api` and **[OpenFLUX.1](https://huggingface.co/ostris/OpenFLUX.1)**. Specifically, I’m curious if th…
-
Hi, thanks for your work on CatVTON.
Is it possible to train the CatVTON architecture with the Flux model? I think Flux could enhance the quality for try-on tasks.
-
@cg123 Can it support architectures such as Stable Diffusion XL and Flux dev?
-
### Name and Version
bitnami/flux 2.3.20
### What architecture are you using?
None
### What steps will reproduce the bug?
1. Pre-render the helm chart with `helm template fluxcd bitnami/flux --so…
-
### Is there an existing issue for this problem?
- [X] I have searched the existing issues
### Operating system
Windows
### GPU vendor
Nvidia (CUDA)
### GPU model
4090 Mobile
### GPU VRAM
16G…
-
Wondering if you'd be interested in making a version of this that works with FLUX?
-
**Target:** Measure the scalability of FLUX.1 on NVIDIA Hopper architecture (both H100 & H200) using different model parallelism strategies (see [Flux.1 Performance Overview](https://github.com/xdit-p…
-
Torch has support for float8 matmul kernels, and they seem to be faster than bf16 on Ada and newer architectures. TorchAO supports training in fp8. This has been explored in a few newer optimiz…
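For a bit of context on the precision side, here is a minimal pure-Python sketch of how values round under e4m3 (the fp8 format commonly used for weights/activations). This is just an illustration, not torch's actual kernel path: it is simplified (saturating at ±448, no NaN encoding), and the helper name `quantize_e4m3` is made up for this example.

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest fp8 e4m3 value (simplified sketch:
    saturates at +/-448, no NaN handling)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    # e4m3: exponent bias 7, 3 mantissa bits, max finite value 448
    e = math.floor(math.log2(mag))
    e = max(min(e, 8), -6)        # clamp to the representable exponent range
    step = 2.0 ** (e - 3)         # spacing between values in this binade
    q = round(mag / step) * step  # round to the nearest grid point
    return sign * min(q, 448.0)

# e4m3 keeps only ~2 decimal digits of precision:
print(quantize_e4m3(3.3))     # rounds to 3.25
print(quantize_e4m3(1000.0))  # saturates at 448.0
```

The coarse spacing is why fp8 training recipes (including TorchAO's) pair the low-precision matmuls with per-tensor or per-row scaling factors: the scale maps the live value range into the narrow span e4m3 can actually represent.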