-
Hi guys,
I'm trying [XLabs-AI/flux-controlnet-canny-v3](https://huggingface.co/XLabs-AI/flux-controlnet-canny-v3/blob/main/flux-canny-controlnet-v3.safetensors) with stable-diffusion.cpp and run in…
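For context, a canny controlnet conditions on an edge map rather than the raw image, so the control input has to be preprocessed first (typically with `cv2.Canny`). Below is a minimal, dependency-light sketch of producing such a map — it uses a plain Sobel gradient threshold as a stand-in for full Canny (no non-maximum suppression or hysteresis), so treat it as an illustration of the input format, not the exact preprocessing the XLabs model was trained with:

```python
import numpy as np

def sobel_edge_map(gray: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Crude edge map: Sobel gradient magnitude, thresholded to {0, 255}.

    A stand-in for cv2.Canny. `gray` is a 2-D float array in [0, 1].
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = gray.shape
    pad = np.pad(gray, 1, mode="edge")
    gx = np.zeros((h, w), dtype=np.float64)
    gy = np.zeros((h, w), dtype=np.float64)
    # Correlate with the two Sobel kernels by shifting padded windows.
    for i in range(3):
        for j in range(3):
            window = pad[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    mag = np.hypot(gx, gy)
    mag /= mag.max() or 1.0
    return np.where(mag > threshold, 255, 0).astype(np.uint8)

# Toy input: a white square on black -> edges appear along the border.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = sobel_edge_map(img)
```

The resulting single-channel 0/255 image is the kind of control image a canny controlnet expects.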
-
Greetings!
Thanks for sharing the code.
I want to train it on my own dataset to adapt it to downstream tasks. How should I do that?
-
I am currently working on full SDXL fine-tuning but have encountered challenges in finding the best code due to conflicting information from various sources. Previously, I used the GitHub repository […
-
I tried using the run_example.sh script on some random wav files other than IS1009a.wav, and it results in the same speaker label (spk0) for all the time segments. Are there any specific conversion methods I…
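One common cause of degenerate diarization output is input format: many diarization pipelines assume 16 kHz mono 16-bit PCM (an assumption here — check this repo's docs for its actual requirement). A minimal stdlib-only conversion sketch, downmixing to mono and resampling by linear interpolation:

```python
import struct
import wave

def to_16k_mono(src_path: str, dst_path: str, target_rate: int = 16000) -> None:
    """Convert a 16-bit PCM wav to mono at `target_rate` Hz."""
    with wave.open(src_path, "rb") as src:
        n_ch = src.getnchannels()
        rate = src.getframerate()
        assert src.getsampwidth() == 2, "expects 16-bit PCM input"
        frames = src.readframes(src.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    # Downmix: average the interleaved channels into one stream.
    mono = [sum(samples[i:i + n_ch]) // n_ch for i in range(0, len(samples), n_ch)]
    # Resample by linear interpolation between neighbouring input samples.
    n_out = int(len(mono) * target_rate / rate)
    out = []
    for k in range(n_out):
        pos = k * rate / target_rate
        i = int(pos)
        frac = pos - i
        a = mono[i]
        b = mono[min(i + 1, len(mono) - 1)]
        out.append(int(round(a + (b - a) * frac)))
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(target_rate)
        dst.writeframes(struct.pack("<%dh" % len(out), *out))
```

For real use, `ffmpeg -i in.wav -ar 16000 -ac 1 out.wav` does the same with a proper resampling filter.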
-
Hi there, I love Krita Diffusion, but for a while now I've had this issue where the live painting process fails after a while. I didn't initially have this problem, but now I can't seem to get it …
-
@tomaarsen
Just wanted to know if CLIP (text + image) embedding models will have an ONNX quantized model? I tried finding one everywhere but had no luck. If it exists, can you please point me to it?…
-
### Describe the bug
`transformers` added `sdpa` and FA2 for CLIP model in https://github.com/huggingface/transformers/pull/31940. It now initializes the vision model like https://github.com/huggingf…
-
Yep, we need that; hope that medium will also work.
-
When I load the Q3 t5xxl and CLIP models, the cpp reports them as f16 and the VAE as f32. These are wrong, and they cause Termux to crash. The Flux model is reported correctly: if I use Flux Q2, it shows Q2. Please fix.
Another issue: th…
-
### Model description
Do we support Model2Vec embedding models?
E.g: https://huggingface.co/minishlab/potion-base-8M
https://minishlab.github.io/tokenlearn_blogpost/
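For context, Model2Vec models such as potion-base-8M are static embeddings: encoding is a token-embedding lookup plus mean pooling, with no transformer forward pass. A toy numpy sketch of that idea (the vocabulary and vectors here are made up, not the real potion-base-8M weights):

```python
import numpy as np

# Made-up static token table standing in for a Model2Vec checkpoint.
vocab = {"the": 0, "quick": 1, "brown": 2, "fox": 3, "[UNK]": 4}
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8)).astype(np.float32)

def encode(sentence: str) -> np.ndarray:
    """Static-embedding encode: look up each token, mean-pool the vectors."""
    ids = [vocab.get(tok, vocab["[UNK]"]) for tok in sentence.lower().split()]
    return embeddings[ids].mean(axis=0)

vec = encode("The quick brown fox")
```

With the real library this corresponds roughly to `StaticModel.from_pretrained("minishlab/potion-base-8M").encode([...])` — check the model2vec README for the current API.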
### Open source status
- […