-
@davidecaroselli @nicolabertoldi
1. Back-up your existing engines so that you can copy them back later.
2. Reinstall Ubuntu 18.04 from a USB stick. Note that Ubuntu will not recognize your graphi…
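The back-up step can be sketched as a tar archive of the engines directory. The `$HOME/mmt/engines` path below is an assumption, not the documented ModernMT layout; point it at wherever your engines actually live before wiping the disk.

```shell
# Archive an engines directory so it can be restored after the reinstall.
# Usage: backup_engines <engines_dir> <output_tarball>
backup_engines() {
  local src="$1" dest="$2"
  # -C keeps paths in the archive relative to the engines dir's parent
  tar -czf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
}

# Example (assumed path -- adjust to your install):
# backup_engines "$HOME/mmt/engines" "$HOME/engines-backup.tar.gz"
```

Copy the tarball to external storage before booting the installer; after the reinstall, `tar -xzf` it back into place.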
-
### Describe the bug
I was trying to run PixArt training of 512x512 model following your tutorial, but got this error:
`{'clip_sample', 'clip_sample_range'} was not found in config. Values will be…
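That message is a warning rather than an error: the saved scheduler config predates the `clip_sample`/`clip_sample_range` fields, so diffusers falls back to their defaults. A pure-Python sketch of the pattern (the `init_from_config` helper is illustrative, not the actual diffusers implementation):

```python
import warnings

def init_from_config(config: dict, expected_defaults: dict) -> dict:
    """Fill missing expected keys with defaults, warning like diffusers does."""
    missing = set(expected_defaults) - set(config)
    if missing:
        warnings.warn(
            f"{missing} was not found in config. "
            "Values will be initialized to default values."
        )
    # explicit config values win; missing keys get the library defaults
    return {**expected_defaults, **config}
```

If the defaults are acceptable for your training run, the warning can be ignored; otherwise pass the values explicitly when constructing the scheduler.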
kopyl updated 3 months ago
-
It would be nice if you could add the latency results to the README as well. I am planning to use this for an industry application, but before experimenting, I'd like to know whether it's even a feas…
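In the meantime, latency is straightforward to measure on your own hardware; a minimal sketch with warm-up runs and percentile stats (the `measure_latency` helper is illustrative):

```python
import statistics
import time

def measure_latency(fn, warmup: int = 3, runs: int = 20) -> dict:
    """Time fn() repeatedly and report latency stats in milliseconds."""
    # warm-up iterations are excluded (JIT compilation, cache effects)
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)  # ms
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "mean_ms": statistics.fmean(samples),
    }
```

Reporting p50/p95 rather than a single mean gives a much better picture of whether an interactive application is feasible.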
-
I think it would be useful to document which libraries (and any other requirements) are needed to run wgpu on cloud instances whose GPUs have no physical display output.
This is only for Linux…
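As a rough sketch for Ubuntu/Debian cloud images (package names are assumptions; verify against your distro), wgpu's Vulkan backend needs the Vulkan loader plus an installable client driver, but no X11/Wayland server:

```shell
# Vulkan loader + Mesa ICDs (includes lavapipe for pure-CPU fallback),
# plus vulkaninfo for a quick sanity check. No display server required.
sudo apt-get update
sudo apt-get install -y libvulkan1 mesa-vulkan-drivers vulkan-tools

# If a Vulkan-capable device is enumerated, wgpu should be able to use it.
vulkaninfo
```

For NVIDIA instances, the vendor driver ships its own ICD, so `mesa-vulkan-drivers` may be unnecessary there; the loader package is still needed.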
-
When I executed the bash script below on multiple GPUs,
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATASET_NAME="lambdalabs/pokemon-blip-captions"
accelerate launch --mix…
```
-
### Describe the bug
Using `--prediction_type="v_prediction"` with the example `text_to_image_lora.py` script leads to very weird images:
![image](https://github.com/huggingface/diffusers/assets/3…
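One common cause of this symptom is a target mismatch: the base checkpoint was trained to predict epsilon, so sampling or fine-tuning with `v_prediction` produces garbage. For reference, the v-prediction target (Salimans & Ho, "Progressive Distillation for Fast Sampling of Diffusion Models") is not the noise itself; a scalar sketch:

```python
import math

def v_prediction_target(x0: float, noise: float, alpha_bar_t: float) -> float:
    """v-prediction target: v = sqrt(abar_t) * eps - sqrt(1 - abar_t) * x0.

    Scalars here for clarity; in training these are image-shaped tensors
    and alpha_bar_t comes from the scheduler's cumulative alpha schedule.
    """
    a = math.sqrt(alpha_bar_t)          # signal coefficient
    s = math.sqrt(1.0 - alpha_bar_t)    # noise coefficient
    return a * noise - s * x0
```

At t → 0 (alpha_bar ≈ 1) the target is pure noise, and at large t it is −x0, so a model trained on epsilon targets is systematically wrong under v-prediction.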
-
Great work! I see the paper says `Therefore, we have opted to utilize the Stable Diffusion 2 v-prediction model as our base model for fine-tuning`, but the code uses the sample __call__ function with …
-
Hey. Is there any chance you have this dataset cached locally and can send it to me?
https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus
I'm going to train miniSDXL (l…
kopyl updated 9 months ago
-
mat1 and mat2 shapes cannot be multiplied (2x1024 and 768x320)
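This error usually means the text-encoder hidden size does not match the UNet's cross-attention dimension, e.g. 1024-dim OpenCLIP embeddings (SD 2.x) fed into a 768-dim SD 1.x UNet. A pure-Python sketch of the shape check PyTorch is performing (the helper is illustrative):

```python
def check_matmul(shape_a: tuple, shape_b: tuple) -> tuple:
    """Mimic PyTorch's matmul shape check: inner dimensions must agree."""
    if shape_a[-1] != shape_b[0]:
        raise ValueError(
            f"mat1 and mat2 shapes cannot be multiplied "
            f"({shape_a[0]}x{shape_a[1]} and {shape_b[0]}x{shape_b[1]})"
        )
    # result shape of (m, k) @ (k, n) is (m, n)
    return (shape_a[0], shape_b[1])
```

The fix is to load a text encoder and UNet from the same model family so the embedding width matches the cross-attention projection.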
-
### Describe the bug
I ran into this bug: https://github.com/huggingface/diffusers/issues/5897
so I used this for the training: https://github.com/huggingface/diffusers/blob/1477865e4838d887bb93750d…
kopyl updated 7 months ago