-
If you run the example workflow on macOS/OSX, you will get a 'Torch not compiled with CUDA enabled' error.
To fix this, open nodes.py inside the custom_nodes/ComfyUI-AnimateAnyone-Evolved folder.
Replac…
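For reference (the exact replacement is cut off above), a common way to avoid this class of error on macOS is to pick the device at runtime instead of hard-coding CUDA, falling back to Apple's MPS backend or the CPU. A minimal sketch, assuming the offending code does something like `.to("cuda")`:

```python
import torch

def pick_device() -> torch.device:
    # Prefer CUDA, then Apple's MPS backend on macOS, then plain CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
# model = model.to(device)    # instead of model.to("cuda")
# tensor = tensor.to(device)  # instead of tensor.cuda()
```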
-
Thanks for your code, but why is the image I generated completely black? The answer is important to me. Looking forward to your reply. The following is my code:
```
import torch
from diffusers import Dif…
```
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
I generated a 27x4x15 x/y/z grid of 512x640 images …
-
source:
https://github.com/huggingface/diffusers/blob/main/examples/research_projects/diffusion_dpo
-
Hi,
Thank you for your excellent code and detailed documentation on how to incorporate DPM-Solver into our own projects!
I tried to substitute DDIM with DPM-Solver but failed to obtain comparable resu…
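(For anyone attempting the same comparison with the diffusers library, which bundles a DPM-Solver++ implementation, a minimal sketch of swapping the scheduler follows; the model name, prompt, and step counts are illustrative assumptions, not the original setup.)

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "a photo of an astronaut riding a horse"

# Baseline: DDIM, typically run with ~50 steps.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
ddim_image = pipe(prompt, num_inference_steps=50).images[0]

# DPM-Solver++ (multistep) usually reaches comparable quality in ~20-25 steps.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
dpm_image = pipe(prompt, num_inference_steps=25).images[0]
```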
-
Hi, I'm not at your level and was wondering how I could add paint-with-words to my MultiControlNet pipeline. Here's code that works, for example (partial):
```
controlnet = [
ControlNetModel…
-
**Describe the bug**
I tried converting [epicphotogasm_lastUnicorn](https://civitai.com/models/132632/epicphotogasm) at 768x768 or 1024x1024 and the conversion fails. The model converted successfu…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
Cannot generate images. Not sure what changed since a few…
-
Currently not everything works with half precision.
The TextEncoder and VAE-Decoder work fine.
The UNet results in `nan`s.
The problem occurs here:
`MIGRAPHX_TRACE_EVAL=2 /code/AMDMIGraphX/build/b…
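(As a cross-check outside of MIGraphX, a minimal PyTorch-side sketch that runs only the UNet in fp16 and tests its output for `nan`s; the Stable Diffusion 1.5 checkpoint and tensor shapes are assumptions, not taken from this report.)

```python
import torch
from diffusers import UNet2DConditionModel

device = "cuda"  # ROCm builds of PyTorch also expose AMD GPUs as "cuda"
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
).to(device)

latents = torch.randn(1, 4, 64, 64, dtype=torch.float16, device=device)
timestep = torch.tensor([500], device=device)
# CLIP text embeddings for SD 1.5 have shape (batch, 77, 768).
text_embeddings = torch.randn(1, 77, 768, dtype=torch.float16, device=device)

with torch.no_grad():
    noise_pred = unet(latents, timestep, encoder_hidden_states=text_embeddings).sample

print("NaNs in fp16 UNet output:", torch.isnan(noise_pred).any().item())
```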
-
Hello, this is great work! I want to ask about the specific details of this version of the training:
1). Is the dataset filtered from LAION-2B and COYO-700M, like face plus?
2). Start training fr…