-
How do I call a local SD model?
I made a folder 'model', placed an SD model in it,
and ran this command - python sd_scroll.py "fantasy mushroom forest|peaceful landscape" -s 40 -d H -m model/dreamshaper_8.saf…
-
When running the Colab for Stable Diffusion on Premium and High RAM, I get the following errors:
![FEA8DECD-0A01-4D9C-98CF-6E43DF265DAB](https://user-images.githubusercontent.com/66912510/204038184…
-
If I understand correctly, all the weights of the CLIP text encoder are optimized, which naturally has some non-negligible computational cost.
Why was this chosen as opposed to just training part o…
-
### Model/Pipeline/Scheduler description
This work aims to learn a high-quality text-to-video (T2V) generative model by leveraging a pre-trained text-to-image (T2I) model as a basis. It is a highly…
-
When using the Replicate API with my own images, I hit a memory error. It occurs because the app does not resize the image to the size the model requires. If the images are too …
-
Hunyuan-DiT is a new image generation AI. Benchmarks show that it exceeds SD3 overall.
However, the model is relatively complex and uses a lot of VRAM for training. So I thought it would be nice to b…
-
When will the Stage B training code be released?
Thanks!!
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Many modern processors have bfloat16 suppor…
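The runtime side of such a request is easy to probe. A minimal sketch, assuming PyTorch is the backend in question, showing how to check where bfloat16 is usable before enabling it:

```python
import torch

def bf16_status() -> dict:
    """Report where bfloat16 is usable in the current PyTorch setup.

    CPU bf16 tensors work on any recent PyTorch build; *fast* CPU kernels
    additionally need hardware support (e.g. AVX512-BF16 or ARM BF16).
    """
    status = {
        # Creating a CPU bf16 tensor succeeds on all recent builds.
        "cpu_tensor_ok": torch.tensor([1.0], dtype=torch.bfloat16).dtype is torch.bfloat16,
        "cuda_available": torch.cuda.is_available(),
    }
    if status["cuda_available"]:
        # Ampere (sm_80) and newer GPUs support bf16 natively.
        status["cuda_bf16"] = torch.cuda.is_bf16_supported()
    return status
```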
-
**Describe the bug**
When setting up Diffusion Toolkit and adding the Outdir that contains the generations, the toolkit finds only 24 images out of several thousand. The InvokeAI outdir folder structure ha…
-
I'm getting this traceback with errors when running the CLIP captioning:
```
Traceback (most recent call last):
  File "C:\Automatic1111\extensions\sd_smartprocess\smartprocess.py", line 360, in …
```