-
The inference speed of those models is almost instant.
Adding a small image to webpages could add to the immersion at a small cost in generation time. Can this be done, maybe as an option?
-
FYI: SDXL Turbo and SDXL share the same architecture.
Thanks in advance.
-
I'm trying to switch from AutoencoderKL to AutoencoderTiny in demo_txt2img_xl with the Turbo model. After some attempts at changing [models.py](https://github.com/NVIDIA/TensorRT/blob/release/10.0/demo/Di…
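For reference, a rough sketch of the same swap in plain diffusers rather than the TensorRT demo (`madebyollin/taesdxl` is assumed here as the Tiny VAE checkpoint for SDXL); wiring it into models.py for engine building is the open part:

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderTiny

# Load SDXL Turbo, then swap the default AutoencoderKL for the Tiny VAE.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl", torch_dtype=torch.float16
)
pipe.to("cuda")

# Usual Turbo settings: one step, no classifier-free guidance.
image = pipe(
    "a photo of a cat", num_inference_steps=1, guidance_scale=0.0
).images[0]
image.save("cat.png")
```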
-
Hi,
I am wondering if there are any known problems with using stable-fast on few-step generation models such as [SDXL Lightning](https://huggingface.co/ByteDance/SDXL-Lightning)? My experience is …
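For context, a minimal sketch of how stable-fast is typically attached to an SDXL pipeline; the Lightning UNet/LoRA loading is omitted, and the config flags are just the common ones from stable-fast's examples:

```python
import torch
from diffusers import StableDiffusionXLPipeline
from sfast.compilers.diffusion_pipeline_compiler import compile, CompilationConfig

# Base SDXL pipeline; the SDXL Lightning UNet or LoRA would be loaded
# into it here (omitted, since the exact checkpoint file varies).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Typical stable-fast setup: enable the common backends and compile the pipeline.
config = CompilationConfig.Default()
config.enable_xformers = True
config.enable_triton = True
config.enable_cuda_graph = True
pipe = compile(pipe, config)

# Few-step generation, e.g. 4 steps without CFG as in the Lightning examples.
image = pipe(
    "a photo of a lighthouse", num_inference_steps=4, guidance_scale=0.0
).images[0]
```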
-
I ran into numerous problems getting this installed.
First, I think your documentation left out creating the `models` folder. I found this in sd3_infer.py:
# NOTE: Must have folder `models` wit…
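As a workaround, creating the folder manually before running the script should help (a tiny sketch; the exact layout sd3_infer.py expects is cut off above):

```python
from pathlib import Path

# Create the `models` folder sd3_infer.py expects before placing the checkpoints.
Path("models").mkdir(exist_ok=True)
```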
-
### Checklist
- [X] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …
-
I use `SD_METAL=ON` to build `sd.cpp` on an M1. However, the generated images seem to be blurred or tend toward a cartoon-like style. This happens with all models (v2, SDXL, SDXL Turbo). This problem o…
-
Hey, SDXL Turbo runs fantastic!! But is there a way to change the image resolution before generation?
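Resolution is a per-call argument in diffusers, so something like the sketch below should work (assuming the stabilityai/sdxl-turbo checkpoint; Turbo is trained around 512x512, so sizes far from that may hurt quality):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# width/height set the output resolution; both must be multiples of 8.
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=1,
    guidance_scale=0.0,
    width=768,
    height=512,
).images[0]
image.save("astronaut_768x512.png")
```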
-
### Describe the bug
Based on #6368, I attempted the code provided for pruning the unsupported layers, but it actually prunes the entire dictionary.
### Reproduction
```
from requests import get
…
```
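Without the full snippet it's hard to say where it goes wrong, but for comparison, a hypothetical key-prefix filter that keeps matching entries instead of dropping everything could look like this (the prefix names are made up for illustration):

```python
# Illustrative only: prune a state dict by keeping keys with supported prefixes.
SUPPORTED_PREFIXES = ("unet.", "vae.", "text_encoder.")  # hypothetical names

def prune_state_dict(state_dict):
    # Keep entries whose key starts with one of the supported prefixes;
    # everything else (the "unsupported layers") is dropped.
    return {
        key: value
        for key, value in state_dict.items()
        if key.startswith(SUPPORTED_PREFIXES)
    }
```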
-
I'm getting this error:
----------------[start]------------------
positive_prompt: a photo of an astronaut riding a horse on mars
SDXL turbo doesn't support negative_prompts
output_png_path: as…