-
I am following the instructions here for compiling SDXL Turbo: https://huggingface.co/docs/optimum-neuron/tutorials/stable_diffusion#stable-diffusion-xl-turbo, i.e. running
```
optimum-cli export…
```
-
The code example in the readme for using SegMoE with SDXL-Turbo appears to be slightly wrong. It imports the pipeline correctly, but then uses `SegMoETurboPipeline` out of nowhere. I tried to import t…
-
### Issue Description
I have been using SDXL Turbo on main with no issue, but currently I can't use the model on dev. The version platform description below might show that it's using the original ba…
-
I've got an issue: I am getting the painting as the output image again.
I used the "TurboVisionXL" model; the LoRA is "latent-consistency/lcm-lora-sdxl".
Please help me.
Also provide us a wo…
-
### What happened?
As the HIP runtime driver is built and deployed with the IREE pip packages, and is generally slated to be the canonical HAL driver for ROCm devices, I have noticed (alongside the kno…
-
### Processor
M1 Pro (or later)
### Memory
16GB
### What happened?
Trying to use a [SDXL Turbo model (dreamshaper-xl-turbo)](https://huggingface.co/Lykon/dreamshaper-xl-turbo) that I converted to…
-
**Describe the bug**
Previously, I was able to run this file successfully, but now I found that the calibration dataset (https://huggingface.co/datasets/laion/laion2B-en-aesthetic) we need to down…
-
SDXL Turbo took 3 minutes to generate an image. I was using [krita](https://github.com/Acly/krita-ai-diffusion) with a ComfyUI backend on an RTX 2070, and I was using about 5.3 GB of VRAM in the generat…
-
It was [said in the original repo](https://github.com/Fanghua-Yu/SUPIR/issues/38), and you also thought it was the case, that it is possible to get it running within 12 GB of VRAM, but I just can't get it to…
-
The generation process eats all of my 32 GB of RAM.
This only happens when using OpenVINO, on every model except rupeshs/sdxl-turbo-openvino-int8.
Specs:
CPU Ryzen 5 5600G
GPU NVIDIA RTX 4…