-
1. After startup, the fine-tuned Qwen model outputs an error:
/tmp/Qwen-SDXL-Turbo# python web_demo.py
HTTP file server started on port 8001
Your device does NOT seem to support bf16, you can switch to fp16 or fp32 by by passing fp16/fp32=True in "AutoMode…
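The fix the warning points at, as a minimal loading sketch; the stock Qwen chat checkpoint name is an assumption, since the excerpt does not show which fine-tuned model is being loaded:
```python
# Sketch: load Qwen in fp16 on hardware without bf16 support, as the
# warning suggests. The checkpoint name here is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    device_map="auto",
    trust_remote_code=True,
    fp16=True,  # per the warning: pass fp16=True (or fp32=True) when bf16 is unsupported
).eval()
```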
-
Hi!
Thank you for this great work.
I'm trying to run [SDTurbo](https://huggingface.co/stabilityai/sd-turbo) with diffusers.js.
I've followed the instructions from [this issue](https://github.…
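For reference, the single-step recipe from the sd-turbo model card in Python diffusers; whether diffusers.js exposes the same parameter names is an assumption:
```python
# Minimal SD-Turbo text-to-image call, following the stabilityai/sd-turbo
# model card: one inference step, guidance disabled.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = pipe(
    "a cinematic photo of a fox in the snow",
    num_inference_steps=1,  # SD-Turbo is distilled for single-step sampling
    guidance_scale=0.0,     # classifier-free guidance is disabled for Turbo
).images[0]
```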
-
### Issue Description
I'm likely doing something wrong, but I installed automatic and it works fine with the 1.5 model. I then downloaded the SDXL base and refiner, dropped them into models/stable-diffu…
-
This is supposed to allow users to load parameters directly by pasting them into the prompt.
But there are numerous errors where settings are not transferred, or are applied in the wrong way.
For example:
…
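As a hedged illustration of the parsing step this feature involves (the helper below is illustrative, not the project's actual parser):
```python
# Illustrative sketch: split a pasted "Steps: 20, Sampler: Euler a, ..."
# infotext line into a settings dict. Field names follow the common
# generation-parameters layout; this is not the project's real code.
def parse_infotext(line: str) -> dict:
    params = {}
    for chunk in line.split(","):
        if ":" in chunk:
            key, _, value = chunk.partition(":")
            params[key.strip()] = value.strip()
    return params

print(parse_infotext("Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 42"))
# {'Steps': '20', 'Sampler': 'Euler a', 'CFG scale': '7', 'Seed': '42'}
```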
-
There are a number of models on Hugging Face that would be good for testing whether compilation succeeds and the output stays accurate.
The most-downloaded ONNX models would be a good start: https://huggingface.co/models?library=onnx&so…
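A minimal sketch of pulling that candidate list, assuming `huggingface_hub` is available:
```python
# Enumerate the most-downloaded ONNX models on the Hub as test candidates.
from huggingface_hub import list_models

for model in list_models(library="onnx", sort="downloads", direction=-1, limit=20):
    print(model.id, model.downloads)
```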
-
### Issue Description
I am having an issue with the following model upon initial install:
target=/home/flaniganp/automatic/models/Stable-diffusion/sd_turbo.safetensors
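As a sanity check outside the UI, the same single-file checkpoint can be loaded directly in diffusers; this is a sketch, not the reporter's setup:
```python
# Load the single-file checkpoint from the report directly, to separate
# checkpoint problems from UI problems. Path is the one from the report.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "/home/flaniganp/automatic/models/Stable-diffusion/sd_turbo.safetensors",
    torch_dtype=torch.float16,
)
```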
### Version Platform Descrip…
-
### Describe the bug
Passing args like `clip_skip` or `cfg_scale` to a pipeline instantiated with the "lpw_stable_diffusion_xl" community pipeline causes a crash.
### Reproduction
```python
from diffus…
```
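The block above is truncated; a hedged reconstruction of the kind of repro it points at (model id and prompt are assumptions):
```python
# Instantiate the community "lpw_stable_diffusion_xl" pipeline and pass one
# of the arguments the report says crashes it.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="lpw_stable_diffusion_xl",
    torch_dtype=torch.float16,
).to("cuda")

# Per the report, passing clip_skip (or cfg_scale) here triggers the crash.
image = pipe("a photo of an astronaut", clip_skip=2).images[0]
```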
-
### Describe the bug
I have a script that sets up a couple of models for a pipeline.
```python
vae = AutoencoderKL.from_pretrained(
"madebyollin/sdxl-vae-fp16-fix", …
-
### Issue Description
This is probably related to something in today's dev updates; yesterday it was not an issue at all. I've tried several different samplers and two different models, one a Tu…
-
**Read Troubleshoot**
[x] I confirm that I have read the [Troubleshoot](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md) guide before making this issue.
**Describe the problem**
…