Closed: newgrit1004 closed this 8 months ago
It looks like your image size is significantly smaller for onnx-web. Please post a screenshot of the webUI.
Using the same size and other parameters should help, but images may never be totally reproducible between tools, due to differences outside of our control. Images should be fairly reproducible within onnx-web, except for a few schedulers that make no attempt to be deterministic (DDIM with eta > 0 and Euler Ancestral, iirc).
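To illustrate why images are reproducible within one tool for deterministic schedulers: the initial latent noise is generated from the seed, so the same seed always yields the same starting point. A minimal sketch of that principle (using NumPy as a stand-in; the actual pipelines use their own RNG):

```python
import numpy as np

def initial_latents(seed, shape=(1, 4, 64, 64)):
    # A seeded RNG always produces the same Gaussian noise for the same seed,
    # which is the basis of within-tool reproducibility. Non-deterministic
    # schedulers (e.g. DDIM with eta > 0) inject extra randomness later on.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape, dtype=np.float32)

same_a = initial_latents(42)
same_b = initial_latents(42)
other = initial_latents(43)

assert np.array_equal(same_a, same_b)      # same seed, identical latents
assert not np.array_equal(same_a, other)   # different seed, different latents
```

Between tools, even identical latents can diverge during denoising because of runtime-level non-determinism like the ORT issues linked below.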
The default settings for the ONNX runtime are not perfectly deterministic: https://github.com/microsoft/onnxruntime/issues/4611, https://github.com/microsoft/onnxruntime/issues/7642#issuecomment-842583213. I think there might be a flag to run ORT in a more deterministic mode, but I'm not able to find it in their docs at the moment. This is a sufficiently well-known issue that both the HuggingFace and Civitai docs mention it as well: https://huggingface.co/docs/diffusers/using-diffusers/reproducibility, https://github.com/civitai/civitai/wiki/Image-Reproduction.
All of that said, LoRAs for ONNX use a completely custom implementation, so if you think there are issues with that math, please let me know. It's entirely possible that I have not implemented some operator correctly.
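For reference, the usual LoRA convention merges a low-rank update into the base weight as `W' = W + scale * (alpha / rank) * up @ down`. This is only a sketch of that convention, not onnx-web's actual blending code, so the exact scaling may differ:

```python
import numpy as np

def apply_lora(weight, lora_down, lora_up, alpha, scale=1.0):
    # Standard LoRA merge: the low-rank product (up @ down) is scaled by
    # alpha / rank and a user-controlled strength, then added to the weight.
    rank = lora_down.shape[0]
    return weight + scale * (alpha / rank) * (lora_up @ lora_down)

rng = np.random.default_rng(0)
w = rng.standard_normal((320, 320)).astype(np.float32)
down = rng.standard_normal((4, 320)).astype(np.float32)  # rank-4 "down" matrix
up = rng.standard_normal((320, 4)).astype(np.float32)    # rank-4 "up" matrix

merged = apply_lora(w, down, up, alpha=4.0)
assert merged.shape == w.shape
assert not np.allclose(merged, w)  # the update actually changed the weight
```

If the custom ONNX implementation disagrees with this math for some layer type, that would be a real bug worth reporting.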
Were you able to produce a similar image when using the same parameters, or is this still an issue?
I think the issue comes from the different prompt weighting between diffusers and stable-diffusion-webui. I ran into a similar issue with https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT
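To make the prompt-weighting point concrete, here is a toy illustration (not either project's real code): the same `(token:1.2)` syntax can produce different conditioning depending on whether the tool renormalizes the scaled embeddings, which is one known source of divergence between webui-style and diffusers-style pipelines:

```python
import re
import numpy as np

def weights_from_prompt(prompt):
    # Parse "(token:w)" syntax; bare tokens get weight 1.0.
    out = []
    for tok in prompt.split():
        m = re.fullmatch(r"\((\w+):([\d.]+)\)", tok)
        out.append((m.group(1), float(m.group(2))) if m else (tok, 1.0))
    return out

def condition(embeddings, weights, renormalize):
    # Scale each token embedding by its weight. Some implementations also
    # rescale so the overall mean magnitude is preserved; others do not,
    # so identical prompts yield different conditioning tensors.
    scaled = embeddings * weights[:, None]
    if renormalize:
        scaled *= np.abs(embeddings).mean() / np.abs(scaled).mean()
    return scaled

toks = weights_from_prompt("a (cat:1.2) photo")
w = np.array([weight for _, weight in toks])
emb = np.ones((3, 8), dtype=np.float32)  # dummy token embeddings

a = condition(emb, w, renormalize=True)
b = condition(emb, w, renormalize=False)
assert not np.allclose(a, b)  # same prompt, different conditioning
```

With a different conditioning tensor going into the UNet, every denoising step diverges, so the final images cannot match even with identical seeds.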
Hi,
I got different results from stable-diffusion-webui and onnx-web when I gave them the same input.
This doesn't mean that applying a LoRA is broken in onnx-web; the LoRA does appear to be applied, but the generated image is different.
stable diffusion webui (screenshot)
onnx-web (screenshot)
I also did not use any upscaling or correction methods in onnx-web.
How can I generate the same image in onnx-web when using a LoRA?