AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: Inpaint stopped working correctly #15849

Open marvelsanya opened 4 months ago

marvelsanya commented 4 months ago

Checklist

What happened?

I've been using the Stable Diffusion web UI for a long time. Windows 10, Nvidia GeForce GTX 1060 (6GB). Recently I used ControlNet and clicked on the Inpaint option (I had some models, but no model specifically for Inpaint). At that moment the power went out, and I didn't attach any importance to the sudden shutdown of the PC.

Afterwards I noticed that standard Inpaint no longer works correctly: it ignores my prompts, and even a simple replacement of an object or a color is now impossible. There are no errors; Inpaint just started producing very bad results, which only get worse as Denoising strength increases. For example, when trying to paint in a person, I end up with a door or a tree. I completely reinstalled SD (including Python and Git) and did a clean install twice. Nothing helped; Inpaint is still broken, regardless of extensions or the settings in the webui-user file... Help, please! P.S. Sorry for my bad English.

Steps to reproduce the problem

  1. Upload an image to img2img → Inpaint.
  2. Choose any settings and select “original” as the masked content.
  3. The result is bad and ignores the prompt (the same call can also be reproduced outside the UI; see the API sketch below).
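For reference, here is a minimal sketch of the same inpaint call made through the web UI's built-in API. It assumes the UI was started with the `--api` flag; the file names, prompt, and parameter values are placeholders, not the exact settings from my runs:

```python
import base64
import requests

URL = "http://127.0.0.1:7860"  # default local address of the web UI

def b64(path: str) -> str:
    """Read an image file and return it base64-encoded, as the API expects."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("input.png")],               # placeholder source image
    "mask": b64("mask.png"),                          # placeholder white-on-black mask
    "prompt": "a person standing next to the wall",   # example prompt
    "denoising_strength": 0.45,
    "inpainting_fill": 1,                             # 1 = "original" masked content
    "inpaint_full_res": True,                         # inpaint only the masked area
    "steps": 16,
    "sampler_name": "Euler a",
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload, timeout=300)
r.raise_for_status()

# The API returns the result as a base64-encoded image.
with open("output.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```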

What should have happened?

The inpainted result should take the prompt into account.

What browsers do you use to access the UI?

Google Chrome, Other

Sysinfo

sysinfo-2024-05-20-22-58.json

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --autolaunch --medvram --xformers --theme=dark --disable-safe-unpickle
CHv1.8.7: Get Custom Model Folder
ControlNet preprocessor location: D:\Programs\STABLE DIFFUSION\webui\extensions\sd-webui-controlnet\annotator\downloads
2024-05-20 18:32:02,480 - ControlNet - INFO - ControlNet v1.1.449
Loading weights [07919b495d] from D:\Programs\STABLE DIFFUSION\webui\models\Stable-diffusion\picxReal_10.safetensors
CHv1.8.7: Set Proxy:
2024-05-20 18:32:02,849 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: D:\Programs\STABLE DIFFUSION\webui\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
D:\Programs\STABLE DIFFUSION\system\python\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Startup time: 11.1s (prepare environment: 2.3s, import torch: 3.9s, import gradio: 0.8s, setup paths: 0.9s, initialize shared: 0.2s, other imports: 0.6s, load scripts: 1.4s, create ui: 0.7s, gradio launch: 0.4s).
Applying attention optimization: xformers... done.
Model loaded in 3.2s (load weights from disk: 0.8s, create model: 0.4s, apply weights to model: 1.7s, calculate empty prompt: 0.2s).
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:11<00:00,  1.43it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:10<00:00,  1.47it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 32/32 [00:23<00:00,  1.36it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:10<00:00,  1.46it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:10<00:00,  1.48it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 32/32 [00:23<00:00,  1.36it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 32/32 [00:23<00:00,  1.52it/s]

Additional information

No response

xblitzarts commented 4 months ago


Hey! I have the same problem. I work on RunPod with an RTX 3090 (24 GB). A few days ago I could inpaint an image of around 1200 x 1200 px with a mask of about 960 px square, and the detail I could generate was impressive at a denoising strength of 0.4–0.45. Now, in that range, I get blurry, low-quality results, and if I increase the denoising strength (to get more detail), the output has no detail at all compared to the base image. At first I thought the VAE wasn't active, but even with it enabled the result stayed the same. Help!

P.S. I remember this happening a few times in the past with older versions. After a system error I had to restart the pod, and the results I got afterwards were of much lower quality. Back then, restarting everything again would fix it. Now I've reinstalled everything a couple of times, the same way that always worked for me, and nothing changed; it still stays the same.

In the example it's especially noticeable in the overall color and texture: the edited version lacks texture and appears somewhat faded.

pyphan1 commented 2 weeks ago

This is happening because you are using a non-inpainting model to do inpainting. Inpainting should use a dedicated inpainting model, not the normal text-to-image / image-to-image checkpoint.
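To make that concrete, here is a rough sketch (again assuming the web UI is running locally with `--api`; the checkpoint names are whatever happens to be installed under models/Stable-diffusion) that lists the installed checkpoints, picks one whose name contains "inpaint", and makes it the active model before running Inpaint:

```python
import requests

URL = "http://127.0.0.1:7860"  # assumes the web UI is running locally with --api enabled

# List the installed checkpoints and keep the ones that look like inpainting models
# (dedicated inpainting checkpoints usually have "inpainting" in their file name,
# e.g. sd-v1-5-inpainting).
models = requests.get(f"{URL}/sdapi/v1/sd-models").json()
inpaint_models = [m["title"] for m in models if "inpaint" in m["title"].lower()]
print("Inpainting checkpoints found:", inpaint_models)

# Switch the active checkpoint to the first inpainting model found.
if inpaint_models:
    requests.post(f"{URL}/sdapi/v1/options",
                  json={"sd_model_checkpoint": inpaint_models[0]})
```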