AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: how do i keep the object stay in frame??? #9379

Closed hungrydog666 closed 1 year ago

hungrydog666 commented 1 year ago

Is there an existing issue for this?

What happened?

While the image was being generated, the live preview showed the top of her head in frame, like this:

Screenshot 2023-04-05 102139

After it finished generating, it comes out like this, with her head out of frame: Screenshot 2023-04-05 102124

I tried changing the upscaler to None, since I thought it had something to do with the girl's head being out of frame, but then it said there wasn't enough memory. I don't really understand coding, so here:

```
Arguments: ('task(ufue5hnh93uwolq)', '(best quality, masterpiece1.2), (detailed eye:1.2), intricate detail, depth of field, 20 years old girl, long hair, crop top, pencil skirt, standing, (dark skin:1.4), (makeup:1.2), smile, (muscular:0.4), (piercing:1.3), kneehighs, thigh boots, leather jacket, club, in crowd, looking at viewer, parted bangs, choker, head tilt, pov, bag,\n', '(worst quality, low quality:1.2), text, watermark, badhandv4, child, loli,\n', [], 28, 17, False, False, 1, 1, 13, -1.0, -1.0, 0, 0, 0, False, 917, 512, True, 0.7, 1, 'None', 0, 824, 1229, [], 0, False, '', 0, <scripts.external_code.ControlNetUnit object at 0x0000018EE62095A0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, 50) {}
Traceback (most recent call last):
  File "D:\SD5\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\SD5\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\SD5\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "D:\SD5\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "D:\SD5\stable-diffusion-webui\modules\processing.py", line 653, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\SD5\stable-diffusion-webui\modules\processing.py", line 922, in sample
    samples = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(decoded_samples))
  File "D:\SD5\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "D:\SD5\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "D:\SD5\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\SD5\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "D:\SD5\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "D:\SD5\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\SD5\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 526, in forward
    h = self.down[i_level].block[i_block](hs[-1], temb)
  File "D:\SD5\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\SD5\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 138, in forward
    h = self.norm2(h)
  File "D:\SD5\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\SD5\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "D:\SD5\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2528, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 594.00 MiB (GPU 0; 6.00 GiB total capacity; 4.17 GiB already allocated; 0 bytes free; 5.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

Do I actually need more memory, or is there something I can do about the upscaler so the subject in the image stays in frame?

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ... ...

What should have happened?

...

Commit where the problem happens

...

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

...

List of extensions

...

Console logs

...

Additional information

...

lazydevl0per commented 1 year ago

I think that's how it's supposed to be. The model generates the image in steps, starting with a blur and adding details with each step.
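The stepwise refinement described above can be sketched in miniature. `denoise_step` below is a hypothetical stand-in for one sampler step, not actual model code; it just nudges every latent value toward a clean target the way each real step removes part of the noise:

```python
import random

def denoise_step(latent, step, total_steps):
    # Hypothetical stand-in for one sampler step: move each value a
    # fraction of the way toward the "clean" target; later steps make
    # proportionally larger corrections, and the final step lands exactly.
    target = 0.5
    strength = 1.0 / (total_steps - step)
    return [x + (target - x) * strength for x in latent]

random.seed(0)
latent = [random.gauss(0, 1) for _ in range(4)]  # start from pure noise
for step in range(20):
    latent = denoise_step(latent, step, total_steps=20)

# Every value has converged on the target after the last step.
print(all(abs(x - 0.5) < 1e-9 for x in latent))  # True
```

This is also why the live preview looks blurry early on: the intermediate latents are still mostly noise.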

hungrydog666 commented 1 year ago

> I think thats how its supposed to be. The model generates the image in steps. Starting with a blur and adding details with each step.

Really? That sucks. <<< my confused a**

lazydevl0per commented 1 year ago

This can be an interesting starting point to learn more about Stable Diffusion :) How does Stable Diffusion work?

hungrydog666 commented 1 year ago

> I think thats how its supposed to be. The model generates the image in steps. Starting with a blur and adding details with each step.

Wait, am I missing something? It's based on the images the model was trained on, right?

But it works when I use ControlNet with OpenPose: the subject is in the frame, right where I want them to be (the girl's head is in frame instead of out of frame). When I don't use OpenPose in ControlNet, the girl's head is out of frame in every generated image.

hungrydog666 commented 1 year ago

> This can be an interesting starting point to learn more about stable diffusion :) How does Stable Diffusion work?

I already understand Stable Diffusion well enough, but since the upscaler was added, almost every generated image comes out with the subject out of frame. (The subject is mostly a person, with the top of their head out of frame.)

I never had this issue before.
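One concrete thing worth checking: assuming the standard txt2img argument order, the run above has a first pass of 512x917 with a hires "resize to" target of 824x1229. Those aspect ratios differ, and if I read the hires-fix behavior correctly, webui crops the upscaled latent to fit the target shape when they don't match, which would cut content off the top and bottom exactly as described. The field positions in the argument tuple are an assumption; verify the values in the UI:

```python
def aspect_ratio(width, height):
    return width / height

first_pass = aspect_ratio(512, 917)     # assumed first-pass size
hires_target = aspect_ratio(824, 1229)  # assumed "resize to" target

print(round(first_pass, 3))    # 0.558
print(round(hires_target, 3))  # 0.67
# A wider target than the first pass means the upscaled image must be
# cropped vertically to fit, losing the top and/or bottom of the frame.
print(abs(first_pass - hires_target) > 0.05)  # True: ratios don't match
```

If the argument reading is right, setting the hires target to the same aspect ratio as the first pass (or using "Upscale by" instead of "Resize to") should keep the preview's framing.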

eneskuluk commented 1 year ago

> I think thats how its supposed to be. The model generates the image in steps. Starting with a blur and adding details with each step.

I think the author means the top of the head and the bottom part being cropped: they are visible in the preview but cropped in the final image. At first I thought it was about the blur; later I noticed what was meant. I think that is still related to how Stable Diffusion works, but I might be wrong, as I'm not an expert on the subject.