Speedway1 opened 6 months ago
What exactly is your issue? — I run out of memory after several iterations, although I have no such issue using SDXL with Fooocus and other implementations, even at much higher resolutions and iteration counts.
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:03<00:00, 2.25it/s]
start_merge_step:4
['a girl, wearing white shirt, black skirt, black tie, yellow hair,at home ', 'a girl, wearing white shirt, black skirt, black tie, yellow hair,sitting alone on a park bench.', 'a girl, wearing white shirt, black skirt, black tie, yellow hair,reading a book on a park bench.', 'A squirrel approaches, peeking over the bench. ', 'a girl, wearing white shirt, black skirt, black tie, yellow hair,look around in the park. ', 'leaf falls from the tree, landing on the sketchbook.', 'a girl, wearing white shirt, black skirt, black tie, yellow hair,picks up the leaf, examining its details closely.', 'The brown squirrel appear.', 'a girl, wearing white shirt, black skirt, black tie, yellow hair,is very happy ', 'The brown squirrel takes the cracker and scampers up a tree. ']
25%|█████████████████████████████████████████████████████▌ | 5/20 [00:03<00:09, 1.59it/s]
Traceback (most recent call last):
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/gradio/queueing.py", line 501, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/gradio/route_utils.py", line 258, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/gradio/blocks.py", line 1710, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/gradio/blocks.py", line 1262, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/gradio/utils.py", line 517, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/gradio/utils.py", line 510, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/gradio/utils.py", line 493, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/gradio/utils.py", line 676, in gen_wrapper
response = next(iterator)
^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/gradio_app_sdxl_specific_id.py", line 563, in process_generation
id_images = pipe(id_prompts, num_inference_steps=_num_steps, guidance_scale=guidance_scale, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 1216, in __call__
noise_pred = self.unet(
^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/diffusers/models/unet_2d_condition.py", line 1177, in forward
sample = upsample_block(
^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/diffusers/models/unet_2d_blocks.py", line 2354, in forward
hidden_states = attn(
^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/diffusers/models/transformer_2d.py", line 392, in forward
hidden_states = block(
^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/diffusers/models/attention.py", line 329, in forward
attn_output = self.attn1(
^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/storydiffusion/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 527, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/gradio_app_sdxl_specific_id.py", line 133, in __call__
hidden_states = self.__call1__(attn, hidden_states,encoder_hidden_states,attention_mask,temb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/axt/zeugs/AI/StoryDiffusion/gradio_app_sdxl_specific_id.py", line 193, in __call1__
hidden_states = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: HIP out of memory. Tried to allocate 1.78 GiB. GPU 0 has a total capacity of 19.98 GiB of which 1.15 GiB is free. Of the allocated memory 15.98 GiB is allocated by PyTorch, and 2.13 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
^CKeyboard interruption in main thread... closing server.
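The allocator message at the end of the traceback suggests a first mitigation: letting the ROCm/HIP caching allocator use expandable segments to reduce fragmentation. A minimal sketch (assuming the app is launched from a shell, as in the traceback above):

```shell
# From the allocator hint in the error above: let the HIP caching allocator
# grow segments instead of fragmenting fixed-size ones.
export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True
# (on CUDA builds the equivalent variable is PYTORCH_CUDA_ALLOC_CONF)
echo "PYTORCH_HIP_ALLOC_CONF=$PYTORCH_HIP_ALLOC_CONF"
```

Note this only helps with fragmentation-related failures; it does not lower the absolute peak allocation.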
@Bratzmeister Sorry for the VRAM issue. We are currently working on optimizing the code's GPU memory usage; new updates will be released within the next couple of days.
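Until that fix lands, the standard diffusers memory-reduction switches may already help. Whether StoryDiffusion's customized pipeline exposes all of them is an assumption, so this sketch probes for each method before calling it:

```python
def apply_memory_savers(pipe):
    """Call standard diffusers memory-reduction methods if the pipeline has them.

    Returns the list of switches that were actually applied, so the caller
    can log what took effect.
    """
    candidates = (
        "enable_vae_slicing",        # decode the VAE batch one image at a time
        "enable_attention_slicing",  # compute attention in slices, trading speed for VRAM
        "enable_model_cpu_offload",  # keep idle submodules in system RAM
    )
    applied = []
    for name in candidates:
        method = getattr(pipe, name, None)
        if callable(method):
            method()
            applied.append(name)
    return applied
```

This would be called once on `pipe` before generation starts; `enable_model_cpu_offload` in particular trades speed for a substantially lower peak.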
@Z-YuPeng Very much appreciated. Currently I either get an error like the one above, or my computer/GPU hangs to the point that I have to force a restart.
@Z-YuPeng Hi, could you share the approximate VRAM usage?
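One reason the numbers come out high: the call at `gradio_app_sdxl_specific_id.py:563` passes all id prompts as a single batch (ten prompts in the log above, doubled by classifier-free guidance). A back-of-envelope estimate of just the self-attention score matrix, assuming fp16 and that the math SDPA backend materializes the full matrix (the head count and sequence length below are illustrative assumptions, not measured values):

```python
def sdpa_score_bytes(batch, heads, seq_len, dtype_bytes=2):
    # size of the (batch, heads, seq_len, seq_len) attention-score tensor
    return batch * heads * seq_len * seq_len * dtype_bytes

# 10 id prompts * 2 (classifier-free guidance) = effective batch 20;
# heads=10 and seq_len=4096 (a 64x64 feature map) are illustrative values.
gib = sdpa_score_bytes(batch=20, heads=10, seq_len=4096) / 2**30
print(f"{gib:.2f} GiB")  # 6.25 GiB for this single intermediate tensor
```

Since this term scales linearly with batch size, splitting `id_prompts` into smaller chunks before calling `pipe` is probably the simplest workaround until the promised memory optimizations land.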
Has anyone successfully run this on Radeon/AMD?