invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: Could not generate image #3028

Closed GihanAbenayake closed 1 year ago

GihanAbenayake commented 1 year ago

Is there an existing issue for this?

OS

Windows

GPU

cuda

VRAM

4GB

What version did you experience this issue on?

2.3.2

What happened?

Opened Invoke in a web browser, typed "Anime Girl with Green hair and Red eyes" into the Text to Image prompt, set width and height to 320 x 320 and steps to 50, then clicked Invoke.

Screenshots

```
Traceback (most recent call last):
  File "D:\InvokeAI.venv\lib\site-packages\ldm\generate.py", line 559, in prompt2image
    results = generator.generate(
  File "D:\InvokeAI.venv\lib\site-packages\ldm\invoke\generator\base.py", line 115, in generate
    image = make_image(x_T)
  File "D:\InvokeAI.venv\lib\site-packages\ldm\invoke\generator\txt2img.py", line 45, in make_image
    pipeline_output = pipeline.image_from_embeddings(
  File "D:\InvokeAI.venv\lib\site-packages\ldm\invoke\generator\diffusers_pipeline.py", line 419, in image_from_embeddings
    result_latents, result_attention_map_saver = self.latents_from_embeddings(
  File "D:\InvokeAI.venv\lib\site-packages\ldm\invoke\generator\diffusers_pipeline.py", line 445, in latents_from_embeddings
    result: PipelineIntermediateState = infer_latents_from_embeddings(
  File "D:\InvokeAI.venv\lib\site-packages\ldm\invoke\generator\diffusers_pipeline.py", line 178, in __call__
    for result in self.generator_method(*args, **kwargs):
  File "D:\InvokeAI.venv\lib\site-packages\ldm\invoke\generator\diffusers_pipeline.py", line 481, in generate_latents_from_embeddings
    step_output = self.step(batched_t, latents, conditioning_data,
  File "D:\InvokeAI.venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\InvokeAI.venv\lib\site-packages\ldm\invoke\generator\diffusers_pipeline.py", line 525, in step
    noise_pred = self.invokeai_diffuser.do_diffusion_step(
  File "D:\InvokeAI.venv\lib\site-packages\ldm\models\diffusion\shared_invokeai_diffusion.py", line 166, in do_diffusion_step
    unconditioned_next_x, conditioned_next_x = self._apply_standard_conditioning(
  File "D:\InvokeAI.venv\lib\site-packages\ldm\models\diffusion\shared_invokeai_diffusion.py", line 207, in _apply_standard_conditioning
    both_results = self.model_forward_callback(x_twice, sigma_twice, both_conditionings)
  File "D:\InvokeAI.venv\lib\site-packages\ldm\invoke\generator\diffusers_pipeline.py", line 559, in _unet_forward
    return self.unet(latents, t, text_embeddings,
  File "D:\InvokeAI.venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\InvokeAI.venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 582, in forward
    sample, res_samples = downsample_block(
  File "D:\InvokeAI.venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\InvokeAI.venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 837, in forward
    hidden_states = attn(
  File "D:\InvokeAI.venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\InvokeAI.venv\lib\site-packages\diffusers\models\transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "D:\InvokeAI.venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\InvokeAI.venv\lib\site-packages\diffusers\models\attention.py", line 291, in forward
    attn_output = self.attn1(
  File "D:\InvokeAI.venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\InvokeAI.venv\lib\site-packages\diffusers\models\cross_attention.py", line 205, in forward
    return self.processor(
  File "D:\InvokeAI.venv\lib\site-packages\diffusers\models\cross_attention.py", line 456, in __call__
    hidden_states = xformers.ops.memory_efficient_attention(
  File "D:\InvokeAI.venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 197, in memory_efficient_attention
    return _memory_efficient_attention(
  File "D:\InvokeAI.venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 293, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "D:\InvokeAI.venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 309, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp)
  File "D:\InvokeAI.venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 95, in _dispatch_fw
    return _run_priority_list(
  File "D:\InvokeAI.venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 70, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(10, 1600, 1, 64) (torch.float32)
     key         : shape=(10, 1600, 1, 64) (torch.float32)
     value       : shape=(10, 1600, 1, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
`flshattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    triton is not available
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 64
>> Could not generate image.
```
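To read the error: xformers walks a priority list of attention backends and rejects any whose device/dtype requirements the input tensors do not meet; here the tensors are float32 on the CPU, so every backend is ruled out. The sketch below is a simplified, hypothetical model of that dispatch logic (the function name `supported_ops` and its return shape are illustrative, not the real xformers API), using only the requirements quoted in the traceback:

```python
def supported_ops(device: str, dtype: str, head_dim: int) -> dict:
    """Return the backends whose requirements are met, mimicking the
    rejection reasons printed in the traceback (illustrative only)."""
    reasons = {}
    # cutlassF: CUDA tensors only
    reasons["cutlassF"] = [] if device == "cuda" else ["device=cpu (supported: {'cuda'})"]
    # flshattF: CUDA tensors only, half precision only
    fl = []
    if device != "cuda":
        fl.append("device=cpu (supported: {'cuda'})")
    if dtype not in ("float16", "bfloat16"):
        fl.append(f"dtype={dtype} (supported: float16/bfloat16)")
    reasons["flshattF"] = fl
    # smallkF: per-head embedding dimension must be at most 32
    reasons["smallkF"] = [] if head_dim <= 32 else [f"unsupported embed per head: {head_dim}"]
    # Keep only backends with no rejection reasons
    return {op: r for op, r in reasons.items() if not r}

# The failing call from the traceback: CPU, float32, head_dim 64.
print(supported_ops("cpu", "float32", 64))  # -> {} : no backend qualifies
```

Move the same tensors to a CUDA device and at least `cutlassF` would qualify, which is why the discussion below turns to whether the install is actually using the GPU.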

Additional context

No response

Contact Details

No response

GihanAbenayake commented 1 year ago

Waifu Diffusion 1.4 FYI

hipsterusername commented 1 year ago

Your installation doesn’t appear to be using your GPU. There’s a reference in the traceback to the device being “cpu”.

You might consider reinstalling.
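Before reinstalling, it may be worth confirming what PyTorch itself reports. A minimal diagnostic sketch (assuming only that the standard `torch.cuda` API is present; it degrades gracefully if torch isn’t installed at all):

```python
import importlib.util

def cuda_status() -> str:
    """Report whether torch is importable and whether it can see a CUDA GPU."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if not torch.cuda.is_available():
        # A CPU-only torch build (common on Windows when installed without
        # the CUDA index URL) lands here and produces the error above.
        return "torch installed, but CUDA is NOT available (running on CPU)"
    return f"CUDA available: {torch.cuda.get_device_name(0)}"

print(cuda_status())
```

If this reports CUDA as unavailable, the likely cause is a CPU-only PyTorch wheel, and reinstalling with the CUDA-enabled build should resolve it.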

gy2256 commented 1 year ago

Same issue here. It's not using GPU.