invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. It offers an industry-leading web UI and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

Image to image seems broken #1238

Closed · genevera closed this 1 year ago

genevera commented 2 years ago

Describe your environment

diff --git a/docs/installation/INSTALL_MAC.md b/docs/installation/INSTALL_MAC.md
index e4acb2c..3785c9b 100644
--- a/docs/installation/INSTALL_MAC.md
+++ b/docs/installation/INSTALL_MAC.md
@@ -89,7 +89,7 @@ While that is downloading, open Terminal and run the following commands one at a

 !!! todo "Clone the Invoke AI repo"

-```bash

Describe the bug
The web app crashes when trying to do img -> img.

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'image to image'
  2. Click on 'Invoke'
  3. Watch your web server log output
  4. See error:
    IZ5RaZrXc_kznQ3EAAAI: Received packet MESSAGE data 2["generateImage",{"prompt":"gustave dore illustration of janet jackson","iterations":1,"steps":28,"cfg_scale":30,"threshold":0,"perlin":0,"height":448,"width":640,"sampler_name":"k_heun","seed":3286025605,"seamless":false,"hires_fix":false,"progress_images":false,"init_img":"outputs/000029.949416368.postprocessed.png","strength":0.34,"fit":true,"variation_amount":0},false,false]
    received event "generateImage" from nlZO4o-cWuxNu9U2AAAJ [/]
    >> Image generation requested: {'prompt': 'gustave dore illustration of janet jackson', 'iterations': 1, 'steps': 28, 'cfg_scale': 30, 'threshold': 0, 'perlin': 0, 'height': 448, 'width': 640, 'sampler_name': 'k_heun', 'seed': 3286025605, 'seamless': False, 'hires_fix': False, 'progress_images': False, 'init_img': 'outputs/000029.949416368.postprocessed.png', 'strength': 0.34, 'fit': True, 'variation_amount': 0}
    ESRGAN parameters: False
    GFPGAN parameters: False
    emitting event "progressUpdate" to all [/]
    TMGH61eJKVvwwHJsAAAC: Sending packet MESSAGE data 2["progressUpdate",{"currentStep":1,"totalSteps":9,"currentIteration":1,"totalIterations":1,"currentStatus":"Preparing","isProcessing":true,"currentStatusHasSteps":false,"hasError":false}]
    IZ5RaZrXc_kznQ3EAAAI: Sending packet MESSAGE data 2["progressUpdate",{"currentStep":1,"totalSteps":9,"currentIteration":1,"totalIterations":1,"currentStatus":"Preparing","isProcessing":true,"currentStatusHasSteps":false,"hasError":false}]
    >> loaded input image of size 2560x1792 from /home/genevera/InvokeAI/outputs/img-samples/000029.949416368.postprocessed.png
    >> image will be resized to fit inside a box 640x448 in size.
    >> after adjusting image dimensions to be multiples of 64, init image is 640x448
    Generating:   0%|          | 0/1 [00:00<?, ?it/s]
    >> Sampling with k_heun starting at step 19 of 28 (9 new sampling steps)
    0%|          | 0/9 [00:00<?, ?it/s]
    Generating:   0%|          | 0/1 [00:00<?, ?it/s]
    Traceback (most recent call last):
    File "/home/genevera/InvokeAI/ldm/generate.py", line 426, in prompt2image
    results = generator.generate(
    File "/home/genevera/InvokeAI/ldm/invoke/generator/base.py", line 79, in generate
    image = make_image(x_T)
    File "/home/genevera/InvokeAI/ldm/invoke/generator/img2img.py", line 45, in make_image
    samples = sampler.decode(
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
    File "/home/genevera/InvokeAI/ldm/models/diffusion/ksampler.py", line 122, in decode
    samples,_ = self.sample(
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
    File "/home/genevera/InvokeAI/ldm/models/diffusion/ksampler.py", line 205, in sample
    K.sampling.__dict__[f'sample_{self.schedule}'](
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
    File "/home/genevera/InvokeAI/src/k-diffusion/k_diffusion/sampling.py", line 103, in sample_heun
    denoised = model(x, sigma_hat * s_in, **extra_args)
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/home/genevera/InvokeAI/ldm/models/diffusion/ksampler.py", line 41, in forward
    uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/home/genevera/InvokeAI/src/k-diffusion/k_diffusion/external.py", line 114, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
    File "/home/genevera/InvokeAI/src/k-diffusion/k_diffusion/external.py", line 140, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
    File "/home/genevera/InvokeAI/ldm/models/diffusion/ddpm.py", line 1440, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/home/genevera/InvokeAI/ldm/models/diffusion/ddpm.py", line 2148, in forward
    out = self.diffusion_model(x, t, context=cc)
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/home/genevera/InvokeAI/ldm/modules/diffusionmodules/openaimodel.py", line 798, in forward
    emb = self.time_embed(t_emb)
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/opt/conda/envs/invokeai/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
    >> Could not generate image.
    >> Usage stats:
    >>   0 image(s) generated in 0.48s
    >>   Max VRAM used for this generation: 1.07G. Current VRAM utilization: 0.48G
    >>   Max VRAM used since script start:  7.25G
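
As an aside, the "starting at step 19 of 28 (9 new sampling steps)" line in the log follows directly from strength=0.34. A quick sketch of the arithmetic, assuming the usual int(strength * steps) truncation used by SD-style img2img samplers:

```python
# Sketch of the img2img strength-to-steps arithmetic implied by the log,
# assuming the common int(strength * steps) truncation.
steps = 28
strength = 0.34

t_enc = int(strength * steps)  # int(9.52) == 9 new sampling steps
start = steps - t_enc          # sampling resumes at step 19 of 28
print(start, t_enc)            # 19 9
```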

Expected behavior
An image is generated.

Additional context
This happens on both Linux w/ CUDA and macOS w/ MPS. I think there is just a missing .to() call somewhere.
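
For context on that suggestion: the traceback bottoms out in F.linear inside the UNet's time_embed, which is the classic symptom of a CPU tensor reaching CUDA-resident weights. A minimal sketch of the failure mode and the kind of .to() fix being proposed (hypothetical shapes, not InvokeAI code):

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# SD-1.x-style time embedding MLP (320 -> 1280); weights live on the GPU.
time_embed = nn.Sequential(
    nn.Linear(320, 1280),
    nn.SiLU(),
    nn.Linear(1280, 1280),
).to(device)

t_emb = torch.randn(1, 320)         # timestep embedding created on the CPU
# time_embed(t_emb)                 # RuntimeError on a CUDA box: cpu vs cuda:0
out = time_embed(t_emb.to(device))  # the missing .to(): move the input to the weights' device
print(out.device)
```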

lstein commented 2 years ago

I'm surprised you're seeing this on main, which has been heavily tested. Are you sure you're on the branch you think you are? Might you at some point have been moving from branch to branch and installed changes from one that are persisting in the other?

genevera commented 2 years ago

Hi @lstein -

I'm surprised to see it in main, too! Have you not been able to reproduce it?

Wrt. branches and such, ¯\_(ツ)_/¯ :

(invokeai) genevera@instance-2:~/InvokeAI$ git branch
* main
(invokeai) genevera@instance-2:~/InvokeAI$ git diff origin/main
(invokeai) genevera@instance-2:~/InvokeAI$ echo $?
0
(invokeai) genevera@instance-2:~/InvokeAI$ git remote -v
origin  https://github.com/invoke-ai/InvokeAI.git (fetch)
origin  https://github.com/invoke-ai/InvokeAI.git (push)
(invokeai) genevera@instance-2:~/InvokeAI$ git fetch
remote: Enumerating objects: 126, done.
remote: Counting objects: 100% (104/104), done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 126 (delta 83), reused 96 (delta 78), pack-reused 22
Receiving objects: 100% (126/126), 428.92 KiB | 9.53 MiB/s, done.
Resolving deltas: 100% (85/85), completed with 41 local objects.
From https://github.com/invoke-ai/InvokeAI
   7a923be..99d23c4  development -> origin/development
(invokeai) genevera@instance-2:~/InvokeAI$ git log
commit e4ed0943e2d87b59d8f4f482e8d1fdb962ba82ea (HEAD -> main, origin/main, origin/HEAD)
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Oct 20 20:41:42 2022 +0800

    Fixes indentation causing rendering issue with github.io page

commit 4b95c422bde493bf7eb3068c6d3473b0e85a1179
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Oct 20 02:43:43 2022 -0400

    Fix typo in subheading!

commit d7ddbf6f7541b92f7034f27ced80ae8209ba4621
Author: Eric Wolf <19wolf@gmail.com>
Date:   Wed Oct 19 12:34:54 2022 -0400

    Fix discord link

    The discord badge has the correct link but the quick links did not

commit 367cbd47e64c31172c401d3e2c2600d7af8d6135
Author: Jan Skurovec <jan@skurovec.cz>
Date:   Wed Oct 19 08:54:47 2022 +0200

    fix for 'model is not defined' when loading embedding

commit 90d37eac034592cc3aed5a15a98971801b21988e (tag: v2.0.2)
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 18 16:00:59 2022 -0400

    update requirements to address #1149

commit 230de023ffa0cd7e78f9e45e406b74c45c6a7dfa
Merge: febf86d e6fc8af
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 18 08:27:33 2022 -0400

    resolve doc conflicts during merge

commit e6fc8af2496e7081e2f83107a47c3097f9819436
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 18 08:08:58 2022 -0400

    Fix typo

    Taken from `main` PR #1147
    Author: eltociear

commit febf86dedf33b24d99d193775d778c54aa8ec3d4
Merge: 76ae17a 2db4969
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 18 13:26:03 2022 +0200

    Merge branch 'fix-gh-actions' of github.com:mauwii/stable-diffusion into fix-gh-actions

commit 76ae17abac2751220e902fd75e83f528645988da
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 17 04:43:06 2022 +0200

    update cache steps
    remove restore-keys, make keys uniuqe

commit 339ff4b4644ef7a35b21b3641a95d6309cdd4689
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 17 04:02:38 2022 +0200

    fix conda pkg cache name
    also change content of hashFile-function

commit 00c0e487dd6be4eb61a55de33e8dc639e2a9b17e
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 17 03:27:15 2022 +0200

    move export behind the tests, upload with artifact
    also switch to python between 3.9-3.10 and use conda-forge again

psychedelicious commented 1 year ago

Hi @genevera , sorry for the late followup here. Have you recently updated (e.g. to v2.1) and does this issue still occur for you? Also, we will imminently release v2.2 - but I have a feeling this may be a deeper issue if it still exists...

tjennings commented 1 year ago

I'm on HEAD now and seeing this as well.

Traceback (most recent call last):
  /seek_art/invokeai/./scripts/compute_node.py:227 in <module>
    main()
  /seek_art/invokeai/./scripts/compute_node.py:224 in main
    main_loop(gr, models, host, client_uuid, api_token, pool)
  /seek_art/invokeai/./scripts/compute_node.py:192 in main_loop
    gr.apply_postprocessor(temp.name, tool='embiggen', …, opt=opt)
  /seek_art/invokeai/ldm/generate.py:660 in apply_postprocessor
    generator.generate(
  /seek_art/invokeai/ldm/invoke/generator/embiggen.py:38 in generate
    image = make_image()
  /seek_art/invokeai/ldm/invoke/generator/embiggen.py:352 in make_image
    tile_results = gen_img2img.generate(
  /seek_art/invokeai/ldm/invoke/generator/base.py:93 in generate
    image = make_image(x_T)
  /seek_art/invokeai/ldm/invoke/generator/img2img.py:52 in make_image
    samples = sampler.decode(
  /seek_art/conda/envs/invokeai/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27 in decorate_context
    return func(*args, **kwargs)
  /seek_art/invokeai/ldm/models/diffusion/sampler.py:365 in decode
    outs = self.p_sample(
  /seek_art/conda/envs/invokeai/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27 in decorate_context
    return func(*args, **kwargs)
  /seek_art/invokeai/ldm/models/diffusion/ddim.py:58 in p_sample
    e_t = self.invokeai_diffuser.do_diffusion_step(
  /seek_art/invokeai/ldm/models/diffusion/shared_invokeai_diffusion.py:88 in do_diffusion_step
    unconditioned_next_x, conditioned_next_x = self.apply_stan…
  /seek_art/invokeai/ldm/models/diffusion/shared_invokeai_diffusion.py:104 in apply_standard_conditioning
    unconditioned_next_x, conditioned_next_x = self.model_forward_…
  /seek_art/invokeai/ldm/models/diffusion/ddim.py:13 in <lambda>
    InvokeAIDiffuserComponent(self.model, model_forwa…
  /seek_art/invokeai/ldm/models/diffusion/ddpm.py:1441 in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  /seek_art/conda/envs/invokeai/lib/python3.9/site-packages/torch/nn/modules/module.py:1130 in _call_impl
    return forward_call(*input, **kwargs)
  /seek_art/invokeai/ldm/models/diffusion/ddpm.py:2167 in forward
    out = self.diffusion_model(x, t, context=cc)
  /seek_art/conda/envs/invokeai/lib/python3.9/site-packages/torch/nn/modules/module.py:1130 in _call_impl
    return forward_call(*input, **kwargs)
  /seek_art/invokeai/ldm/modules/diffusionmodules/openaimodel.py:798 in forward
    emb = self.time_embed(t_emb)
  /seek_art/conda/envs/invokeai/lib/python3.9/site-packages/torch/nn/modules/module.py:1130 in _call_impl
    return forward_call(*input, **kwargs)
  /seek_art/conda/envs/invokeai/lib/python3.9/site-packages/torch/nn/modules/container.py:139 in forward
    input = module(input)
  /seek_art/conda/envs/invokeai/lib/python3.9/site-packages/torch/nn/modules/module.py:1130 in _call_impl
    return forward_call(*input, **kwargs)
  /seek_art/conda/envs/invokeai/lib/python3.9/site-packages/torch/nn/modules/linear.py:114 in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
ERROR conda.cli.main_run:execute(47): `conda run python -u ./scripts/compute_node.py` failed. (See above for error)
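
Both tracebacks die in F.linear inside time_embed, so one way to pinpoint which input tensor is still on the CPU is to register forward pre-hooks that compare each submodule's tensor inputs against the device of the model's weights. This is a generic PyTorch debugging sketch, not InvokeAI API:

```python
import torch
import torch.nn as nn

def report_device_mismatches(model: nn.Module) -> None:
    """Print every submodule whose tensor inputs sit on a different
    device than the model's parameters."""
    target = next(model.parameters()).device

    def make_hook(name: str):
        def hook(module, inputs):
            for i, t in enumerate(inputs):
                if isinstance(t, torch.Tensor) and t.device != target:
                    print(f"{name}: input[{i}] on {t.device}, expected {target}")
        return hook

    for name, module in model.named_modules():
        module.register_forward_pre_hook(make_hook(name))

# Hypothetical usage before re-running the failing call, e.g. on the UNet:
# report_device_mismatches(model.model.diffusion_model)
```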

tjennings commented 1 year ago

Sorry, not sure how to fix the formatting. I know that's horrendous.

TheBarret commented 1 year ago

I get this exception on 2.2.0 too. The funny thing is that it happens after a second prompt in the Unified editor, and this was a clean install.

I posted my bug at: https://github.com/invoke-ai/InvokeAI/issues/1843

psychedelicious commented 1 year ago

@tjennings

What model are you using? Does standard 1.5 work?