Clybius / ComfyUI-Extra-Samplers

A repository of extra samplers, usable within ComfyUI for most nodes.
BSD 3-Clause "New" or "Revised" License

Device error: "Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)" #6

Open ultranationalism opened 6 months ago

ultranationalism commented 6 months ago

```
Error occurred when executing SamplerCustomModelMixtureDuo:

Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

File "F:\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\ComfyUI\custom_nodes\ComfyUI-Extra-Samplers\nodes.py", line 474, in sample
    samples = sample_mixture(model, model2, noise, cfg, cfg2, sampler, sampler2, sigmas, sigmas2, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, callback2=callback2, disable_pbar=disable_pbar, seed=noise_seed)
File "F:\ComfyUI\custom_nodes\ComfyUI-Extra-Samplers\nodes.py", line 285, in sample_mixture
    samples = mixture_sample(real_model, real_model2, noise, positive_copy, positive_copy2, negative_copy, negative_copy2, cfg, cfg2, model.load_device, model2.load_device, sampler, sampler2, sigmas, sigmas2, model_options=model.model_options, model_options2=model2.model_options, latent_image=latent_image, denoise_mask=noise_mask, denoise_mask2=noise_mask2, callback=callback, callback2=callback2, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI\custom_nodes\ComfyUI-Extra-Samplers\nodes.py", line 271, in mixture_sample
    samples = sampler.sample(model_wrap, temp_sigmas, extra_args, callback, noise.to(device) if i is 0 else torch.zeros(latent_image.size(), dtype=latent_image.dtype, layout=latent_image.layout, device=device), samples if samples is not None else latent_image, denoise_mask, True)
File "F:\ComfyUI\comfy\samplers.py", line 550, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "F:\stable-diffusion-webui\py310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "F:\ComfyUI\comfy\k_diffusion\sampling.py", line 707, in sample_dpmpp_sde_gpu
    return sample_dpmpp_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, r=r)
File "F:\stable-diffusion-webui\py310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "F:\ComfyUI\comfy\k_diffusion\sampling.py", line 539, in sample_dpmpp_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
File "F:\stable-diffusion-webui\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\ComfyUI\comfy\samplers.py", line 282, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "F:\stable-diffusion-webui\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\ComfyUI\comfy\samplers.py", line 272, in forward
    return self.apply_model(*args, **kwargs)
File "F:\ComfyUI\comfy\samplers.py", line 269, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "F:\ComfyUI\comfy\samplers.py", line 249, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
File "F:\ComfyUI\comfy\samplers.py", line 223, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "F:\ComfyUI\comfy\model_base.py", line 95, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "F:\stable-diffusion-webui\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 838, in forward
    emb = self.time_embed(t_emb)
File "F:\stable-diffusion-webui\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\stable-diffusion-webui\py310\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
File "F:\stable-diffusion-webui\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\ComfyUI\comfy\ops.py", line 45, in forward
    return super().forward(*args, **kwargs)
File "F:\stable-diffusion-webui\py310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
```

Clybius commented 6 months ago

Does the error still occur when you add `--disable-smart-memory` to your ComfyUI launch parameters, or remove it if it's already there?

I've had a similar error that was solved by removing that flag; it seems model2 gets moved off the device automatically when there isn't enough VRAM.
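The traceback bottoms out in `F.linear` inside the time-embedding MLP, which is the classic symptom of the model's weights sitting on the CPU while the latent tensor is on `cuda:0`. A minimal plain-PyTorch sketch of that failure mode and the usual fix (the `net` module here is a made-up stand-in, not ComfyUI's code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the time-embedding MLP seen in the traceback.
net = nn.Sequential(nn.Linear(8, 16), nn.SiLU(), nn.Linear(16, 8))

device = "cuda:0" if torch.cuda.is_available() else "cpu"
x = torch.randn(4, 8, device=device)

if device != "cpu":
    # net's weights are still on the CPU here, so this reproduces:
    # "Expected all tensors to be on the same device ... cpu and cuda:0"
    try:
        net(x)
    except RuntimeError as e:
        print(type(e).__name__)

# The fix: module weights and inputs must share one device.
net = net.to(device)
out = net(x)
```

With smart memory enabled, ComfyUI may offload a model to system RAM between runs, which leaves its weights on `cpu` when the next forward pass arrives with CUDA inputs.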

ultranationalism commented 6 months ago

> Does the error still occur when you add `--disable-smart-memory` to your ComfyUI launch parameters, or remove it if it's already there?
>
> I've had a similar error that was solved by removing that flag; it seems model2 gets moved off the device automatically when there isn't enough VRAM.

I guess it should automatically load the second model and unload the first one when in low-memory mode.
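The load/unload ping-pong being suggested can be sketched in plain PyTorch: evict the first model to system RAM before the second one is placed on the GPU, so the two never compete for VRAM. This is only an illustration of the idea, not ComfyUI's actual `model_management` machinery:

```python
import torch
import torch.nn as nn

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Hypothetical stand-ins for the two models of SamplerCustomModelMixtureDuo.
model1 = nn.Linear(8, 8)
model2 = nn.Linear(8, 8)
x = torch.randn(2, 8)

# Phase 1: only model1 occupies the device.
model1.to(device)
x = model1(x.to(device))

# Phase 2: evict model1 to system RAM *before* loading model2,
# so both sets of weights are never resident in VRAM at once.
model1.to("cpu")
model2.to(device)
x = model2(x)
```

In ComfyUI itself this swap would go through the memory manager rather than raw `.to()` calls, but the invariant is the same: whichever model is about to run must be on the same device as the latents handed to it.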