ltdrdata / ComfyUI-Workflow-Component

This is a side project to experiment with using workflows as components.
GNU General Public License v3.0

Image Refiner doesn't work after ComfyUI's update. #19

Closed MaciejStann closed 1 year ago

MaciejStann commented 1 year ago

Hi, I've been using the manual inpainting workflow, as it's a quick, handy, and awesome feature, but after updating ComfyUI ("Update all" via Manager?) it doesn't work anymore. Also, the options we had before, e.g. mask-detailer.ir, are no longer visible to choose from. I'm using Pinokio to run ComfyUI, and here is the error output:

..........Connected!
model_type EPS
adm 2816
making attention of type 'vanilla-pytorch' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-pytorch' with 512 in_channels
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loading new
Error handling request
Traceback (most recent call last):
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\env\lib\site-packages\aiohttp\web_protocol.py", line 433, in _handle_request
    resp = await request_handler(request)
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\env\lib\site-packages\aiohttp\web_app.py", line 504, in _handle
    resp = await handler(request)
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\env\lib\site-packages\aiohttp\web_middlewares.py", line 117, in impl
    return await handler(request)
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\server.py", line 43, in cache_control
    response: web.Response = await handler(request)
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\custom_server.py", line 69, in imagerefiner_generate
    result = ir.generate(base_pil.convert('RGB'), mask_pil, prompt_data)
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 171, in generate
    input_data_all = prepare_input(class_def, merged_pil, mask_pil, prompt_data)
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 94, in prepare_input
    model, clip, vae = load_checkpoint(v['checkpoint'])
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 38, in load_checkpoint
    model, clip, vae, _ = comfy_nodes.CheckpointLoaderSimple().load_checkpoint(ckpt_name)
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\nodes.py", line 446, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\comfy\sd.py", line 1331, in load_checkpoint_guess_config
    sd = utils.load_torch_file(ckpt_path)
  File "C:\Users\mikas\pinokio\api\comfyui.pinokio.git\ComfyUI\comfy\utils.py", line 10, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
AttributeError: 'NoneType' object has no attribute 'lower'

Any help is appreciated, thanks!
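
Side note on the traceback: the final frame fails because comfy/utils.py calls ckpt.lower() on a value that is None, i.e. the checkpoint selected in Image Refiner did not resolve to an actual file path. Below is a minimal, self-contained sketch of that failure mode; the function names and the checkpoint table are hypothetical stand-ins for illustration, not ComfyUI's actual code.

```python
# Minimal sketch of the failure mode in the traceback above, assuming only
# that the selected checkpoint name fails to resolve to a path (so the path is None).
from typing import Dict, Optional


def lookup_checkpoint_path(name: str, known: Dict[str, str]) -> Optional[str]:
    # Hypothetical stand-in for checkpoint path resolution: returns None
    # when the requested checkpoint name is not found on disk.
    return known.get(name)


def load_torch_file_like(ckpt: Optional[str]) -> str:
    # Mirrors the shape of the failing call in comfy/utils.py: calling
    # .lower() on None raises AttributeError.
    if ckpt.lower().endswith(".safetensors"):
        return "safetensors"
    return "pickle"


known = {"sd_xl_base_1.0.safetensors": r"C:\models\sd_xl_base_1.0.safetensors"}
path = lookup_checkpoint_path("renamed_or_missing_model.safetensors", known)
try:
    load_torch_file_like(path)
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'lower'
```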

ltdrdata commented 1 year ago

I didn't register Workflow-Component to the default channel, so you simply cannot update it through the Manager.

If you want to update Workflow-Component, there are two ways.

Go to the ComfyUI-Workflow-Component directory in cmd and run

git pull

or

Change the Manager's channel to the dev channel and update all. Don't forget to change back to the default channel.
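
If you prefer to script the first option rather than typing it in cmd, a minimal Python sketch is below; the directory path is an assumption for a typical portable install and must be replaced with your own custom_nodes location.

```python
# Minimal sketch: run `git pull` inside the ComfyUI-Workflow-Component folder.
# The path below is an assumption (typical Windows portable layout); adjust it.
import subprocess
from pathlib import Path

component_dir = Path(r"C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component")

# Equivalent to opening cmd in that directory and running `git pull`.
result = subprocess.run(
    ["git", "pull"],
    cwd=component_dir,
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout or result.stderr)
```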

MaciejStann commented 1 year ago

Thanks for the quick reply! I first tried changing the channel; the Manager found the extension wasn't up to date and updated it, but that didn't fix the problem. So I did the git pull, but after that the option to use the refiner on the image is gone, so I can't use it anymore.

image
ltdrdata commented 1 year ago

> Thanks for the quick reply! I first tried changing the channel; the Manager found the extension wasn't up to date and updated it, but that didn't fix the problem. So I did the git pull, but after that the option to use the refiner on the image is gone, so I can't use it anymore.
>
> image

What is displayed in the browser console log? If you are using Chrome, press F12.

Oh, before that: what messages are displayed in the ComfyUI terminal?

MaciejStann commented 1 year ago
image

Here is a Chrome console screenshot.

image

I will share the console output in a moment, as I'm launching the official portable version of ComfyUI instead of the one with Pinokio.

Edit: Now I can see the "mask-detailer" option as before, but it still doesn't work.

image
C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
** ComfyUI start up time: 2023-09-25 13:18:27.687936

Prestartup times for custom nodes:
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 8192 MB, total RAM 65277 MB
xformers version: 0.0.21
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync
VAE dtype: torch.bfloat16
Using xformers cross attention
Error:
[WinError 1314] A required privilege is not held by the client: 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyLiterals\\js' -> 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\web\\extensions\\ComfyLiterals'
Failed to create symlink to C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\web\extensions\ComfyLiterals. Please copy the folder manually.
Source: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyLiterals\js
Target: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\web\extensions\ComfyLiterals
### Loading: ComfyUI-Impact-Pack (V4.9.6)
### Loading: ComfyUI-Impact-Pack (Subpack: V0.2.2)
### Loading: ComfyUI-Inspire-Pack (V0.17)
### Loading: ComfyUI-Manager (V0.30.5)
### ComfyUI Revision: 1483 [76cdc809] | Released on '2023-09-23'
### Loading: ComfyUI-Workflow-Component (V0.42.1) !! WARN: This is an experimental extension. Extremely unstable. !!
Registered sys.path: ['C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\src\\__init__.py', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\src\\custom_pycocotools', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\src\\custom_oneformer', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\src\\custom_mmpkg', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\src\\custom_midas_repo', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\src\\custom_detectron2', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\src\\controlnet_aux', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\src', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\comfy', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\python_embeded\\lib\\site-packages\\git\\ext\\gitdb', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\python_embeded\\python310.zip', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\python_embeded', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\python_embeded\\lib\\site-packages', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\python_embeded\\lib\\site-packages\\win32', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\python_embeded\\lib\\site-packages\\win32\\lib', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\python_embeded\\lib\\site-packages\\Pythonwin', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Impact-Pack\\modules', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Impact-Pack\\impact_subpack', '../..', 'C:\\Users\\mikas\\OneDrive\\Pulpit\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Workflow-Component']
Fooocus combined KSampler: loaded

Import times for custom nodes:
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\canvas_tab
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_TiledKSampler
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SeeCoder
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyLiterals
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Fooocus_KSampler
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\images-grid-comfy-plugin-main
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
   0.0 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component
   0.3 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
   1.2 seconds: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
Error handling request
Traceback (most recent call last):
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\python_embeded\lib\site-packages\aiohttp\web_protocol.py", line 433, in _handle_request
    resp = await request_handler(request)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\python_embeded\lib\site-packages\aiohttp\web_app.py", line 504, in _handle
    resp = await handler(request)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\python_embeded\lib\site-packages\aiohttp\web_middlewares.py", line 117, in impl
    return await handler(request)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\server.py", line 46, in cache_control
    response: web.Response = await handler(request)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\custom_server.py", line 69, in imagerefiner_generate
    result = ir.generate(base_pil.convert('RGB'), mask_pil, prompt_data)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 174, in generate
    input_data_all = prepare_input(class_def, merged_pil, mask_pil, prompt_data)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 94, in prepare_input
    model, clip, vae = load_checkpoint(v['checkpoint'])
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 38, in load_checkpoint
    model, clip, vae = comfy_nodes.CheckpointLoaderSimple().load_checkpoint(ckpt_name)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\nodes.py", line 476, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 396, in load_checkpoint_guess_config
    sd = comfy.utils.load_torch_file(ckpt_path)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 12, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
AttributeError: 'NoneType' object has no attribute 'lower'
FETCH DATA from: C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
model_type EPS
adm 2816
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
loading new
[DBG] work: 17 (MaskToSEGS) / worklist: []
# of Detected SEGS: 1
[DBG] work: 31 (DetailerForEachPipe) / worklist: []
Detailer: force inpaint
Detailer: segment upscale for ((737, 327)) | crop region (768, 768) x 1.0 -> (768, 768)
loading new
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
!!! Exception during processing !!!
Traceback (most recent call last):
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\workflow_component\execution_experimental.py", line 159, in exception_helper
    task()
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\workflow_component\execution_experimental.py", line 372, in task
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\workflow_component\execution_experimental.py", line 88, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\workflow_component\execution_experimental.py", line 81, in map_node_over_list
    results.append(getattr(obj, func)(**params))
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 297, in doit
    DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for, max_size, seed, steps, cfg,
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 204, in do_detail
    enhanced_pil, cnet_pil = core.enhance_detail(cropped_image, model, clip, vae, guide_size, guide_size_for_bbox, max_size,
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 230, in enhance_detail
    refined_latent = ksampler_wrapper(model, seed, steps, cfg, sampler_name, scheduler, positive, negative,
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 47, in ksampler_wrapper
    nodes.KSampler().sample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 742, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 323, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 311, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Fooocus_KSampler\sampler\Fooocus\patch.py", line 296, in sampling_function_patched
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat,
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Fooocus_KSampler\sampler\Fooocus\patch.py", line 266, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 60, in apply_model
    context = context.to(dtype)
AttributeError: 'list' object has no attribute 'to'

Prompt executed in 2.60 seconds
ERROR: Output slot '9' in '## mask-detailer.ir [c22ce6][-1]' doesn't provide any value.
12
Error handling request
Traceback (most recent call last):
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\python_embeded\lib\site-packages\aiohttp\web_protocol.py", line 433, in _handle_request
    resp = await request_handler(request)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\python_embeded\lib\site-packages\aiohttp\web_app.py", line 504, in _handle
    resp = await handler(request)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\python_embeded\lib\site-packages\aiohttp\web_middlewares.py", line 117, in impl
    return await handler(request)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\server.py", line 46, in cache_control
    response: web.Response = await handler(request)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\custom_server.py", line 69, in imagerefiner_generate
    result = ir.generate(base_pil.convert('RGB'), mask_pil, prompt_data)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 180, in generate
    return process_output(class_def, output_data)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 154, in process_output
    image_pil = tensor2pil(output)
  File "C:\Users\mikas\OneDrive\Pulpit\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 13, in tensor2pil
    return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8))
AttributeError: 'NoneType' object has no attribute 'cpu'
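
A note on these two tracebacks: the first ends in model_base.apply_model calling context.to(dtype) on a plain Python list, which suggests the Fooocus KSampler patch hands over conditioning in an older list-of-tensors form while the newer apply_model expects a single tensor. The second error ('NoneType' object has no attribute 'cpu') looks like a downstream effect: the component produced no output for slot 9, so tensor2pil received None. Below is a minimal sketch of the type mismatch; apply_model_like is a simplified stand-in for illustration (torch assumed available), not ComfyUI's actual code.

```python
# Minimal sketch of the list-vs-tensor mismatch behind
# AttributeError: 'list' object has no attribute 'to'.
import torch


def apply_model_like(context, dtype=torch.float16):
    # Simplified stand-in: newer code expects `context` to be a single
    # tensor and casts it directly.
    return context.to(dtype)


cond = torch.zeros(1, 77, 2048)

apply_model_like(cond)        # works: a single tensor
try:
    apply_model_like([cond])  # fails: older-style list of tensors
except AttributeError as err:
    print(err)  # 'list' object has no attribute 'to'
```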
ltdrdata commented 1 year ago

This is fixed in 99ee0aa79362ce71471b084ff2a3d216d3d39714.

ltdrdata commented 1 year ago

Update again... some of the components remained broken.