cold-hand / ComfyUI-FlashFace

ComfyUI Node for FlashFace
MIT License

list object has no attribute movedim #8

Open GraftingRayman opened 4 months ago

GraftingRayman commented 4 months ago

When passing the image output onto an upscaler, I get the following error:

[screenshot: error traceback]

ChatGPT states the following:

The error message 'list' object has no attribute 'movedim' suggests that you are trying to use the movedim method on a list object, which is not supported because lists in Python do not have a movedim method. This error commonly occurs when trying to manipulate the dimensions of a list of tensors.

To resolve this issue:

Check Variable Types: Ensure that the object you are trying to manipulate with the movedim method is indeed a tensor and not a list.

Inspect Data Flow: Trace back the source of the list object and verify its origin. Ensure that the object is properly created and manipulated before attempting to use tensor-specific methods on it.

Debugging: Insert print statements or use a debugger to inspect the type and shape of the object just before the error occurs. This can help identify where the unexpected list object is coming from and why it does not have the expected tensor attributes.

Review Documentation: If you are unsure about the type of object expected at a certain point in your code, refer to the documentation or code comments to clarify the expected data types and formats.

Once you identify the root cause of the issue and correct it, you should be able to resolve the 'list' object has no attribute 'movedim' error. If you need further assistance, please provide more context or code snippets for a more specific analysis.
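
Following the debugging suggestion above, here is a minimal sketch of that check. It assumes the failing object is a Python list of torch tensors (as the error suggests) and uses a hypothetical helper name; the `movedim(-1, 1)` call mirrors the (B, H, W, C) -> (B, C, H, W) conversion that upscale nodes typically perform:

    import torch

    def ensure_batched_tensor(images):
        # Inspect what we actually received; the error implies a list rather than a tensor.
        print(type(images))
        if isinstance(images, list):
            # Stack a list of (H, W, C) tensors into a single (B, H, W, C) tensor.
            images = torch.stack(images, dim=0)
        # Tensor methods such as movedim are now available.
        return images.movedim(-1, 1)  # (B, H, W, C) -> (B, C, H, W)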

GraftingRayman commented 4 months ago

From the errors other upscalers report, the output from the generator is a tuple, which is not supported by upscalers. Will take a look and see if I can fix it.

cold-hand commented 4 months ago

Awesome, thanks!

Thater commented 4 months ago

You have to torch.cat the output images before you return them:

        # Assumes: import torch; import torchvision.transforms.functional as F
        tensor_list = []
        for img in imgs_pil:
            # PIL image -> float tensor in [0, 1] with shape (C, H, W)
            img_tensor = F.to_tensor(img)
            # Rearrange to (H, W, C) and add a batch dimension -> (1, H, W, C)
            img_tensor = img_tensor.permute(1, 2, 0).unsqueeze(0)
            tensor_list.append(img_tensor)
        # Concatenate along the batch dimension -> (B, H, W, C)
        torch_imgs = torch.cat(tensor_list, dim=0)

        return (torch_imgs,)

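For context, ComfyUI IMAGE tensors are batched (B, H, W, C) float tensors with values in [0, 1], which is why the permute and unsqueeze are needed before concatenating. The same conversion as a self-contained helper (hypothetical name, just a sketch):

    import torch
    import torchvision.transforms.functional as F

    def pil_list_to_comfy_images(imgs_pil):
        """Convert a list of same-sized PIL images to one (B, H, W, C) float tensor in [0, 1]."""
        tensors = [F.to_tensor(img).permute(1, 2, 0) for img in imgs_pil]  # each (H, W, C)
        return torch.stack(tensors, dim=0)                                 # (B, H, W, C)
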
GraftingRayman commented 4 months ago

Thank you @Thater, that worked

The only issue now is that when the output is passed on to VAE Encode for some KSampler processing, it comes up with a different error. This happens when the VAE Encode node is connected directly to the generator.

[screenshots: error from VAE Encode when connected directly to the generator]

GraftingRayman commented 4 months ago

Will have to look at vae encoder to see what type of input it takes
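
For reference, ComfyUI's built-in VAE Encode node expects a (B, H, W, C) IMAGE tensor and a VAE object exposing an encode() method. Roughly paraphrased from ComfyUI's nodes.py (details vary by version), it does something like this:

    # Simplified paraphrase of ComfyUI's VAEEncode node; not the FlashFace code.
    class VAEEncode:
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {"pixels": ("IMAGE",), "vae": ("VAE",)}}

        RETURN_TYPES = ("LATENT",)
        FUNCTION = "encode"

        def encode(self, vae, pixels):
            # pixels: (B, H, W, C) float tensor; only the RGB channels are encoded.
            t = vae.encode(pixels[:, :, :, :3])
            return ({"samples": t},)

So if the VAE returned by the FlashFace loader does not implement that encode() interface, an error like this would be expected.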

GraftingRayman commented 4 months ago

So I got a bit further in testing this: upscaling works, and VAE Encode works (but only if a different VAE is used).

To me it seems the way the model is implemented, along with the VAE, is the cause of the problem here, which can be seen when running the KSampler with the model.

Once the model or VAE from the FlashFace model loader is involved, it craps out.

[screenshots: errors when using the model/VAE from the FlashFace model loader]

GraftingRayman commented 4 months ago

Another issue I found was the JavaScript file in the web folder. With the dynamic input images being populated by it, it causes issues when connecting/disconnecting noodles from the detector node.

I would just get rid of that altogether.

cold-hand commented 4 months ago

@GraftingRayman the dynamic inputs are a bit wonky; I could remove them and allow either a single image, an image list, or an image batch as input instead.
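
For example, a minimal sketch of a node signature that takes a plain image batch instead of dynamic inputs, following standard ComfyUI conventions (the class name and details here are hypothetical, not the actual FlashFace node):

    class FlashFaceReferenceImages:  # hypothetical example, not the real node
        @classmethod
        def INPUT_TYPES(cls):
            return {
                "required": {
                    # A single IMAGE socket; ComfyUI batches images as (B, H, W, C),
                    # so this accepts one image, a batch, or a list converted to a batch.
                    "images": ("IMAGE",),
                }
            }

        RETURN_TYPES = ("IMAGE",)
        FUNCTION = "passthrough"
        CATEGORY = "FlashFace"

        def passthrough(self, images):
            return (images,)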

GraftingRayman commented 4 months ago

Just remove the dynamic inputs; it works fine as it is without them (check my screenshots regarding the masks). You can use a joiner to join lots of images or load a batch, and you can even load a list with a List to Batch node in between.

This helps troubleshooting, as you can see what dimensions the outputs are. With the dynamic inputs it tries to connect to the dynamic input and creates a loop.

cold-hand commented 4 months ago

Dynamic inputs removed