JasonS09 / comfy_sd_krita_plugin

Make AI art between canvas and nodes with Krita.
MIT License

Error returning image when LoRA is used #19

Closed. IanLiotta closed this issue 1 year ago.

IanLiotta commented 1 year ago

Describe the bug

When a LoRA is applied to the prompt, the image generates correctly and appears in the ComfyUI output folder. However, an error is generated and the image is not returned to the Krita canvas.

To Reproduce

Steps to reproduce the behavior:

  1. Go to txt2img
  2. Create a prompt including a LoRA such as add_detail
  3. Start txt2img
  4. See error after inference completes

Screenshots

AttributeError
Python 3.8.1: C:\Program Files\Krita (x64)\bin\krita.exe
Sat Sep 2 00:04:31 2023

A problem occurred in a Python script. Here is the sequence of function calls leading up to the error, in the order they occurred.

C:\Users\mix\AppData\Roaming\krita\pykrita\krita_comfy\client.py in on_image_received(img=b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x0...\xb1W\xd5\x19\xa4Z\x00\x00\x00\x00IEND\xaeB`\x82')

  256     # Check if all images are in the list before sending the response to script.
  257     if len(images_received) == len(images_output):
  258         response = craft_response(images_output, history, names)
  259         self.images_received.emit(response)

response undefined
craft_response = <function Client.receive_images..craft_response>
images_output = ['iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAA...5evP21/1m27F5RN5f/n87/gFvjVFFoQAAAABJRU5ErkJggg==']
history = {'outputs': {'SaveImage': {'images': [{'filename': 'ComfyUI00108.png', 'subfolder': '', 'type': 'output'}]}}, 'prompt': [35, 'ccd3b826-4fac-4662-980b-65f59c1b40df', {'3': {'class_type': 'KSampler', 'inputs': {'cfg': 8.0, 'denoise': 1.0, 'latent_image': ['5', 0], 'model': ['LoraLoader+1', 0], 'negative': ['7', 0], 'positive': ['6', 0], 'sampler_name': 'dpmpp_2m', 'scheduler': 'ddim_uniform', 'seed': 1924407337, 'steps': 32}}, '4': {'class_type': 'CheckpointLoaderSimple', 'inputs': {'ckpt_name': 'jamJustAnotherMerge_v18Pruned.safetensors'}}, '5': {'class_type': 'EmptyLatentImage', 'inputs': {'batch_size': 1, 'height': 512, 'width': 512}}, '6': {'class_type': 'CLIPTextEncode', 'inputs': {'clip': ['LoraLoader+1', 1], 'text': ' rouge the bat \n'}}, '7': {'class_type': 'CLIPTextEncode', 'inputs': {'clip': ['LoraLoader+1', 1], 'text': ''}}, 'ClipSetLastLayer': {'class_type': 'CLIPSetLastLayer', 'inputs': {'clip': ['4', 1], 'stop_at_clip_layer': -2}}, 'LoraLoader+1': {'class_type': 'LoraLoader', 'inputs': {'clip': ['ClipSetLastLayer', 0], 'lora_name': 'add_detail.safetensors', 'model': ['4', 0], 'strength_clip': 1.0, 'strength_model': 1.0}}, 'SaveImage': {'class_type': 'SaveImage', 'inputs': {'filename_prefix': 'ComfyUI', 'images': ['VAEDecode', 0]}}, 'VAEDecode': {'class_type': 'VAEDecode', 'inputs': {'samples': ['3', 0], 'vae': ['VAELoader', 0]}}, 'VAELoader': {'class_type': 'VAELoader', 'inputs': {'vae_name': 'anythingKlF8Anime2VaeFtMse840000_klF8Anime2.safetensors'}}}, {'client_id': '8ee98782-dcfb-4c64-9b67-173c9f32eb21'}, ['SaveImage']]}
names = ['ComfyUI00108.png']

C:\Users\mix\AppData\Roaming\krita\pykrita\krita_comfy\client.py in craft_response(images=['iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAA...5evP21/1m27F5RN5f/n87/gFvjVFFoQAAAABJRU5ErkJggg=='], history={same prompt history dict as in the frame above}, names=['ComfyUI00108.png'])

  229     return {
  230         "info": {
  231             "prompt": history["prompt"][2][DEFAULT_NODE_IDS["ClipTextEncode_pos"]]["inputs"]["text"].strip() + add_loras_from_history(history),
  232             "negative_prompt": history["prompt"][2][DEFAULT_NODE_IDS["ClipTextEncode_neg"]]["inputs"]["text"],
  233             "sd_model": history["prompt"][2][DEFAULT_NODE_IDS["CheckpointLoaderSimple"]]["inputs"]["ckpt_name"],

history = (same prompt history dict as in the frame above)
global DEFAULT_NODE_IDS = {'CLIPVisionEncode': 'CLIPVisionEncode', 'CLIPVisionLoader': 'CLIPVisionLoader', 'CheckpointLoaderSimple': '4', 'ClipSetLastLayer': 'ClipSetLastLayer', 'ClipTextEncode_neg': '7', 'ClipTextEncode_pos': '6', 'ControlNetApplyAdvanced': 'ControlNetApplyAdvanced', 'ControlNetImageLoader': 'ControlNetImageLoader', 'ControlNetLoader': 'ControlNetLoader', 'EmptyLatentImage': '5', ...}
].strip undefined
add_loras_from_history = <function Client.receive_images..add_loras_from_history>

C:\Users\mix\AppData\Roaming\krita\pykrita\krita_comfy\client.py in add_loras_from_history(history={same prompt history dict as in the frames above})

  217     node_inputs = nodes[f"{node_name}+{lora_loader_count}"]["inputs"]
  218     while True:
  219         lora_name = node_inputs["lora_name"].removesuffix(".safetensors")
  220         lora_weight = node_inputs["strength_model"]
  221         output += f"\n<lora:{lora_name}:{lora_weight}>"

lora_name undefined
node_inputs = {'clip': ['ClipSetLastLayer', 0], 'lora_name': 'add_detail.safetensors', 'model': ['4', 0], 'strength_clip': 1.0, 'strength_model': 1.0}
].removesuffix undefined

AttributeError: 'str' object has no attribute 'removesuffix'
args = ("'str' object has no attribute 'removesuffix'",)

The above is a description of an error in a Python program. Here is the original traceback:

Traceback (most recent call last):
  File "C:\Users\mix\AppData\Roaming\krita\pykrita\krita_comfy\client.py", line 258, in on_image_received
    response = craft_response(images_output, history, names)
  File "C:\Users\mix\AppData\Roaming\krita\pykrita\krita_comfy\client.py", line 231, in craft_response
    "prompt": history["prompt"][2][DEFAULT_NODE_IDS["ClipTextEncode_pos"]]["inputs"]["text"].strip() + add_loras_from_history(history),
  File "C:\Users\mix\AppData\Roaming\krita\pykrita\krita_comfy\client.py", line 219, in add_loras_from_history
    lora_name = node_inputs["lora_name"].removesuffix(".safetensors")
AttributeError: 'str' object has no attribute 'removesuffix'
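
The failing call is `str.removesuffix()`, which only exists on Python 3.9 and newer, while the header above shows Krita bundling Python 3.8.1. A minimal backward-compatible sketch of the suffix stripping (the `strip_suffix` helper below is hypothetical, not the plugin's actual code):

```python
def strip_suffix(name: str, suffix: str = ".safetensors") -> str:
    """Drop a trailing suffix in a way that also works on Python 3.8."""
    if hasattr(str, "removesuffix"):
        # Python 3.9+: use the built-in method.
        return name.removesuffix(suffix)
    # Older interpreters (e.g. the 3.8.1 bundled with Krita here): manual fallback.
    if suffix and name.endswith(suffix):
        return name[: -len(suffix)]
    return name


# Example: strip_suffix("add_detail.safetensors") -> "add_detail"
```

With something like this in add_loras_from_history, the `<lora:add_detail:1.0>` tag could still be rebuilt from the history dict on Krita's bundled 3.8 interpreter.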


JasonS09 commented 1 year ago

Hey! It seems you're using an older version of this plugin. This bug has already been resolved; please git pull your local repository to update.
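
If pulling the latest version doesn't clear the error, it can also be worth confirming which interpreter Krita runs plugins under, since `str.removesuffix()` requires Python 3.9 or newer. A quick check from Krita's Scripter console (assuming the usual Tools > Scripts > Scripter entry):

```python
import sys

# Version of the interpreter bundled with Krita; anything below 3.9 lacks str.removesuffix().
print(sys.version)
print("has removesuffix:", hasattr(str, "removesuffix"))
```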

IanLiotta commented 1 year ago

You're right, my mistake.