Closed: alessandroperilli closed this issue 9 months ago
It's a good idea. I think the node that I used as a reference to make this already does it: https://github.com/badjeff/comfyui_lora_tag_loader
I didn't play around with it much, but I think it does just that. My issue with it was that I wanted it to display info and examples etc. Please do check the above out; if it doesn't work as expected, let me know and I'll add the feature into this.
My suggestion does the reverse of what the LoRA Tag Loader does. I want to extract the trigger words from the LoRA Info dump and place them into an otherwise empty prompt node (or a debug node for logging purposes, etc.).
I see, let me see what I can do. (hopefully I understood everything correctly. this is day #2 with generative AI for me lol)
I know. I read your intro post on Reddit. I appreciate any time you'll dedicate to this request, if and when you'll feel comfortable with the complexities of the ComfyUI world. Even in its current form, the LoRA Info node is useful, and it's scheduled to be included in my upcoming AP Workflow 6.1.
So it turns out I can't combine 'displaying' functionality and exporting values, so I created a sibling node that does the exporting.
Is this what you had in mind?
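For reference, an exporting sibling node in ComfyUI typically just declares its outputs through `RETURN_TYPES` and returns a matching tuple. A minimal sketch of the pattern, with a hypothetical class name and a stubbed-out metadata lookup (not the actual lora-info code):

```python
def get_lora_info(lora_name):
    # Stand-in for the real metadata lookup in lora_info.py, which reads
    # Civitai metadata. This stub is hypothetical, for illustration only.
    return ("trigger, words", "an example prompt")

class LoraInfoExport:
    """Hypothetical sketch of a sibling node that exports values as outputs."""

    RETURN_TYPES = ("STRING", "STRING")
    RETURN_NAMES = ("trigger_words", "example_prompt")
    FUNCTION = "export"
    CATEGORY = "utils"

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"lora_name": ("STRING", {"default": ""})}}

    def export(self, lora_name):
        trigger_words, example_prompt = get_lora_info(lora_name)
        # ComfyUI expects a tuple whose shape matches RETURN_TYPES
        return (trigger_words, example_prompt)
```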
Yes, this works fine (although I think your outputs are inverted: it seems that the trigger_words output is outputting the example prompt and vice versa). It would be great to not have the opening and closing quotes around each output, but it's a minor issue.
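The stray quotes usually mean the metadata string was passed along with its serialized quotation marks intact; stripping one surrounding pair before returning the value would fix it. A minimal sketch (the helper name is hypothetical):

```python
def strip_outer_quotes(value: str) -> str:
    """Remove one pair of surrounding quotes, if present (illustrative helper)."""
    if len(value) >= 2 and value[0] == value[-1] and value[0] in ('"', "'"):
        return value[1:-1]
    return value

print(strip_outer_quotes('"masterpiece, best quality"'))  # masterpiece, best quality
print(strip_outer_quotes("no quotes here"))               # no quotes here
```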
However, and exclusively FYI, I think it's possible to both display and output at the same time. Look for example at the textDebug node in the ttN suite:
oh sweet, thx for the info, I'll check it out
Super. It works great! Thanks for all the time you dedicated to this during your weekend.
UPDATE:
The node generates the following error for that particular LoRA I flagged in the other issue:
```
File "xyz/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "xyz/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "xyz/Tools/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "xyz/ComfyUI/custom_nodes/lora-info/lora_info.py", line 118, in lora_info
    (output, triggerWords, examplePrompt, baseModel) = get_lora_info(lora_name)
File "xyz/ComfyUI/custom_nodes/lora-info/lora_info.py", line 64, in get_lora_info
    trainedWords = ",".join(model_info.get("trainedWords"))
TypeError: can only join an iterable
```
Whoops, accidentally reintroduced it in the refactor. I've fixed it in the latest push.
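For context, the traceback points at `",".join(model_info.get("trainedWords"))`: when the metadata has no `trainedWords` field, `.get()` returns `None`, and `str.join` then raises "can only join an iterable". A defensive sketch of the fix (the function name is illustrative):

```python
def join_trained_words(model_info: dict) -> str:
    """Join trigger words from Civitai-style metadata, tolerating a missing key.

    model_info.get("trainedWords") returns None when the field is absent,
    and ",".join(None) raises TypeError. Defaulting to an empty list avoids it.
    """
    return ",".join(model_info.get("trainedWords") or [])

print(join_trained_words({"trainedWords": ["1girl", "style_x"]}))  # 1girl,style_x
print(join_trained_words({}))  # prints an empty line, no exception
```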
Now it works fine, thank you. At this point, you might consider adding your node to the DB list of ComfyUI Manager. So other people who didn't see the Reddit announcement will find it.
Sweet. Yeah, I will soon. I'm going to investigate a custom node I saw that reacts to 'input change' without the user having to queue the workflow. I'd like for lora info to execute and show info as soon as the user selects a lora instead of having to hit queue.
I think I'm also going to add another "quality of life" node, like a 'prompt' selector, where the user can just select and use pre-made prompts, e.g. Universal Negative Prompt, Face Test (which tests a workflow for face-detailing accuracy), Hand Test, or something. It'd be hooked up to GitHub so community members can add/share their prompts.
I've already created a half-built node that does this for me.
Will probably end up adding more nodes like this as I build out my workflow.
I've never seen a node that automatically reacts to state changes without queuing the generation, and I've seen hundreds. But if such a node exists and can be done, it would be amazing.
Re the prompt selector, there are dozens of them. You can find them by searching for "prompt" on this page.
I don't like any of them because even if you can see the list of prompt presets from dropdown menus, you never remember exactly what's in each preset. So I ended up creating a very large visual prompt builder for the AP Workflow. I prefer to see the full extent of a prompt before choosing it.
done ;)
Now it reacts to user input and updates/displays the output and base model immediately instead of having to queue.
wrt the prompt selector, and given the above update (i.e. the ability to update UI elements based on user input), I can probably put together a prompt selector node that shows what's in the preset.
I was also playing around building a workflow and started to feel the need for an "Any Switch". There are some switches in rgthree, but they are type-specific and require a boolean input. I was thinking of a node that acts like a switch, with a slider toggle instead of requiring an input: when the switch is "off" it outputs whatever is in input_a, and when it's "on" it outputs input_b.
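A widget-driven switch along those lines could look something like this sketch, in ComfyUI's node-class style (the class name, category, and the "*" any-type convention are assumptions, not an actual implementation):

```python
class AnySwitchToggle:
    """Hypothetical ComfyUI node: a boolean widget picks one of two inputs.

    The "*" type string is a common community trick for accepting any
    connection type; this is a sketch, not shipped code.
    """

    RETURN_TYPES = ("*",)
    FUNCTION = "switch"
    CATEGORY = "utils"

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {"on": ("BOOLEAN", {"default": False})},
            "optional": {"input_a": ("*",), "input_b": ("*",)},
        }

    def switch(self, on, input_a=None, input_b=None):
        # "off" passes input_a through; "on" passes input_b
        return (input_b if on else input_a,)
```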
Remarkable. And thank you! If this can now read the selected LoRA model from an Efficient Node and a default LoRA Loader checkpoint, as we discussed, it becomes a must have for many, many people.
Rgthree has a node called Any Switch, which defaults to the first non-null value. That's my preference over all the other switches I tried so far because it requires no manual reconfiguration.
Also, in the Impact Pack, there's a Switch (Any), which allows for infinite input pins and has an embedded selector (no need for a boolean value as input). That is my second favorite switch.
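For reference, the first-non-null behavior described above boils down to a one-liner. A sketch of the selection logic (not rgthree's actual code):

```python
def first_non_null(*inputs):
    """Return the first input that is not None, mimicking the described
    Any Switch behavior; returns None if every input is empty."""
    return next((x for x in inputs if x is not None), None)

print(first_non_null(None, None, "fallback"))  # fallback
print(first_non_null(None, 0, "x"))            # 0  (0 is a value, not null)
```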
Ah perfect, thanks!
I've been playing around with trying to get a lora_name output in a "non-hacky" way, with no luck. I think what I'll have to do is give the user an option.
If the user selects option 2, then (because there can be multiple different loras in multiple different nodes) there will be a selected_index option.
Before you try either implementation, I'd suggest you check the code behind this feature:
You summon this popup by right-clicking on the stock Lora Loader node and selecting View Info:
Your implementation is much more useful, IMO, but maybe that code could be useful in this case?
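For the multiple-LoRAs case, one possible approach is to scan the API-format prompt (the `{node_id: {"class_type": ..., "inputs": {...}}}` mapping ComfyUI sends to the backend) and let a selected_index pick among the matches. A sketch only, assuming loader nodes expose a literal `lora_name` input:

```python
def find_lora_names(prompt: dict) -> list:
    """Collect lora_name values from an API-format workflow prompt.

    Hypothetical helper: assumes any node whose class_type contains "Lora"
    is a loader with a string "lora_name" input. A selected_index widget
    could then pick one entry from the returned list.
    """
    names = []
    for node in prompt.values():
        if "Lora" in node.get("class_type", ""):
            name = node.get("inputs", {}).get("lora_name")
            if isinstance(name, str):
                names.append(name)
    return names

# Illustrative two-node workflow fragment (node ids and values made up)
workflow = {
    "4": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "detail_tweaker.safetensors",
                     "strength_model": 1.0}},
    "7": {"class_type": "CLIPTextEncode", "inputs": {"text": "a photo"}},
}
print(find_lora_names(workflow))  # ['detail_tweaker.safetensors']
```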
where is this node from? None of the loaders I have show that context menu option.
It's not a property of any specific node, but a capability that applies to the stock ComfyUI LoraLoader node when you install the Custom Scripts suite.
It would be great if the Lora Info node would be able to export just the trigger words (rather than the entire output) to an output pin. In that way, I could further manipulate and automatically insert them in other nodes.
Thank you for considering my suggestion.