kijai / ComfyUI-LuminaWrapper

MIT License

LLM mode? #22

Open yamatazen opened 1 week ago

yamatazen commented 1 week ago

What does the LLM mode do?

kijai commented 1 week ago

It's something I was experimenting with. It turns out it's unnecessary and we can just use the same model for both functions, so I removed the option now.

The idea is that since Gemma is a generic LLM, it can be used for things other than encoding embeds for Lumina. I have this node to experiment with; possible use cases could be prompt improvement, etc. (screenshot of the node attached)

yamatazen commented 1 week ago

Text generation in ComfyUI?

sdk401 commented 1 week ago

I was actually going to suggest the same thing: as long as we have the model loaded, is it possible to use it to enhance the prompt for Lumina?

> It's something I was experimenting with. It turns out it's unnecessary and we can just use the same model for both functions, so I removed the option now.

Can you explain what you mean by "we can just use the same model"? I would like to use that function, but since you removed the node, I can't use the "gemma_model" output with anything but the "lumina gemma text encode" node. Can I use some other LLM node to load Gemma and use it to encode instead of your loader?

kijai commented 1 week ago

> I was actually going to suggest the same thing: as long as we have the model loaded, is it possible to use it to enhance the prompt for Lumina?
>
> > It's something I was experimenting with. It turns out it's unnecessary and we can just use the same model for both functions, so I removed the option now.
>
> Can you explain what you mean by "we can just use the same model"? I would like to use that function, but since you removed the node, I can't use the "gemma_model" output with anything but the "lumina gemma text encode" node. Can I use some other LLM node to load Gemma and use it to encode instead of your loader?

I mean that I initially thought you couldn't use the "GemmaForCausalLM" model from transformers for Lumina, which is why I made the switch. But it turns out it works the same as the "GemmaModel" that the transformers "AutoModel" assigns, which is how it was in the original Lumina code.

And by model I mean the model object in the code, the weights were always the same.

Anyway, all this means is that we can simply use the same loader node for both purposes. Are you saying you don't have the GemmaSampler node? (screenshot attached)

sdk401 commented 1 week ago

> Are you saying you don't have the GemmaSampler node?

Yeah, I could not find it. I updated and refreshed everything, and now I see it. Thanks!

yamatazen commented 6 days ago

Where does the Show Text node come from?

kijai commented 6 days ago

> Where does the Show Text node come from?

https://github.com/pythongosssss/ComfyUI-Custom-Scripts