Open yamatazen opened 1 week ago
It's something I was experimenting with; it turns out it's unnecessary and we can just use the same model for both functions, so I removed the option.
The idea is that since Gemma is a generic LLM, it can be used for things other than encoding embeddings for Lumina. I added this node to experiment with; possible use cases include prompt improvement, etc.
Text generation in ComfyUI?
I was actually going to suggest the same thing: as long as we have the model loaded, is it possible to use it to enhance prompts for Lumina?
Can you explain what you mean by "we can just use the same model"? I would like to use that function, but since you removed the node, I can't use the "gemma_model" output in anything but the "lumina gemma text encode" node. Can I use some other LLM node to load Gemma and use it for encoding instead of your loader?
I mean I initially thought you couldn't use the "GemmaForCausalLM" model from transformers for Lumina, which is why I made the switch, but it turns out it works the same as the "GemmaModel" class that transformers' "AutoModel" assigns, which is how it was in the original Lumina code.
And by "model" I mean the model object in the code; the weights were always the same.
Anyway, all this means is that we can simply use the same loader node for both purposes. Are you saying you don't have the GemmaSampler node?
Yeah, could not find it. Updated and refreshed everything and now I see it. Thanks!
Where does the Show Text node come from?
What does the LLM mode do?