xXAdonesXx / NodeGPT

ComfyUI Extension Nodes for Automated Text Generation.
GNU Affero General Public License v3.0

TextGeneration node seems to be ignoring the cache setting #20

Open alessandroperilli opened 11 months ago

alessandroperilli commented 11 months ago

I'm using the TextGeneration node of your suite. It works great, but I'm not certain it follows the cache = false setting. Or, perhaps, I don't understand the intended behavior.

[Screenshot (2023-11-06): the TextGeneration node with cache set to false]

The behavior I want, or would expect, from cache = false is that the node still invokes the LLM even when the submitted input is identical to the previous run. Instead, if nothing in the workflow changes (including the node's input), the node does nothing, regardless of the cache setting.

The only way I can trigger a new generation is either to kill ComfyUI or to toggle the cache value (false to true, or true to false). Irrespective of the actual setting, the change of state forces a new LLM generation.

RaelynLockwood commented 10 months ago

@alessandroperilli I struggled with this for a little bit, until I learned that ComfyUI itself doesn't rerun nodes if all of their inputs are unchanged; usually this is very efficient, so it's not on the custom node. What I ended up doing to resolve this was to add a small random seed to the end of my system message, which caused the inputs to change. Messy, but it worked.
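
For completeness: this is also something a node author can opt out of. ComfyUI's custom-node interface has an optional `IS_CHANGED` classmethod; ComfyUI compares its return value against the previous run and reruns the node when it differs, so returning a value that never compares equal forces re-execution on every queue. A minimal sketch, with made-up input names and type strings since I haven't checked NodeGPT's actual ones:

```python
class AlwaysRerunTextGeneration:
    """Sketch of a node that opts out of ComfyUI's result caching."""

    @classmethod
    def INPUT_TYPES(cls):
        # "LLM" and these input names are placeholders, not NodeGPT's real ones
        return {"required": {"model": ("LLM",),
                             "system_message": ("STRING", {"multiline": True})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "generate"

    @classmethod
    def IS_CHANGED(cls, model, system_message):
        # NaN never compares equal to anything, itself included, so ComfyUI
        # always considers the node changed and re-executes it.
        return float("NaN")

    def generate(self, model, system_message):
        ...  # the actual LLM call would go here
```

A real cache = false option on the node could presumably be wired through this method: return NaN when caching is off, and a stable hash of the inputs otherwise.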

alessandroperilli commented 10 months ago

@RaelynLockwood Despite how I described the issue, I'm quite confident that my ComfyUI seed was changing at every new generation. But I'll double-check, just in case I completely missed this obvious issue.

Re your workaround: if you add a random seed at the end of the system message, won't it be picked up by the LLM, influencing the generation of the response? Or did you write a system message that explicitly asks the LLM to ignore the seed it sees at the end?

Thank you

RaelynLockwood commented 10 months ago

The ComfyUI seed doesn't matter. The point is that the node will only be rerun when its inputs change. The node's inputs are normally just the model and the system message; it doesn't have a seed input.

My workaround, just to be a little clearer, was converting the system message to an input and then appending a random seed to the end of it. So the system message was, "You're a joke maker bot. Random Seed: 621342". From what I found, the chatbot seemed to ignore the random seed, even though it did receive it in the message. If you wanted to invest a little more time in it, you could code a seed input into the node, and that would probably fix this issue.
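
Roughly like this, as a sketch only, with guessed type strings and a hypothetical `call_llm` helper, since I haven't read the node's actual code:

```python
class TextGenerationWithSeed:
    """Sketch of a TextGeneration variant with a dedicated seed input."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "model": ("LLM",),  # placeholder type string
                "system_message": ("STRING", {"multiline": True}),
                # The seed never reaches the LLM; changing it simply makes the
                # node's inputs differ, so ComfyUI re-executes the node.
                "seed": ("INT", {"default": 0, "min": 0, "max": 0xFFFFFFFF}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "generate"

    def generate(self, model, system_message, seed):
        # seed is intentionally unused; it exists only to bust the cache
        return (call_llm(model, system_message),)  # call_llm is hypothetical
```

Wire a randomized primitive into seed and you get a fresh generation on every queue, without polluting the prompt.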

Just explaining why it is what it is. :)


alessandroperilli commented 10 months ago

@RaelynLockwood Interesting that the LLM ignored the seed at the end of the message. I'll give it a try. Thank you!