Connect two or three AI nodes together and give them rules for how and when to reply to one another. Then ask one of them a question. You get about two messages in before prompt errors flood the conversation. Are they meant to be linkable this way, or have I missed something?
This is an early implementation; really appreciate the details/feedback!
I would start with two Ai nodes. There is still work to do to make the Ai node message ecosystem scale. I will look into it for the next update.
There are a number of known issues with this newer feature. Messages can be halted, but disconnecting Ai nodes does not guarantee that one extra message will be prevented from sending. There should be a flag that prevents messages from being sent while the Ai is still responding; instead, they stack up and send once the Ai stops responding. (This also needs a better system for identifying which Ai nodes the messages are received from.)
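A minimal sketch of the kind of guard I mean, in TypeScript (the names `AiNode`, `isResponding`, and `pendingMessages` are hypothetical, not the actual codebase):

```ts
// Hypothetical sketch of a per-node send guard: queue incoming messages
// while the node is responding instead of letting them stack and fire.
interface Message {
  senderId: string; // which Ai node the message came from
  text: string;
}

class AiNode {
  private isResponding = false;
  private pendingMessages: Message[] = [];

  receive(msg: Message) {
    if (this.isResponding) {
      // Queue instead of sending immediately.
      this.pendingMessages.push(msg);
      return;
    }
    void this.respondTo(msg);
  }

  private async respondTo(msg: Message) {
    this.isResponding = true;
    try {
      await this.callModel(msg);
    } finally {
      this.isResponding = false;
      // Drain at most one queued message, so a halt or disconnect
      // between responses can actually break the chain.
      const next = this.pendingMessages.shift();
      if (next) this.receive(next);
    }
  }

  private async callModel(_msg: Message): Promise<void> {
    /* model call goes here */
  }
}
```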
One issue you could be running into is going over the token limit. If that is the case, try lowering the context size in the Ai tab. Otherwise, I will keep looking into this feature, since it is still at an early stage.
What is the error you are getting?
Cool, I've tried with two, but I get the same response once they start conversing:
```
An error occurred while processing your request.
Prompt:
An error occurred while processing your request.
An error occurred while processing your request.
An error occurred while processing your request.
Prompt:
An error occurred while processing your request.
Prompt:
An error occurred while processing your request.
An error occurred while processing your request
```
The only way I've been able to stop them is to close both AI nodes; otherwise each goes into a loop.
This happens two or three messages in and appears in both AI nodes. It seems more like rate limiting, but I can't be sure. Maybe some sort of race condition? I'll try to debug it this evening. Does anything change in terms of the roles (e.g. 'Assistant' or 'Human') when you connect two together?
A nice-to-have, and I know it's early, but it would be great to be able to specify which LLM/model each AI node uses and to select which tools each node has access to.
I will look into replicating this in the morning! I have had similar issues. I think it is due to the token count limit being an approximate rather than exact measure. The console log may display the API token limit error from OpenAi. There are a few ways to fix this, but the easiest would be for me to set safer default limits for context cutoff. OpenAi's memory module update in November may also make a lot of that easier.
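For reference, the kind of safer cutoff I mean looks roughly like this (hypothetical helper; the ~4-characters-per-token figure is only a rough heuristic, not the model's real tokenizer):

```ts
// Hypothetical sketch: approximate token counting with a safety margin,
// so an inexact estimate stays under the real API limit.
const CHARS_PER_TOKEN_ESTIMATE = 4; // rough heuristic for English text
const SAFETY_MARGIN = 0.8; // only use 80% of the advertised limit

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN_ESTIMATE);
}

function fitsWithinLimit(prompt: string, modelTokenLimit: number): boolean {
  return estimateTokens(prompt) <= modelTokenLimit * SAFETY_MARGIN;
}
```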
All that changes when two Ai nodes are connected is that each gets an extra system message telling it that it is connected to other Ai nodes, and to end its response with a question that will be sent to all connected Ai nodes.
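In other words, something along these lines is appended (the exact wording in the code may differ):

```ts
// Hypothetical sketch of the extra system message added when Ai nodes
// are linked. The actual prompt text in the codebase may differ.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function withConnectionNotice(messages: ChatMessage[]): ChatMessage[] {
  return [
    ...messages,
    {
      role: "system",
      content:
        "You are connected to other Ai nodes. End your response with a " +
        "question that will be sent to all connected Ai nodes.",
    },
  ];
}
```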
The token limit issue is related to each Ai node sharing memories. However, ideally these are the first contexts to be cut off, so that any other custom instructions you have connected are preserved at longer context lengths.
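Conceptually, the cutoff order looks like this (illustrative priorities, not the actual code):

```ts
// Hypothetical sketch of priority-based context trimming: shared memories
// get the lowest priority and are dropped first; custom instructions last.
interface ContextChunk {
  text: string;
  priority: number; // lower number = dropped first when over budget
}

function trimContext(chunks: ContextChunk[], tokenLimit: number): ContextChunk[] {
  // Fill the budget starting from the highest-priority chunks.
  const byPriority = [...chunks].sort((a, b) => b.priority - a.priority);
  const kept: ContextChunk[] = [];
  let used = 0;
  for (const chunk of byPriority) {
    const cost = Math.ceil(chunk.text.length / 4); // rough token estimate
    if (used + cost > tokenLimit) continue; // low-priority overflow is cut
    kept.push(chunk);
    used += cost;
  }
  return kept;
}

// Example: shared memories might be priority 0, connected text nodes 1,
// and custom instructions 2, so instructions survive the longest.
```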
You could try one of the higher token-limit models.
Also, you can have different models communicate. To do so, go to the Ai tab and check the checkbox next to the install-local-Ai button (you do not need to install the local Ai unless you want to use it). Once this checkbox is checked, each Ai node will have a dropdown that lets you select between OpenAi and any local models for that node. (At some point, each Ai node could have an entire settings interface for tool and model selection.)

This still offers only a single OpenAi option per Ai node, which means you are still limited to the one OpenAi model determined by your OpenAi model selection in the Ai tab. This could easily change in the next update to allow a specific OpenAi model to be selected for each Ai node. For now, any local model chosen in an Ai node's dropdown can interact with your globally selected (in the Ai tab) OpenAi model; just select OpenAi in the dropdown that appears within each Ai node after checking the localAi checkbox (the first checkbox) in the Ai tab.

The Ui navigation for this could be made less obscure. Once each Ai node can run a different OpenAi model, I might as well always show the model dropdown, rather than offering only the global OpenAi selection or any webLLM option.
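As a rough sketch of where per-node selection could go (hypothetical shape and model names, not the current code):

```ts
// Hypothetical sketch of per-node model configuration, covering both
// provider/model selection and per-node tool access.
type Provider = "OpenAi" | "webLLM";

interface AiNodeSettings {
  provider: Provider;
  model: string; // e.g. an OpenAi model name, or a local webLLM model id
  enabledTools: string[]; // which tools this node may call
}

const defaults: AiNodeSettings = {
  provider: "OpenAi",
  model: "gpt-4", // currently inherited from the global Ai tab selection
  enabledTools: [],
};
```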
I will have some time to refine all of this over the weekend!
@jasgeo75
I am not getting that error, even after a significant number of messages. Did you have any of the tools enabled? I am interested in what error showed up in your console log.
Testing Ai nodes now; there are definitely some adjustments to make.
The main one is making sure the Ai nodes stop responding. They seem to get stuck in a loop that not even halting the response resolves. In fact, clicks seem to be processed even after the node is deleted.
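If stale event handlers are the cause, the fix is probably along these lines (hypothetical sketch; `NodeView` is an illustrative name, and the idea is just to tie every listener to an AbortController):

```ts
// Hypothetical sketch: attach all of a node's event listeners with an
// AbortController signal, so deleting the node detaches them in one call.
class NodeView {
  private controller = new AbortController();

  constructor(private element: HTMLElement) {
    this.element.addEventListener("click", () => this.onClick(), {
      signal: this.controller.signal,
    });
  }

  private onClick() {
    /* halt / respond logic */
  }

  delete() {
    this.controller.abort(); // clicks are no longer processed after deletion
    this.element.remove();
  }
}
```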
Also need to prompt the Ai to ask more interesting questions, as GPT-3 currently defers too much to the user for guidance on topic and still misses some of the connected context. This is much better with GPT-4, of course.