Closed · rossaai closed this 4 months ago
Thanks for the input! A couple of things you might have missed about current functionality:
There's a `function_registry` json file in the `output/anynode` folder which holds a backup of all of your functions, keyed by the md5 hash of the prompt. This also ties into the chromadb store you might have noticed in the same folder, e.g. for recall of working functions. The registry only holds functions that have executed properly.
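For illustration, here's a minimal sketch of how a prompt-hash-keyed registry like this could work. The function names (`save_function`, `load_function`) and the exact file layout are assumptions for the example, not the actual AnyNode code:

```python
import hashlib
import json
from pathlib import Path

# Default location mentioned above; the real file layout may differ.
REGISTRY_PATH = Path("output/anynode/function_registry.json")

def save_function(prompt: str, source_code: str, registry_path: Path = REGISTRY_PATH) -> str:
    """Store generated source under the md5 hash of the prompt that produced it."""
    key = hashlib.md5(prompt.encode("utf-8")).hexdigest()
    registry = json.loads(registry_path.read_text()) if registry_path.exists() else {}
    registry[key] = {"prompt": prompt, "code": source_code}
    registry_path.parent.mkdir(parents=True, exist_ok=True)
    registry_path.write_text(json.dumps(registry, indent=2))
    return key

def load_function(prompt: str, registry_path: Path = REGISTRY_PATH):
    """Recall previously working code by hashing the same prompt again."""
    key = hashlib.md5(prompt.encode("utf-8")).hexdigest()
    if registry_path.exists():
        return json.loads(registry_path.read_text()).get(key)
    return None
```

Because the key is the md5 of the prompt, the same prompt always maps back to the same stored function, which is what makes recall on working functions possible.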
You can bundle the json files from output with your workflows to let others use your functionality without LLMs. Not only does that save you cost, it makes your node free for those you send the workflow bundle to. All of this is getting refined and is in a state of flux right now, as we are super early in this project (8 days into development).
Node Export is on the roadmap. I'm currently making sure the design of the system for this is correct and ties into the rest of the planned nodes. There are a lot of planned features, and all of them come from users like yourself who care about the project and want to contribute, so thanks a lot! It turns out you are on the same page as everyone else :)
I have a whole list of the planned features I'm working on, where people can give their opinions in the Discord: https://discord.com/channels/1244702057714421881/1244940700747825182/1244990597442310234 ... It's just me coding this, so I have to take the features and order/prioritize them, but I'm getting through the list!
You're right about documentation. I wish I had more time, but it's only been a week since the start of this, and most of my time has been devoted to coding and making a few videos. I'll work on improving the documentation and on making more videos that are informative and clear.
Noted and closed.
Is there a way to run specific Python code in a node? For example, take the output of an AnyNode, copy it, and paste it into a "Python AnyNode" to run that code, or other code that I wrote myself.
I've been looking for a solution to run Python on a node for weeks.
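Something as simple as a node that `exec()`s a pasted snippet would cover my use case. A rough sketch of what I mean (`run_pasted_code` and `generated_function` are just illustrative names here, not an existing API):

```python
def run_pasted_code(code: str, input_value):
    """Execute a pasted snippet that defines generated_function(x), then call it."""
    namespace = {}
    exec(code, namespace)  # the snippet is expected to define generated_function
    return namespace["generated_function"](input_value)

# Example: paste the code an LLM node produced earlier and run it directly.
snippet = """
def generated_function(x):
    return x * 2
"""
result = run_pasted_code(snippet, 21)  # -> 42
```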
Because LLMs are non-deterministic, a node may fail at one moment and then spontaneously work at another. This inconsistency is a real problem, especially when an LLM node generates an incredibly effective solution to a problem but later fails to reproduce the same result.
To address this issue, it would be beneficial to have a feature that allows users to construct a node using the AnyNode and then reuse, or copy and paste, the generated logic for use in other workflows without making additional calls to the LLM server.
This feature would be different from caching the response, as it focuses on reusing the generated logic in any workflow by simply copying it, similar to running Python code directly in a node.
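To make the idea concrete, here is a rough sketch of the fallback behaviour being described. All names are hypothetical, and `llm_generate` stands in for whatever the actual LLM call would be:

```python
def get_callable(prompt, stored_code=None, llm_generate=None):
    """Return runnable logic; call the LLM only when no stored code is supplied."""
    if stored_code is None:
        stored_code = llm_generate(prompt)  # the only path that costs an API call
    namespace = {}
    exec(stored_code, namespace)  # stored code defines generated_function
    return namespace["generated_function"]

# Reuse previously copied logic with no LLM call at all:
double = get_callable(
    "double the input",
    stored_code="def generated_function(x):\n    return x * 2",
)
```

The point is that once working code exists, the `stored_code` path is deterministic and free, unlike regenerating the logic from the prompt each run.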
Implementing this feature would provide several benefits:

- Deterministic results: the same stored logic runs identically every time, instead of depending on the LLM regenerating it.
- Lower cost: no additional LLM calls once the logic works.
- Portability: workflows can be shared with users who have no LLM access.
To implement this feature, consider the following steps:

- Expose the generated Python code of an AnyNode so it can be copied.
- Add a node (or mode) that runs pasted code directly, with no LLM call.
- Allow the saved logic to be bundled with a workflow for reuse elsewhere.