The basic reasoning problem should not have objects already known, since that is one more thing to worry about for an LLM that already cannot make plans. If we can make this an option (see https://github.ibm.com/Tathagata-Chakraborti1/nl2flow-ui-backend/issues/3), that would be great. When the generation request has should_objects_known_in_memory=True, there can be items in memory. If it is False, then nothing is in memory.
I think, in terms of code, the only change would be to NOT add all the remaining items to memory after allocation to the parameters. If this produces an error inside NL2Flow (saying unrecognized variables), let me know.
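A minimal sketch of what that change could look like, assuming a hypothetical `GenerationRequest` dataclass carrying the `should_objects_known_in_memory` field; the helper names (`allocate_objects`, `add_to_memory`) are illustrative only and not the actual backend API:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class GenerationRequest:
    # Assumed request flag; name taken from the requirements below.
    should_objects_known_in_memory: bool = False


def add_to_memory(items: List[str]) -> None:
    # Stand-in for whatever the generator does to register known items.
    print(f"adding to memory: {items}")


def allocate_objects(request: GenerationRequest, objects: List[str], parameters: List[str]) -> List[str]:
    # Allocate objects to action parameters first (placeholder logic).
    allocated = objects[: len(parameters)]
    remaining = objects[len(parameters):]

    if request.should_objects_known_in_memory:
        # Only when the flag is True do the leftover objects become known items in memory.
        add_to_memory(remaining)

    # When the flag is False, the remaining objects are simply not added,
    # so the generated problem starts with nothing known in memory.
    return allocated
```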
Minimum Requirements:
- [x] Mark Object's memory state as unknown when should_objects_known_in_memory=False
- [x] Remove available_data from json when should_objects_known_in_memory=False
- [x] Remove description about available_data when should_objects_known_in_memory=False
- [x] Generator handles the should_objects_known_in_memory field
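For the serialization side of the checklist, here is a hedged sketch of how the generated output could drop `available_data` (and its description) and mark object memory state as unknown when the flag is False. All field names (`objects`, `memory_state`, `available_data`, `description`) are assumptions for illustration, not the real output schema:

```python
from typing import Any, Dict


def serialize_problem(problem: Dict[str, Any], should_objects_known_in_memory: bool) -> Dict[str, Any]:
    output = dict(problem)

    if not should_objects_known_in_memory:
        # Objects start with an unknown memory state ...
        for obj in output.get("objects", []):
            obj["memory_state"] = "unknown"

        # ... and available_data plus any description lines about it are omitted.
        output.pop("available_data", None)
        output["description"] = [
            line for line in output.get("description", [])
            if "available_data" not in line
        ]

    return output


# Example with made-up content, showing the flag stripping available_data.
example = {
    "objects": [{"name": "email", "memory_state": "known"}],
    "available_data": ["email"],
    "description": ["The values in available_data: email"],
}
print(serialize_problem(example, should_objects_known_in_memory=False))
```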